About Enode
As the energy landscape evolves, the transition towards renewable energy sources is crucial yet progressing slowly. Enode is at the forefront of this transformation, delivering innovative and interconnected solutions.
Enode’s advanced platform empowers a new generation of sustainable energy applications by optimizing the connectivity of energy devices through our robust APIs. From electric vehicles to heat pumps, we facilitate flexible energy consumption that emphasizes renewable resources.
Collaborating with top-tier energy corporations and green-technology pioneers, we reach over 250 million consumers, enabling them to adopt more sustainable energy practices.
In light of the pressing need for decarbonization, Enode is more crucial than ever, backed by prominent investors such as Y Combinator, Lowercarbon Capital, and Creandum.
We are just beginning our journey and are on the lookout for innovative builders to join our mission-driven and passionate team.
What We Are Looking For:
We are seeking a Data Engineer with a passion for data science to take ownership of and advance Enode’s data platform while collaborating on forecasting and analytics for our Flex product. Your role will ensure that as we grow, our data remains reliable, accessible, and trustworthy across Flex delivery, billing, customer insights, and executive reporting.
Working closely with our Data Scientist, Håkon Kongelf, you will influence the delivery of Flex and the next phase of our data platform. You will also engage with engineers across product teams, as well as commercial teams and leadership, to enhance data access and insights.
Your contributions will be significant: as part of a small team, your decisions will shape our approach to data across the organization.
In this role, you will:
Meet Flex-critical data needs: Design and maintain dependable pipelines and datasets that support Flex models (e.g., demand/availability signals, aggregations, monitoring).
Evolve the data platform: Evaluate existing architectures and drive practical enhancements in architecture, tooling, and operational practices.
Own data quality and trust: Establish testing, lineage definitions, and safeguards (e.g., dbt tests, anomaly detection, freshness checks) to ensure stakeholders can rely on outputs.
Enable self-serve analytics: Develop well-structured datasets that allow teams to independently extract insights.
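To make the data-quality responsibilities above concrete, here is a minimal, purely illustrative sketch of the kind of freshness check the role describes. It is not Enode's actual implementation; the function name, threshold, and table scenario are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_loaded_at: datetime, max_age: timedelta) -> bool:
    """Flag a dataset as stale when its most recent load exceeds the
    allowed age (a freshness SLA). Hypothetical helper, not Enode code."""
    return datetime.now(timezone.utc) - last_loaded_at > max_age

# Example scenario: a table last loaded 3 hours ago, against a 1-hour SLA.
last_load = datetime.now(timezone.utc) - timedelta(hours=3)
print(is_stale(last_load, timedelta(hours=1)))  # stale: loaded past the SLA
```

In practice, checks like this are typically declared in tooling such as dbt source freshness configs rather than hand-rolled, but the underlying comparison is the same.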
