Engineering Manager - Data Platform
Experience Level
Manager
About Canonical
Canonical is at the forefront of providing innovative data solutions that empower enterprises to operate efficiently across diverse environments. Our commitment to quality and excellence drives us to develop cutting-edge technologies that redefine the data landscape. By fostering a culture of collaboration and continuous improvement, we aim to attract top talent to help us achieve our vision of becoming the leading provider of data solutions worldwide.
Similar jobs
Canonical
Entry-Level Data Engineer
Join our innovative team at Canonical where your data analytics and data mining talents will play a pivotal role in shaping the future of marketing technology. We seek individuals who are intrigued by technology adoption trends, value visitor data privacy, and appreciate the power of open source in marketing. If you are a marketing d…
Prompt
Job Title: Senior Data Engineer

At Prompt, we are passionate about leveraging the power of data to enhance healthcare outcomes. We are on the lookout for a highly skilled Senior Data Engineer to spearhead our innovative data initiatives. Your expertise will be pivotal in transforming complex data into actionable insights that inform strategic decision-making.

Prompt is a trailblazing startup dedicated to providing cutting-edge, automated B2B software solutions for rehabilitation therapy businesses. Our platform is not just a tool; it's an integral part of the teams and patients we serve, helping them achieve remarkable results. As we continue to expand our market presence, our commitment to delivering software that users love remains unwavering.

As we scale our operations, data is increasingly at the heart of our decision-making processes. We seek a Senior Data Engineer who can collaborate effectively across teams, focusing on developing and maintaining robust data pipelines and performing sophisticated data transformations. A strong aptitude for hypothesis generation, data modeling, and analysis is crucial for guiding both client and internal decision-making.

This role will contribute to the creation of impactful data products, enhancing outcomes for our clients and their patients, while also driving efficiency within our Product, Client Success, Growth, and Operations teams. This is a high-impact opportunity where you will take ownership of building iterative systems, working closely with various stakeholders to ensure our data products deliver substantial value.

Responsibilities
- Champion Prompt's mission to revolutionize healthcare through actionable, high-quality data that enhances decision-making for clients and internally.
- Lead the design, development, and continuous improvement of intricate data systems, from raw data ingestion through to transformation, modeling, and downstream analysis.
- Conduct exploratory analyses and hypothesis-driven investigations to extract meaningful insights that inform both client-facing and internal strategies.
- Utilize advanced AI and LLM-powered tools (e.g., code assistants, agents, automation frameworks) to expedite data transformation and analysis while ensuring reliability, security, and maintainability.
- Create and refine well-structured datasets, complex transformations, and performance metrics.
- Design and implement stringent data quality checks, continuous monitoring, and observability to guarantee data correctness, freshness, and trustworthiness.
- Collaborate with stakeholders to comprehend client and internal workflows, establish success metrics, and drive the execution of our data strategy.
Zencore
Join Zencore, a rapidly expanding company founded by ex-Google Cloud leaders, as a Senior Data Engineer. You'll partner with innovative businesses, guiding them in their transition to Google Cloud while tackling data modernization challenges. Your expertise in ETL/ELT, Data Warehousing, Distributed Data Processing, and Streaming Data will be essential as you collaborate closely with client teams to drive their data-driven transformations.
Canonical
Canonical, a trailblazer in open source software and operating systems, is on the hunt for passionate Junior Software Engineers to join our thriving Ubuntu Engineering team. As a globally recognized platform, Ubuntu empowers groundbreaking initiatives across public cloud, data science, AI, engineering, and IoT. Our vast clientele includes top-tier public cloud and silicon providers, as well as industry leaders across diverse sectors. Embracing a culture of global collaboration, we boast a team of over 1,200 professionals in more than 75 countries, with minimal office-based roles. Our engineering teams convene two to four times a year in exciting locations worldwide to strategize and align on key objectives. Join us as we continue to grow and innovate in a founder-led, profitable environment. We are seeking enthusiastic junior engineers who prioritize quality, performance, and resilience in software. Ideal candidates will thrive in designing and engineering new software, while also being adept at packaging, integrating, testing, and delivering an impressive array of open source software available in the Ubuntu repositories. You will play a crucial role in integrating the latest open source solutions, ensuring robust upgrade paths, and shaping the future of Ubuntu. Our engineering community embodies a rich diversity of experience, from students and hobbyists to seasoned professionals in corporate and academic settings. You will have the opportunity to work across four key teams that contribute to Ubuntu's success: Foundations, Server, Desktop, and Debcrafters, each with unique responsibilities ranging from maintaining foundational software to innovating for the future of our desktop and server distributions. Bring your existing skills and embrace new learning opportunities while contributing to one of the most influential open source projects in the world.
Join Binance, the world's leading blockchain ecosystem, as a Senior Data Engineer. You will play a crucial role in enhancing our data infrastructure, ensuring efficient processing and analysis of vast amounts of data. Collaborate with cross-functional teams to design robust data pipelines, optimize data flow, and contribute to innovative solutions that drive our digital asset offerings. Your expertise will be essential in maintaining our competitive edge in the cryptocurrency market.
Alt is revolutionizing the landscape of alternative assets, beginning with the booming $5 billion trading-card market. We empower collectors to buy, sell, store, and finance their cards all in one platform, supported by influential leaders at Stripe, Coinbase, Seven Seven Six, and renowned athletes like Tom Brady and Giannis Antetokounmpo. Our next big goal is to implement real-time pricing at scale: the Alt Value that drives every transaction, loan, and product on our platform.

The Role
Are you a passionate data engineer excited about creating robust data pipelines and tackling intricate data challenges? In this position, you will oversee Alt's vital data infrastructure that underpins our pricing model and marketplace insights by effectively ingesting transaction and listing data from a variety of external marketplaces. Your primary responsibility will be to ensure that our data pipelines deliver timely, accurate information that informs pricing strategies, market analytics, and business intelligence across the platform.

What You Will Do Here
- Design, optimize, and manage data pipelines that scrape, process, and ingest transaction and listing data from major auction houses and marketplaces.
- Develop comprehensive monitoring and alert systems to track latency, uptime, and coverage metrics across all data sources.
- Continuously enhance our data infrastructure by modernizing storage and processing technologies, minimizing manual interventions, and optimizing for cost, performance, and reliability.
- Collaborate with internal teams to comprehend data usage patterns and ensure pipelines deliver clean, standardized data that meets product specifications.

This Position Is Ideal for You If You...
- Value data quality and recognize that extraction is just the beginning; the real value lies in delivering clean and usable data.
- Flourish in a startup environment where your contributions have immediate impact and you can take ownership of critical systems from start to finish.
- Prefer a step-by-step approach to enhancements, favoring evidence-based decisions over complete system replacements.
- Are intellectually curious about data flow within systems and eager to explore automation opportunities.
- Desire to work at the intersection of data engineering and product, understanding how your pipelines directly influence business outcomes.
Ekumen Labs
Join our dynamic team at Ekumen Labs as a Machine Learning Data Engineer. In this role, you will leverage your expertise in machine learning and data engineering to design and implement robust data pipelines that facilitate the deployment of machine learning models. Your contributions will be crucial in transforming raw data into valuable insights that drive business decisions and enhance the efficiency of our operations.
Join OXIO as a Staff Data Platform Engineer

At OXIO, we are pioneering the telecom-as-a-service (TaaS) revolution, empowering brands and enterprises to create and manage proprietary mobile networks tailored to their specific customer needs. Our innovative TaaS solution consolidates multiple networks into a single, cloud-managed platform, enhancing accessibility and operational efficiency for businesses. With comprehensive network access, we provide unparalleled business intelligence that enables enterprises to gain deeper insights into customer and machine behavior. By emphasizing continuous innovation, OXIO allows any company to establish a robust telecom presence while extracting unique customer insights like never before.

About the Role:
We are on the lookout for an experienced Staff Data Engineer to spearhead the design, development, and scaling of our advanced data platform. This position is perfect for individuals who excel at creating and architecting resilient data systems. You will play a key role in defining our data infrastructure, establishing governance frameworks, and developing scalable APIs that facilitate both real-time and batch analytics. In this capacity, you will be instrumental in assessing, designing, and transitioning our organization toward a visionary data architecture: a forward-thinking foundation that ensures scalable, secure, and intelligent data operations throughout the enterprise. This position encompasses a diverse array of analytics applications across telecom networking, product intelligence, financial reporting, and insights from both internal and external data. You will contribute to the development of a comprehensive Customer 360 platform, leveraging machine learning models and behavioral data to enable advanced functionalities such as fraud detection, brand intelligence, and personalized customer interactions.

Key Responsibilities:
- Architect, construct, and enhance a unified data platform that integrates both internal and external data sources into data lakes and warehouses.
- Design and execute streaming and batch data pipelines utilizing tools such as Spark, Airflow, and dbt.
- Lead infrastructure deployment using Terraform and Kubernetes, ensuring that our deployments are both scalable and secure.
- Evaluate and guide the migration to our visionary data architecture, aligning platform capabilities with the long-term strategic objectives of the business.
- Collaborate with cross-functional teams to establish logging standards and data governance practices.
#PoweringYourIngenuity

At Ekumenlabs, our mission is to connect world-class technology companies with exceptional engineering talent from every corner of the globe. With a strong presence in LATAM, the USA, and Europe, we empower businesses by assembling remote engineering teams tailored to each project's unique requirements.

Our teams are driven by a passion for technology and are eager to tackle challenges head-on. We value technical expertise along with a commitment to continuous learning. Each development is customized to meet project needs, and our culture encourages the exploration of new languages, tools, and frameworks. Our software engineering teams prioritize best coding practices to ensure that our system designs and developments are readable, reusable, and scalable.

We are currently seeking a talented and motivated Data Engineer to join our dynamic team. In this role, you will contribute to maintaining and enhancing a critical data ingestion pipeline that supports essential decision-making processes and ensures our teams work with reliable, high-quality data. You will collaborate closely with data scientists, engineers, and various stakeholders, ensuring smooth data flow, enhancing observability, and continuously seeking ways to improve efficiency.
Join AB InBev as a Data Engineer within our innovative BEES Personalization team. As a key player in our digital transformation journey, you will implement robust ETL/ELT solutions, design scalable data pipelines, and ensure data security while collaborating across teams to drive insights and efficiency. Your contributions will directly enhance our customer experiences and business profitability.
GitLab is a pioneering open-core software company that offers an extensive AI-driven DevSecOps Platform, embraced by over 100,000 organizations globally. Our mission is to empower every individual to contribute to and collaboratively shape the software that drives our world forward. By fostering an environment where everyone can contribute, we transform consumers into contributors, significantly propelling human advancement. Our platform seamlessly integrates teams and organizations, dismantling obstacles and redefining possibilities within software development. With innovative products such as Duo Enterprise and Duo Agent Platform, our customers leverage AI benefits throughout every stage of the Software Development Life Cycle (SDLC).

The principles embedded in our products are mirrored in our team dynamics: we embrace AI as a fundamental productivity enhancer, encouraging all team members to integrate AI into their daily workflows to enhance efficiency, innovation, and impact. At GitLab, we cultivate an environment where careers flourish, innovation thrives, and every voice is acknowledged. Our high-performance culture is guided by our values and a commitment to continuous knowledge sharing, enabling team members to unlock their full potential while collaborating with industry leaders to tackle complex challenges. Join us in co-creating the future as we revolutionize how the world develops software.

Position Overview
As a Lead Principal Database Engineer, you will play a pivotal role in designing and steering the evolution of the PostgreSQL infrastructure that underpins GitLab.com and thousands of self-managed enterprise deployments. You will tackle critical challenges related to uncontrolled data growth and complex upgrades and migrations, while ensuring unwavering reliability at a global scale. Your focus will be on creating database patterns and platforms that maintain GitLab's speed, resilience, and cost-effectiveness as usage expands. You will architect scalable, distributed database solutions, develop proactive health and reliability frameworks, and advocate for the adoption of modern database technologies and data stores that enhance both product capabilities and production stability. Engaging hands-on within the codebase and collaborating closely with product and infrastructure teams, you will translate long-term database strategies into incremental, customer-visible enhancements, transition incident response from reactive to proactive, and help define GitLab's forward-looking data architecture, encompassing sharding and multi-database support.
Canonical
Canonical, a trailblazer in open-source software and operating systems, is on the lookout for a dedicated Lead Data Governance Engineer. In this role, you will spearhead the development and enforcement of data governance policies, standards, and processes, ensuring compliance with internal regulations and external frameworks like GDPR and ISO. You will harness your expertise to create Python-based tools that streamline operations for our internal data mesh solution, focusing on data labeling and quality metrics while adhering to best practices in data security and access management. Join our dynamic Data Governance team in the Commercial Systems unit, where you will facilitate secure access to rich datasets from diverse internal and external sources, using cutting-edge open-source tools such as Trino and Ranger. This remote position is based in the EMEA region, offering a unique opportunity to work within a globally distributed team.
At Canonical, we are on a mission to revolutionize enterprise data solutions by creating a state-of-the-art automation suite that seamlessly integrates multi-cloud and on-premise environments. Our Data Platform Team is at the forefront of this initiative, driving innovation through the development of a diverse range of data storage solutions and technologies that encompass big data, NoSQL, caching layers, and advanced analytics, alongside structured SQL engines.

We tackle the intriguing challenges associated with fault-tolerant, mission-critical distributed systems, striving to deliver the world's leading automation solutions for data platforms.

Our team is expanding, with openings ranging from junior to senior levels. We will work closely with you to find a fitting role that aligns with your skills and passions. Engineers who excel at Canonical are those who appreciate the dynamics of the open-source community while understanding the demands of large-scale, innovative organizations.

Location: This position is open to candidates in the European, Middle Eastern, and African time zones.
About Trafilea
Trafilea is a pioneering Consumer Tech Platform dedicated to Transformative Brand Growth. We are building an AI Growth Engine that fuels the next generation of consumer brands. With over $1 billion in cumulative revenue, more than 12 million customers, and a talented team of over 500 individuals across 19 countries, we integrate technology, growth marketing, and operational excellence to scale purpose-driven, digitally native brands. As owners and operators of our own digitally native brands (not an agency), we have a significant presence in retail giants like Walmart, Nordstrom, and Amazon, alongside a robust global D2C footprint.
About Hone
Hone is revolutionizing healthcare with our innovative online medical clinic that focuses on enhancing longevity and empowering individuals to take charge of their health. By leveraging advanced scientific research, we help both men and women unlock their full potential. Our team is at the core of our mission, driving our success through our commitment to our brand values:
- Champion Patient Needs
- Execute Relentlessly
- Communicate Constructively
- Collaborate Generously
- Turn Obstacles Into Opportunity
- Give With Gratitude
As a fully virtual organization since our inception, Hone continues to embrace a remote-first culture.

Our Ideal Candidate
We are seeking a mission-driven Senior Data Engineer who is a proactive multi-tasker dedicated to making a meaningful impact. The ideal candidate thrives in a dynamic, fast-paced environment and is eager to tackle challenges with enthusiasm. They possess a strong collaborative spirit and excel in both independent and team-oriented tasks, fostering open communication. With a commitment to learning and development, they exhibit humble leadership while driving initiatives to help others live longer, healthier lives.

The Role
As a Senior Data Engineer at Hone, you will report directly to the Senior Director of Data, Analytics, and Machine Learning. Your primary responsibility will be to develop and maintain the pipelines and infrastructure that facilitate analytics, reporting, and machine learning projects. Additionally, you will play a key role in constructing a longevity ontology knowledge graph. This position is perfect for someone with over 5 years of experience looking to expand their influence and technical expertise within a collaborative team.

Primary Responsibilities
Your key responsibilities will include (but are not limited to):
- Developing, maintaining, and optimizing reliable data pipelines using SQL, dbt, and Python.
- Building and managing an ontology graph database with CosmosDB and Gremlin.
- Leveraging agentic AI to streamline and automate pipeline development.
- Working alongside Analytics Engineers, Data Scientists, Analysts, and Software Engineers to transform and structure data to fulfill business objectives.
Join our rapidly expanding team at platacard as a Junior IT Recruiter specializing in the recruitment of Golang Engineers. In this dynamic role, you will be integral to the full recruitment cycle for technical positions, collaborating closely with hiring managers and seasoned recruiters to effectively scale our engineering teams. You will be part of a dedicated hiring stream, working alongside a stream lead who will guide your professional growth in IT recruitment. This position is ideal for individuals who have embarked on their IT recruiting journey and are eager to enhance their sourcing and interviewing skills while thriving in a collaborative and fast-paced environment.

Key Responsibilities:
- Facilitate the end-to-end recruitment process for IT roles within your stream.
- Proactively identify candidates using LinkedIn, job boards, and direct searches.
- Conduct initial candidate interviews and manage the hiring process.
- Develop and maintain robust candidate pipelines.
- Collaborate with hiring managers and recruiters to influence hiring decisions.
- Ensure a seamless and positive candidate experience throughout the recruitment journey.
- Monitor recruitment metrics and maintain precise data in the ATS.
Canonical
Canonical is revolutionizing the enterprise data landscape with a robust suite of multi-cloud and on-premise data solutions. Our mission is to simplify database operations across any cloud environment or local infrastructure. The data platform team encompasses a broad array of data stores and technologies, from big data and NoSQL solutions to cache-layer capabilities and analytics, as well as established SQL engines like Postgres and MySQL. We are dedicated to delivering resilient, mission-critical distributed systems and aspire to create the world’s premier data platform. We are in search of a skilled Engineering Manager to spearhead teams concentrating on Big Data and MySQL databases. Our development is primarily done in Python, and we embrace modern operational practices for data applications at scale, utilizing Kubernetes and cloud environments.
Data Engineer

Our Story
Over the past several years, able has experienced tremendous growth and transformation. We began our journey in 2013 as a product and engineering hub, initially serving a portfolio of early-stage startups. This hybrid model allowed us to refine our skills and establish a strong operational and cultural foundation. Chapter 2: In 2019, we broadened our horizons, successfully expanding our partnerships and delivering high-value solutions to new collaborators. Chapter 3: Now in 2023, we enter an exciting new phase of growth, fueled by our ambition to leverage applied AI in software development. Our commitment to innovation has positioned us to deliver solutions faster than traditional methods, providing substantial value to our partners.

About the Role
As a Data Engineer, you will play a vital role in supporting the Director of Engineering and collaborating with the Engineering team. This position involves cross-functional partnerships with Product, Design, and Engineering teams to develop robust and scalable data solutions that address critical business needs.

Day-to-Day Responsibilities
- Strategic Architecture Leadership: Shape the vision and roadmap for large-scale data architecture across client engagements.
- Establish governance, security frameworks, and ensure regulatory compliance.
- Lead the strategy for platform selection, integration, and scaling efforts.
- Guide organizations in adopting data lakehouse architectures.
About Trafilea
Trafilea is a forward-thinking Tech E-commerce Group that operates several direct-to-consumer brands within the intimate apparel and beauty industries. We leverage data-driven strategies to propel our businesses forward. Beyond our diverse range of products, we cultivate an online community that champions body positivity. As a rapidly expanding global entity, Trafilea is dedicated to delivering high-quality products and services that enhance customer satisfaction and facilitate sustainable growth.

Join Our Business Intelligence Team @ Trafilea
At Trafilea, we nurture a culture rooted in collaboration, innovation, and continuous learning. We are committed to investing in our talent and providing the necessary support and development opportunities for both personal and professional growth. Embrace the freedom of a remote-first work environment, collaborating with a diverse and talented team from around the globe.

We are seeking a Senior Data Engineer who will play a pivotal role in constructing and maintaining data pipelines and data models for our company's data platform. This role is crucial for driving Trafilea's growth. You should have a strong passion for data architecture and data warehousing, with a focus on creating scalable and dependable frameworks for efficient data extraction and transformation.

Key Responsibilities:
- Analyze, design, implement, and maintain pipelines that reliably and efficiently produce business-critical data utilizing cloud technologies.
- Develop new ETL processes (Extract, Transform, Load) using Apache Airflow.
- Propose initiatives to enhance performance, scalability, reliability, and overall robustness.
- Collect, process, and clean data from various sources using Python and SQL.
- Collaborate closely with lead Architects and Developers to ensure adherence to best practices and guidelines across all projects.
- Effectively assess and communicate the effort required for necessary developments.
- Identify new data sources to enhance existing pipelines and take responsibility for building and maintaining data models for new and ongoing projects.
- Maintain comprehensive documentation of your work and changes to uphold data quality and governance.
- Provide insightful feedback and expert perspectives to support data initiatives across the organization.
- Enhance the quality of existing and new data processes (ETL) by incorporating statistical process control and setting up alerts for anomalies at every stage of the pipeline.
- Create benchmarks for execution times to measure performance effectively.
At Clear Tech, we are pioneers in harnessing the power of Data, Analytics, and Artificial Intelligence. By empowering companies globally, we transform raw data into meaningful business insights. Our team, comprised of highly skilled professionals from Latin America, follows global best practices in cloud technologies, data engineering, data science, and business intelligence. We adopt agile methodologies to execute comprehensive projects, staff augmentation models, and tailored training programs, equipping professionals to meet modern market challenges.

Semi Senior Data Engineer - Azure Databricks
We are on the lookout for a Semi Senior Data Engineer with robust expertise in Azure Databricks. This role involves designing, developing, and maintaining scalable data pipelines within a state-of-the-art Lakehouse architecture. Your responsibilities will heavily focus on data engineering within Azure, including orchestration with Azure Data Factory, storage management, and ETL development using Python and Databricks, alongside governance through Unity Catalog. We seek a hands-on individual who possesses excellent skills in Python and SQL and is comfortable working with large data volumes in distributed environments.

Responsibilities
- Design and implement scalable ETL/ELT pipelines utilizing Azure Databricks and PySpark.
- Develop robust data transformation solutions using Python and SQL.
- Create and maintain Databricks notebooks for ingestion and transformation processes.
- Orchestrate data flows using Azure Data Factory.
- Manage governance and access control through Databricks Unity Catalog.
- Establish and oversee Databricks Jobs for production loads.
- Work with Azure storage solutions (Data Lake, Blob Storage, etc.).
- Optimize queries and pipelines to enhance performance and scalability.
- Ensure data quality, consistency, and reliability.
- Collaborate with analytics and business teams to enable data-driven solutions.
- Document technical processes, architecture, and deployments.
- Leverage AI tools to enhance development, debugging, and technical productivity.

Main Requirements
- 3+ years of experience in Data Engineering.
- Proven experience in Azure Databricks.
- Strong proficiency in Python and SQL.
- Experience in building ETL pipelines with PySpark and Delta Lake.
- Hands-on experience with Azure Data Factory.
