Experience Level
Senior
Qualifications
The ideal candidate will possess strong expertise in data engineering, specifically with experience in creating and maintaining ETL processes. A solid understanding of databases, programming languages such as Python or Java, and data warehousing solutions is essential. Familiarity with cloud platforms like AWS or Azure will be a significant advantage.
About the job
Join Skalar as a Senior Data Engineer, where you will be instrumental in shaping data solutions that drive impactful analytics. You will design, implement, and optimize data pipelines, ensuring the integrity and availability of our data. As a key member of our engineering team, you will collaborate closely with data scientists and analysts to deliver high-quality data products.
About Skalar
Skalar is a leader in data-driven solutions, committed to providing innovative technology that empowers businesses across various sectors. Our team is passionate about harnessing the power of data to unlock insights and drive strategic decision-making.
Qualifyze GmbH develops software solutions that support supply chain compliance in the Life Sciences sector. The engineering group consists of 50 professionals specializing in software and data, working together to create value throughout each project phase. This remote Senior Frontend Engineer position focuses on building scalable user interfaces and streng…
About Qualifyze
Founded in 2019, Qualifyze is at the forefront of supply chain compliance management within the Life Sciences sector, earning the trust of over 1,500 pharmaceutical and healthcare organizations globally. Our comprehensive digital solutions seamlessly connect manufacturers, suppliers, and a vast network of over 250 auditors and quality professionals. With an impressive record of more than 4,500 audits conducted across 85+ countries, we boast the largest and most accurate supplier network, complemented by robust data analytics tools. Qualifyze is your ideal partner for ensuring quality compliance and mitigating supply chain risks in the Life Sciences industry.

Role Overview
We are on the lookout for a Senior Backend Engineer to enhance our Tech department, particularly within the Engineering team. You will join a dynamic group of over 50 professionals dedicated to engineering and data. Our technology stack prominently features TypeScript with React and Node.js, alongside Java for specific projects. At Qualifyze, we believe that software engineering transcends mere coding; it's about shaping the future. We seek engineers who recognize that every line of code contributes to a lasting legacy. As part of our team, you will focus on streamlining feedback loops, developing scalable solutions, and fostering clear communication with the business. You will advocate for clean architectures, robust testing strategies, and domain-driven design principles to deliver substantial value at every step.

Main Responsibilities
Lead backend architecture, testing, and technical decisions across teams, ensuring scalable, high-quality outcomes while adhering to best practices.
Tackle and resolve intricate tasks within large-scale projects, or independently manage smaller projects.
Effectively communicate and collaborate with team members, the DevOps team, product managers, and the design team. You will spearhead the technical strategy, translating product requirements into clear, functional solutions.
About Qualifyze
Founded in 2019, Qualifyze has quickly established itself as a premier provider in supply chain compliance management within the Life Sciences sector, gaining the trust of over 1,500 pharmaceutical and healthcare firms worldwide. Our sophisticated digital suite of solutions seamlessly connects manufacturers, suppliers, and a global network of more than 250 auditors and quality professionals. With an impressive portfolio that includes over 4,500 audits conducted across 85+ countries, and boasting the largest, most accurate supplier network along with advanced data analytics tools, Qualifyze is your comprehensive partner for quality compliance and supply chain risk mitigation in the Life Sciences industry.
Full-time|Remote|Germany / Spain / Romania (Remote)
TrustYou stands at the forefront of AI-driven hospitality solutions, committed to enhancing guest experiences while empowering businesses to excel. Our diverse team of over 120 talented professionals collaborates remotely from various locations worldwide, united in a mission to help companies achieve exceptional customer satisfaction. At TrustYou, our culture is dynamic, shaped by the contributions of our team members. We value open feedback and are dedicated to continuous improvement and achieving excellence in customer service. Every individual's unique perspective enriches our collective success, fostering an environment of experimentation, learning, and growth.

Our innovative products are designed to enhance customer satisfaction, increase customer lifetime value, and minimize unnecessary expenditures.
Customer Experience Platform (CXP): Gain AI-powered insights that elevate guest experiences. Enhance service quality using feedback from surveys and reviews, respond to all comments with AI assistance, and enhance your brand reputation.
Customer Data Platform (CDP): Transform customer data management with AI for more direct bookings. Integrate and master customer data, manage consent, and convert insights into tailored marketing strategies and personalized journeys.
AI Agents: Our intelligent, always-on agents enhance productivity and reduce operational costs. Available round the clock, they offer immediate, personalized recommendations and streamline direct booking processes.
Discover more about TrustYou at www.trustyou.com. If you are passionate about enhancing customer happiness and making a significant impact, you belong here. Join us in our journey to innovate and excel in the hospitality industry.

Position: Senior Software Engineer - Data
Location: Germany / Spain / Romania (Remote)
As a Senior Software Engineer - Data, you will play a pivotal role in the complete development and technical execution of our core data infrastructure. Your primary focus will be on crafting clean and efficient code for building high-performance, data-intensive APIs and intricate processing pipelines that serve as the backbone of our data products. This position is tailored for a technical specialist who thrives on hands-on work with distributed systems and takes full accountability for implementing and delivering robust data solutions.
Join our dynamic Platform tribe at SumUp, where your expertise will help us create an innovative self-service, AI-Ready Data Platform. Our Data Platform team plays a crucial role in supporting our ambitious ventures into Data, AI, and real-time analytics. We are committed to building robust and scalable infrastructure that empowers our global data community.

Your Responsibilities
Design and sustain an exceptional data infrastructure that underpins vital processes.
Enhance platform self-service capabilities by developing features for ETL and orchestration.
Streamline processes to maximize compute resource efficiency and minimize costs.
Implement automation solutions to ensure our infrastructure's availability around the clock.
Participate in the design and rollout of new services within our platform ecosystem.
Jobgether is looking for a Senior Data Engineer based in Germany. This position centers on building and refining data systems that support business intelligence and analytics.

Role overview
The Senior Data Engineer will design and implement data solutions that help the company gain actionable insights. Collaboration with teams across the organization is part of the day-to-day, with a focus on improving data architecture and supporting analytics functions.

What you will do
Develop and maintain scalable data pipelines
Optimize data flow across systems
Ensure data integrity for reliable analytics
Work closely with colleagues from different departments to enhance data-driven decision making

Impact
Your work will help Jobgether use data more effectively to guide strategy and improve how the business operates.
We are seeking a highly skilled Senior Data Engineer to join our dynamic team at RepRisk AG in Berlin. In this pivotal role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our data-driven decision-making processes. You will work closely with data scientists, analysts, and other stakeholders to ensure data quality and accessibility.
Data Engineer (Python)

Company Overview
Orcrist Technologies is at the forefront of innovation with the Orcrist Intelligence Platform (OIP), a cutting-edge data intelligence system built on Kubernetes. Our platform is available as a SaaS solution or can be deployed on-premises, including air-gapped setups. We manage both streaming and batch data pipelines that empower search functionalities, machine learning enrichment, and investigative workflows for our mission-critical clientele.

Role Summary
As a Data Engineer, you will play a pivotal role in quickly validating new data initiatives from inception to deployment, ensuring they are adoptable and scalable. In this innovative environment, you will prototype effective connectors and pipelines, generate performance assessments, and create handoff packages for productization by our Foundation or delivery team.

Key Responsibilities
Prototype ingestion and connector patterns (batch and streaming) utilizing NiFi, Kafka, Kafka Connect/Streams, and Change Data Capture approaches.
Design schemas and data models that are both prototype-grade and easily adoptable, ensuring semantic clarity and a disciplined approach to evolution.
Develop incremental lakehouse datasets using Hudi, Iceberg, and Delta patterns, producing outputs for real-world latency and throughput evaluations.
Implement data quality and provenance considerations early in the process, incorporating checks, metadata hooks, and operational basics.
Containerize and deploy prototypes on Kubernetes, providing minimal runbooks and configurations for seamless adoption.
Create adoption artifacts including schemas, reference implementations, technical design notes, and a backlog for integration.

Qualifications
Minimum of 3 years of experience in data engineering with a proven track record of delivering real-world data pipelines beyond ad-hoc scripts.
Proficient in Python and SQL, skilled in building transformations, validation tools, and pipeline integration code.
Solid understanding of streaming and Change Data Capture fundamentals, along with experience in the Kafka ecosystem.
Familiar with lakehouse architectures and query layers (e.g., Hudi, Iceberg, Delta, Trino, Hive, Postgres) and their role in making datasets accessible.
Comfortable working in Kubernetes and container environments and adept at documenting technical decisions clearly.
Must be eligible to work in Germany; EU/NATO citizenship is preferred, and export-control screening will apply.

Preferred Qualifications
Experience with data quality tools such as Great Expectations or metadata/lineage platforms (OpenMetadata, DataHub, Atlas).
Experience with on-premises or air-gapped deployments and awareness of governance and policy for regulated environments.
Proficiency in German (B1+) and familiarity with OSINT, GEOINT, or multi-INT data structures.

What We Offer
A modern data stack with real-world constraints: Kafka, NiFi, and more.
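As a rough illustration of the prototype-grade ingestion work this posting describes, the sketch below drains a Kafka topic into a Parquet batch with basic quality and provenance checks. The topic name, broker address, and required fields are hypothetical, and it assumes the kafka-python and pandas/pyarrow libraries rather than any stack the company actually uses.

```python
import json
from datetime import datetime, timezone

import pandas as pd
from kafka import KafkaConsumer  # kafka-python

# Hypothetical topic and broker; adjust to the actual environment.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,  # stop iterating once the topic is drained
)

records = []
for message in consumer:
    event = message.value
    # Basic data-quality check: skip records missing required fields.
    if "id" not in event or "timestamp" not in event:
        continue
    # Provenance metadata: record where and when each event was ingested.
    event["_ingested_at"] = datetime.now(timezone.utc).isoformat()
    event["_source_partition"] = message.partition
    event["_source_offset"] = message.offset
    records.append(event)

if records:
    df = pd.DataFrame(records)
    # Columnar output that a lakehouse table (Hudi/Iceberg/Delta) could adopt later.
    df.to_parquet("events_batch.parquet", index=False)
    print(f"Wrote {len(df)} records")
```

A real connector prototype would add schema enforcement and incremental commits, but the batch-then-write shape above is enough to produce the latency and throughput measurements the role calls for.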
LITIT is a joint venture between NTT DATA and Reiz Tech, focused on delivering IT solutions in the DACH region. The company brings together German precision, Japanese work ethics, and Lithuanian talent to provide IT services and support. This remote Data Analytics Engineer position centers on building and improving the IoT Insurance Data Platform (IDP) using AWS. The role involves designing, implementing, and maintaining scalable data pipelines and shared platform services. The work supports analytics, data products, and machine learning applications for industrial IoT and insurance clients.

Key responsibilities
Architect, deploy, and manage cloud-native data pipelines on AWS.
Develop and maintain scalable ETL workflows, data lakes, and data mesh components.
Create and optimize PySpark jobs for processing large-scale and time-series data.
Manage schemas, tables, and metadata using AWS Glue Data Catalog and Lake Formation.
Collaborate with data platform, analytics, and product teams.

Desired qualifications
Extensive hands-on experience with AWS services, especially Lambda, Glue, and S3; familiarity with Athena, Lake Formation, Step Functions, and DynamoDB is a plus.
Strong background in data engineering, including designing and executing ETL/ELT pipelines for large-scale or streaming data.
Proficiency in Spark or PySpark for distributed data processing.
Understanding of modern data formats such as Apache Iceberg and Parquet.
Proven ability to deliver production-grade, enterprise-level data solutions.
Experience with API integration, including AWS API Gateway and data exchange APIs.
Familiarity with CI/CD pipelines and automated deployment processes.
Experience working in cross-functional Scrum teams and an agile mindset.
Willingness to travel domestically or internationally as project or client needs arise, sometimes on short notice.

Compensation and development
Salary range: €4700 - €5700 gross per month.
Opportunities for learning and continuous professional growth.
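For illustration only, here is a minimal PySpark sketch of the kind of time-series aggregation job the responsibilities mention: it reads hypothetical sensor readings from S3, computes hourly statistics per device, and writes partitioned Parquet that a Glue Data Catalog crawler could register. Paths, column names, and the schema are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal PySpark job: hourly aggregation of IoT sensor readings.
# Paths and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("iot-hourly-aggregation").getOrCreate()

readings = (
    spark.read.parquet("s3://example-bucket/raw/sensor-readings/")
    .withColumn("event_time", F.to_timestamp("event_time"))
)

hourly = (
    readings
    .groupBy("device_id", F.window("event_time", "1 hour").alias("window"))
    .agg(
        F.avg("temperature").alias("avg_temperature"),
        F.max("temperature").alias("max_temperature"),
        F.count("*").alias("reading_count"),
    )
    .select(
        "device_id",
        F.col("window.start").alias("hour_start"),
        "avg_temperature",
        "max_temperature",
        "reading_count",
    )
)

# Partitioned Parquet output; Glue or Lake Formation could catalog this location.
hourly.write.mode("overwrite").partitionBy("device_id").parquet(
    "s3://example-bucket/curated/sensor-hourly/"
)
```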
At Pixaera, we are a team of enthusiastic innovators committed to creating solutions that empower individuals. Currently, we are harnessing the power of AI and gamification to develop a specialized platform aimed at enhancing the safety readiness of frontline workers across various high-risk industries. If you are a proactive individual who flourishes in a dynamic, mission-driven setting, we invite you to join our team!

Who We Are Seeking
Pixaera is on the lookout for a Senior Data Engineer to spearhead the advancement of our core data platform. Your responsibilities will encompass everything from warehouse design to production pipelines that drive analytics, product intelligence, and personalized learning experiences. As a Senior Data Engineer, you will design and manage scalable data systems utilizing a modern stack (Snowflake, DBT, AWS). You will collaborate closely with backend engineers to ensure that our data is clean, reliable, and effectively exposed at its source, rather than being patched up later in the process. A vital focus will be on refactoring and fortifying critical data flows to minimize technical debt and enhance system integrity. This role is perfect for a seasoned engineer who possesses strong Python and SQL skills, embraces product-oriented thinking, and enjoys constructing robust systems within fast-paced startup environments.

Key Responsibilities

Data Platform & Pipelines
Take ownership of the data warehouse (currently Snowflake-based) and oversee modeling and transformation layers (DBT).
Refactor and strengthen the admin API data flows to address significant technical debt.
Construct and maintain production pipelines on AWS using Python and SQL, ensuring proper observability and testing protocols.
Collaborate with backend engineers to present clean and reliable data at the source, rather than retrofitting it downstream.

Bridge to Product & Engineering
Act as a liaison between Engineering and Product teams to ensure the right data supports customer outcomes.
Contribute to backend feature work where data and product intersect, such as telemetry, instrumentation, and data-aware features.
Become an integral part of the engineering team, specializing in data rather than functioning as a separate analytics entity.

Clarification on Role Scope
This is not a dashboard-only position; internal dashboards are handled separately as scoped contractor projects.
This role does not focus exclusively on analytics; analytical depth is beneficial but not the primary focus.
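As a hedged sketch of the "clean at the source" pipeline work described above, the snippet below validates a small extract with pandas and bulk-loads it into Snowflake using the write_pandas helper from snowflake-connector-python. The input file, table, warehouse, and credential names are hypothetical placeholders, not Pixaera's actual setup.

```python
import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Hypothetical extract: session events from an upstream source.
events = pd.read_json("session_events.json")  # placeholder input

# Lightweight quality gate before the data reaches the warehouse,
# so problems are caught at the source rather than patched downstream.
assert events["session_id"].notna().all(), "session_id must not be null"
assert events["duration_s"].ge(0).all(), "durations must be non-negative"
events = events.drop_duplicates(subset="session_id")

# Credentials come from the environment; all names here are placeholders.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="PRODUCT",
)
try:
    # write_pandas bulk-loads the DataFrame into an existing Snowflake table.
    success, _, nrows, _ = write_pandas(conn, events, "SESSION_EVENTS")
    print(f"Loaded {nrows} rows (success={success})")
finally:
    conn.close()
```

Downstream modeling of tables like this would typically live in the DBT transformation layer the posting mentions.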
Role Overview
Sopra Steria is looking for a Senior Data Engineer (m/w/d) to help shape data-driven solutions for public sector clients across Germany. This role focuses on designing, building, and maintaining data pipelines that turn raw information into clear insights for clients.

What You Will Do
Design and develop data pipelines that support client needs
Maintain and optimize existing data infrastructure
Apply expertise in data architecture and engineering to advance project goals

Location
This position is open to candidates nationwide (bundesweit).
Why Join Nebius
Nebius is at the forefront of cloud computing, catering to the global AI economy. We empower our clients with innovative tools and resources to tackle real-world problems and revolutionize industries, all while minimizing infrastructure expenses and the necessity for extensive in-house AI/ML teams. Our workforce operates at the cutting edge of AI cloud infrastructure, collaborating with some of the most experienced and visionary leaders and engineers in the industry.

Your Work Environment
With our headquarters in Amsterdam and listed on Nasdaq, Nebius boasts a global presence with research and development hubs throughout Europe, North America, and Israel. Our team, comprising over 1,400 employees, includes more than 400 highly skilled engineers with extensive knowledge in both hardware and software engineering, supported by an in-house AI R&D division.

The Role
The Data Engineering team is tasked with creating and sustaining a robust data infrastructure that drives analytics and business intelligence across Nebius. Our responsibilities include designing and implementing scalable data pipelines, optimizing data storage and processing, and facilitating data-driven decision-making across the organization. This position involves close collaboration with product teams and business stakeholders to ensure alignment with corporate objectives. As a Data Engineer, you will be responsible for designing, building, and maintaining our data infrastructure and pipelines. Your work will include processing large-scale datasets, optimizing data workflows, and enabling analytics capabilities that support our rapidly expanding cloud platform.

Your Responsibilities
Design, develop, and maintain scalable data pipelines.
Build and optimize data infrastructure.
Implement data quality monitoring and validation frameworks.
Enhance data storage, processing, and query performance for large-scale datasets.
Full-time|Remote|Germany, Remote; Netherlands, Remote; Spain, Remote; United Kingdom, Remote
About Dataiku
Dataiku provides a unified platform for building, deploying, and managing AI and analytics across the enterprise. The platform connects teams and tools, supporting transparency, collaboration, and centralized governance. Organizations use Dataiku to run analytics, machine learning, and AI projects across multiple vendors and cloud environments. Many leading global companies rely on Dataiku to operationalize AI and drive measurable business value. Learn more through the Dataiku blog, LinkedIn, X, and YouTube.

Role Overview: Data Engineer I (Remote)
The Enterprise Data and Analytics (EDA) team at Dataiku is hiring a Data Engineer I. This internal, non-client facing position supports the data platform that enables analytics, embedded analytics teams, Generative AI engineering, and self-service users across the company. The role is fully remote and open to candidates based in Germany, Netherlands, Spain, or the United Kingdom.

What You Will Do
Deliver and maintain data pipelines that power analytics and insights for teams throughout Dataiku.
Split time equally between Data Operations (support, troubleshooting) and new development.
Work daily with the data platform, primarily using Snowflake, Dataiku, and GitHub.
Develop solutions using Python and SQL.
Contribute to DataOps processes within GitHub Actions and Dataiku.
Support data platform processes within Snowflake and Dataiku.

What We Look For
Technical skills in Python, SQL, and data platform tools (Snowflake, Dataiku, GitHub).
Strong analytical thinking and problem-solving abilities.
Excellent verbal and written communication skills.
Curiosity and a commitment to continuous learning.
Ability to work collaboratively with engineers from different teams.
Positive attitude and focus on shared goals.

Additional Notes
This is an internal and non-client facing role.
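Purely as an illustration of Python development on a Dataiku-plus-Snowflake platform, here is a minimal recipe-style sketch using the dataiku Python package to read one managed dataset, derive a monthly aggregate, and write it back. The dataset and column names are invented for the example and do not describe the team's actual pipelines.

```python
import dataiku
import pandas as pd

# Read an input dataset managed by the platform (name is a placeholder).
orders = dataiku.Dataset("raw_orders").get_dataframe()

# Simple transformation step: keep completed orders and add a derived month column.
completed = orders[orders["status"] == "completed"].copy()
completed["order_month"] = (
    pd.to_datetime(completed["order_date"]).dt.to_period("M").astype(str)
)

monthly_revenue = (
    completed.groupby("order_month", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_revenue"})
)

# Write the result to an output dataset, often backed by a Snowflake connection.
dataiku.Dataset("monthly_revenue").write_with_schema(monthly_revenue)
```

In a DataOps setup like the one described, a recipe such as this would be version-controlled in GitHub and promoted through CI checks before reaching production.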
About ClickHouse
Listed among the top innovators on the 2025 Forbes Cloud 100 list, ClickHouse is a leading, rapidly expanding private cloud company. With over 3,000 customers and annual recurring revenue growth exceeding 250%, ClickHouse is at the forefront of real-time analytics, data warehousing, observability, and AI workloads. Our momentum was recently reinforced by a successful $400 million Series D funding round. In the past three months, renowned clients such as Capital One, Lovable, Decagon, Polymarket, and Airwallex have adopted or expanded their use of our platform. These new clients join a prestigious roster of AI innovators and global brands including Meta, Cursor, Sony, and Tesla. Join us in our mission to revolutionize the way companies leverage data!

The Connectors team serves as the vital link between ClickHouse and the vast data ecosystem. We develop and maintain integrations that make ClickHouse accessible to millions of developers, data practitioners, and AI agents worldwide, ranging from high-level data visualization plugins (like Tableau, PowerBI, Superset, Metabase) to connectors for data frameworks (Apache Spark, Flink, Kafka Connect, Fivetran), orchestration platforms, and AI tools. Our work is pivotal in shaping how organizations process massive datasets: enabling real-time analytics platforms to ingest millions of events per second, observability systems to monitor global infrastructure, and increasingly, AI-driven data applications that redefine team collaboration with data. We work closely with the open-source community, our internal teams, and enterprise users to ensure that ClickHouse integrations lead the way in performance, reliability, and developer experience.

About the Role
As a Senior Software Engineer specializing in Python and the Data Ecosystem, you will be a key contributor, responsible for owning and advancing essential components of ClickHouse's data engineering ecosystem. This role exists at the crossroads of high-performance database engineering and enhancing developer experience. You will create tools that empower Data Engineers and Data Scientists to fully leverage ClickHouse's speed and scale within the frameworks they are already familiar with. We are seeking an individual who has direct experience as a Data Engineer or Data Scientist. The landscape for data practitioners is evolving rapidly; databases have progressed beyond mere query targets and are now integral components of AI-powered workflows, serving as vector stores for RAG pipelines, backends for LLM-powered agents, and real-time feature stores for ML inference. You comprehend these workflows not from an outsider's perspective, but from personal experience within them. Your role is not just to build integrations; you will contribute product-level insights that enhance user experience.
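To show the sort of Python-facing workflow the Connectors team optimizes for, here is a small sketch using the clickhouse-connect client to create a table, insert rows, and query results into a pandas DataFrame. The host, credentials, and table are placeholders; this is a generic example, not code taken from the role.

```python
from datetime import datetime

import clickhouse_connect

# Connect to a ClickHouse server (host and credentials are placeholders).
client = clickhouse_connect.get_client(host="localhost", username="default", password="")

# Create a simple MergeTree table for demonstration purposes.
client.command("""
    CREATE TABLE IF NOT EXISTS page_views (
        ts DateTime,
        url String,
        user_id UInt64
    ) ENGINE = MergeTree ORDER BY ts
""")

# Insert a few rows directly from Python.
client.insert(
    "page_views",
    [
        [datetime(2024, 1, 1, 10, 0), "/home", 1],
        [datetime(2024, 1, 1, 10, 1), "/docs", 2],
    ],
    column_names=["ts", "url", "user_id"],
)

# Query straight into a pandas DataFrame, the workflow most data practitioners expect.
df = client.query_df(
    "SELECT url, count() AS views FROM page_views GROUP BY url ORDER BY views DESC"
)
print(df)
```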
Full-time|On-site|Stuttgart, Baden-Württemberg, Germany
At Zoi, we revolutionize manufacturing and retail through AI-driven strategies that yield tangible business outcomes. Our approach goes beyond mere consultation; we actively implement, operate, and scale solutions that transform AI integration into a competitive edge. We are excited to expand our teams across Europe, particularly in Stuttgart and Berlin, with opportunities for travel between these locations.

ROLE OVERVIEW
Take charge: You will conceptualize and lead complex data projects, including the design of enterprise-scale big data platforms.
Collaboration is key: You will thrive in interdisciplinary teams, bringing diverse perspectives to the table.
Delivering value: As a seasoned expert, you will craft tailored solutions that address client needs while providing guidance throughout the project lifecycle.
Utilizing your expertise: Your proficiency in building code-driven data transformation and cleansing pipelines (e.g., Airflow, Spotify Luigi) will be invaluable.

WHO YOU ARE
You hold a degree in (business) mathematics, computer science, or a related field.
You bring at least 5 years of relevant experience in data engineering, especially with databases and large datasets.
Your strong command of programming languages such as Python or R is essential.
Your solid understanding of cloud computing will propel our initiatives forward.
You possess extensive experience in cloud data processing, including object storage, data warehouses, and serverless data pipelines.
You have excellent communication skills, able to convey complex technical concepts to both technical and non-technical audiences.
Your proficiency in written and spoken English is exceptional.

If you enjoy working alongside brilliant minds, Zoi is the place for you. Join our community of tech enthusiasts and unlock your full potential as you contribute to the sustainable digital transformation of our enterprise clients.
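Since the posting highlights code-driven transformation and cleansing pipelines with tools like Airflow, a minimal sketch of an extract-clean-load DAG follows. The task logic, DAG id, and schedule are assumptions made for illustration; the sketch targets the Airflow 2 PythonOperator API.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system or object storage.
    return [{"id": 1, "value": " 42 "}, {"id": 2, "value": None}]


def clean(ti, **context):
    # Pull the upstream result via XCom, drop incomplete rows, normalize values.
    raw = ti.xcom_pull(task_ids="extract")
    return [
        {"id": r["id"], "value": int(r["value"].strip())}
        for r in raw
        if r["value"] is not None
    ]


def load(ti, **context):
    # Placeholder: write the cleansed rows to a warehouse or data lake.
    cleaned = ti.xcom_pull(task_ids="clean")
    print(f"Loading {len(cleaned)} cleansed rows")


with DAG(
    dag_id="example_cleansing_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    clean_task = PythonOperator(task_id="clean", python_callable=clean)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> clean_task >> load_task
```

The same extract-clean-load shape maps directly onto Luigi tasks or serverless pipelines if a different orchestrator is used.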
Role Overview
Sopra Steria is hiring a Senior Big Data Engineer (m/w/d) to join its team across Germany. This role focuses on designing, building, and optimizing data processing systems that support clients in multiple industries. The position is open nationwide (bundesweit).

What You Will Do
Design, implement, and refine large-scale data processing systems for reliability and performance
Work closely with stakeholders to gather data requirements and translate them into clear technical specifications
Support the company's data strategy by developing solutions using Hadoop, Spark, and cloud-based technologies
Ensure high availability and efficiency of data platforms

Key Technologies
Hadoop
Spark
Cloud services (specific platforms not specified)
Intrinsic Robotics, a pioneering venture within Alphabet, is dedicated to revolutionizing the landscape of industrial robotics. Our team is driven by the belief that breakthroughs in artificial intelligence, perception, and simulation will reshape the capabilities of industrial robotics in the near future, with software and data at the heart of this transformation. Our mission is to make intelligent industrial robotics accessible and usable for a multitude of businesses, entrepreneurs, and developers. We are a vibrant group of engineers, roboticists, designers, and technologists, all committed to unleashing the creative and economic potential of industrial robotics.

Role Overview
As a Senior Research Engineer specializing in Data Engine, you will spearhead the design and implementation of systems that facilitate the training, deployment, and utilization of large-scale AI models for robotics. You will shape the technical direction, mentor fellow engineers, and ensure the establishment of robust, scalable data pipelines and machine learning infrastructure essential for enhancing the capabilities of robots in real-world settings. Your contributions will empower both internal teams and external partners by delivering accessible tools and infrastructure for integrating advanced AI methodologies into robotic applications.

Key Responsibilities
Lead the architecture and execution of infrastructure for training and deploying large-scale robotic foundation models.
Manage the development and upkeep of data pipelines for the collection, processing, and integration of real-world robot data into machine learning workflows.
Enable machine learning researchers and engineers with user-friendly tools and infrastructure for leveraging AI techniques in robotics.
Collaborate with product and engineering teams to specify infrastructure needs and deliver effective solutions for both partners and customers.
At GALVANY, we are dedicated to transforming climate-neutral living into a reality for all. Our core focus lies in implementing effective solutions such as heat pumps, battery storage, and smart metering, making them accessible, reliable, and affordable. GALVANY-Tech is pioneering the energy transition through innovative software solutions. We are developing an AI-driven Operating System that facilitates the sales, planning, installation, and operation of heat pumps and integrated energy systems, catering to everything from single-family homes to multi-unit buildings, and operating within our Energy Community as a Virtual Power Plant.

As a successful green-tech startup, we believe that sustainable impact and long-term growth can only be achieved through a robust business model. We prioritize customer value, transparency, and accountability, standing firmly for top-quality heating solutions and fostering an environment where individuals take charge and create lasting changes.

The Role
As a Senior Data Engineer at GALVANY, you will design and maintain the data infrastructure that supports our AI-based Operating System. You will process real-time energy data from heat pumps and integrated systems, construct data pipelines that empower AI models and analytics, and guarantee the integrity of data across our entire ecosystem, from individual residences to our Virtual Power Plant.
Team Overview
The Card Not Present (CNP) Protect team is a dynamic, cross-functional unit within SumUp's Risk & Compliance tribe. We are dedicated to safeguarding millions of merchants by mitigating fraud in our card-not-present offerings. Our team excels in developing robust machine learning systems and data pipelines that facilitate real-time automated decision-making. We are currently enhancing our infrastructure to incorporate foundation model embeddings. This role offers significant ownership and impact, directly influencing the efficacy of our fraud detection mechanisms.

Key Responsibilities
Design, implement, and sustain high-quality Python-based data pipelines that support machine learning workflows, featuring real-time and near-real-time data processing.
Assume complete responsibility for data quality and reliability, executing validations, automated testing, monitoring, and alerting with established SLAs.
Enhance our data foundation by meticulously documenting architecture, data lineage, dataset definitions, and managing dependencies.
Lead the governance of our Feature Store, refining usability and standardizing setup to enable data scientists to operate with speed and confidence.
Collaborate closely with the Risk Platform, ML Data Platform, data scientists, software engineers, and analysts to ensure safe deployment of changes to production.
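As a hypothetical sketch of the data-quality and alerting ownership described above, the snippet below runs a few pandas checks (nulls, duplicates, negative amounts, freshness) on a transaction batch and raises when a check fails so monitoring could alert. The checks, thresholds, and column names are invented for illustration, not SumUp's actual framework.

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("cnp_batch_checks")


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Run lightweight quality checks on a transaction batch; return a list of failures."""
    failures = []
    if df["transaction_id"].isna().any():
        failures.append("null transaction_id values found")
    if df["transaction_id"].duplicated().any():
        failures.append("duplicate transaction_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    # Freshness check: the newest event must be recent enough to meet the SLA.
    lag = pd.Timestamp.utcnow() - pd.to_datetime(df["event_time"]).max()
    if lag > pd.Timedelta(minutes=15):
        failures.append(f"data is stale by {lag}")
    return failures


# Hypothetical batch; in a real pipeline this would come from the upstream stage.
batch = pd.DataFrame({
    "transaction_id": ["t1", "t2", "t3"],
    "amount": [12.50, 7.00, 99.90],
    "event_time": [pd.Timestamp.utcnow() - pd.Timedelta(minutes=1)] * 3,
})

problems = validate_batch(batch)
if problems:
    for problem in problems:
        logger.error("quality check failed: %s", problem)
    raise RuntimeError("batch rejected; monitoring and alerting would fire here")
logger.info("batch passed all checks")
```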