Discover the Opportunity at The Dot Collective
At The Dot Collective, we are a pioneering consultancy operating across the UK and EU, dedicated to engineering excellence and empowering individuals to create meaningful impacts.
We embrace modern technology stacks and apply agile scrum methodologies to all our projects.
Who You Are
If you have a passion for data and its transformative potential, and you're eager to make significant contributions in a short timeframe, we could be the perfect fit for you.
At Samba TV, we are pioneers in tracking streaming and broadcast video globally through our innovative data and technology solutions. Our mission is to revolutionize the viewing experience for everyone. We empower media companies to connect with their audiences for new shows and movies while allowing advertisers to engage viewers and measure their reach across all devices. With a unique cultural perspective shaped by our global data and AI-driven insights, we are committed to transforming the media landscape.

We are on the lookout for a skilled Data Engineer to enhance our Internal Data and Performance (IDP) team within the Internal Measurement department. This team serves as the definitive source of truth for internal data health and operational metrics, overseeing systems that monitor the company's television footprint, partner payments, and data quality. You will collaborate with various technical teams to enhance visibility and promote data-driven decision-making throughout the organization.
Join Our Global Team!

The Codest is an innovative international tech software company with development hubs in Poland, dedicated to delivering top-tier IT solutions and projects globally. We embrace a 'Customers and People First' philosophy, ensuring we prioritize our clients' needs while fostering a collaborative atmosphere for our team members, empowering us to create outstanding products and services.

Our expertise encompasses web development, cloud engineering, DevOps, and quality assurance. After successfully launching our own product, Yieldbird, recognized as a laureate of the esteemed Top25 Deloitte awards, we have committed ourselves to assisting tech companies in developing impactful products and scaling their IT teams through enhanced delivery performance. Our extensive experience in product development challenges has positioned us as experts in crafting digital solutions and optimizing IT team capabilities.

Our journey continues, and we are eager to grow further. If you are goal-oriented and seeking new challenges, we invite you to join our dynamic team! You will find an enriching and collaborative environment that supports your growth at every stage.

We are currently seeking a: SENIOR DATA ENGINEER

In this role, you will play a key part in developing a banking application for one of the leading financial institutions in Japan. This platform includes customer-facing banking modules and data management capabilities. You will be joining our Data Flow Team of 20 members, dedicated to integrating internal systems, vendor-hosted solutions, and third-party systems, managing data flows triggered by specific rules or events at varying frequencies.
Key Responsibilities:
- Design and implement data processing pipelines tailored to project requirements, incorporating steps for data transformation, validation, and mapping.
- Develop essential components for connecting with various data sources and destinations, including APIs, SQL databases, S3 buckets, and SFTP servers.
- Update and modify existing data flows within the ETL tool as necessary.
- Conduct thorough testing and validation to guarantee the accuracy of data transformations, verifications, and final outputs.
- Create and execute unit and regression tests.
- Provide post-deployment support and troubleshoot any issues that arise.
At Samba TV, we are redefining the viewing experience by harnessing the power of our proprietary data and cutting-edge technology to track streaming and broadcast video globally. Our mission is to empower media companies to connect with audiences and advertisers to engage viewers, delivering insights that transform how we experience entertainment.

We invite a talented Data Engineer to join our dynamic Data Technology team in Warsaw. You will play a crucial role in building and maintaining our data platform that serves the entire organization, supporting everything from data ingestion and analytics to comprehensive reporting. You will work with cutting-edge technologies like AWS, Databricks, BigQuery, and Snowflake to enhance our data infrastructure.

As a self-sufficient contributor, you will take ownership of well-defined pipeline components and features, collaborate effectively with your teammates and cross-functional stakeholders, and navigate the complete data lifecycle. We are looking for candidates with 2–4 years of hands-on experience who are proficient in writing production-quality code and are eager to grow their technical expertise.
Join our innovative team at Inetum Polska as a Data Engineer, where you will utilize your data engineering expertise in a fast-paced environment. Your role will be pivotal in ensuring smooth data migration and optimization for cutting-edge AI and ML projects. Don't miss out on the opportunity to contribute to our groundbreaking initiatives!

Key Responsibilities

Data Pipeline Development:
- Craft, develop, and implement Python-based ETL/ELT pipelines to facilitate data migration from on-premises MS SQL Server to our Databricks instance.
- Ensure effective ingestion of historical Parquet datasets into Databricks.

Data Quality & Validation:
- Establish validation, reconciliation, and quality assurance protocols to guarantee the accuracy and completeness of migrated data.
- Manage schema mapping, field transformations, and metadata enrichment to standardize datasets.
- Integrate data governance, quality assurance, and compliance into all migration processes.

Performance Optimization:
- Optimize pipelines for enhanced speed and efficiency, leveraging Databricks capabilities, including Delta Lake when applicable.
- Oversee resource utilization and scheduling for large dataset transfers.

Collaboration:
- Coordinate closely with AI engineers, data scientists, and business stakeholders to outline data access patterns needed for upcoming AI POCs.
- Work alongside infrastructure teams to ensure secure connections between legacy systems and Databricks.

Documentation & Governance:
- Maintain comprehensive technical documentation for all data pipelines.
- Adhere to best practices for data governance, compliance, and security throughout the migration process.
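The batch-migration-plus-reconciliation pattern this posting describes can be sketched in a few lines. This is a minimal illustration only: sqlite3 stands in for MS SQL Server, an in-memory list stands in for the Databricks/Delta Lake sink, and the `payments` table and its columns are invented for the example, not taken from the posting.

```python
# Hedged sketch: extract a table in batches from a relational source, apply a
# light transformation, load into a sink, then reconcile row counts.
import sqlite3

def extract_batches(conn, table, batch_size=2):
    """Yield rows from `table` in fixed-size batches, ordered by primary key."""
    cur = conn.execute(f"SELECT id, amount FROM {table} ORDER BY id")
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        yield batch

def load(sink, batch):
    """Transform each row (normalize amount to float) and append to the sink."""
    sink.extend({"id": rid, "amount": float(amt)} for rid, amt in batch)

# Stand-in source database (sqlite3 in place of MS SQL Server).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [(1, 10.0), (2, 20.5), (3, 7.25)])

sink = []  # stand-in for the Databricks/Delta Lake destination table
for batch in extract_batches(conn, "payments"):
    load(sink, batch)

# Reconciliation: source and sink row counts must match before sign-off.
src_count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
assert src_count == len(sink) == 3
```

In the real stack the extract would read via a JDBC/ODBC driver and the load would write Delta tables; the shape of the loop and the post-load count check carry over.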
Join inetum2 as a Technical Leader in Data Engineering and play a pivotal role in driving innovative data solutions. As a key member of our team, you will lead projects that harness the power of data to deliver exceptional results for our clients. Your expertise in data engineering and leadership skills will empower you to mentor junior engineers and shape the future of our data initiatives.
About the Role
nix is seeking a Data Quality Automation Engineer to join a team supporting a global client in the insurance and automotive sectors. The client operates across ten countries, serving over 1,200 organizations and managing millions of claims each year. This role contributes to a major transformation program, with a focus on cloud-based software and modern design patterns.

What You Will Do
- Define and maintain data quality rules at all stages: ingestion, transformation, and reporting.
- Validate data in Databricks pipelines.
- Monitor and test Databricks transformations using PySpark and SQL to confirm data accuracy and completeness.
- Check that Databricks and Power BI reports reflect accurate, reconciled data.
- Set up data validation checks for schema, nulls, duplicates, ranges, and referential integrity.
- Identify, document, and analyze data quality issues, including root causes.
- Work closely with data engineers and analysts to resolve issues.
- Develop automated systems for data quality monitoring and alerts.

Requirements
- At least 4-5 years of experience in data analysis, quality assurance, data governance, or a related area.
- Strong knowledge of Databricks or Spark, including SQL and PySpark.
- Experience with ETL/ELT pipelines and data transformation tools (such as dbt).
- Background in validating BI or reporting outputs, with Power BI preferred.
- Advanced SQL skills for data validation and reconciliation.
- Familiarity with data quality frameworks or tools (Great Expectations is a plus).

Bonus Skills
- Experience with AWS data stack.
- Knowledge of data governance or data catalog tools.
- Exposure to CI/CD practices for data pipelines.
- Understanding of data lineage and observability tools.

What Success Looks Like
- Fewer data defects in pipelines and reports.
- Automated data quality checks in place.
- Transparent tracking and visibility of data issues.

Location
This position is based in Poland.
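The validation checks this role describes (nulls, duplicates, ranges, referential integrity) follow a common shape regardless of engine. A framework-free sketch, assuming toy claims data invented for illustration; in the actual stack these checks would run as PySpark or SQL jobs over Databricks tables, or be expressed in a tool like Great Expectations:

```python
# Minimal data quality checks over plain Python rows (dicts stand in for rows).

def check_nulls(rows, required):
    """Return indices of rows missing any required field."""
    return [i for i, r in enumerate(rows)
            if any(r.get(f) is None for f in required)]

def check_duplicates(rows, key):
    """Return key values that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        (dupes if r[key] in seen else seen).add(r[key])
    return dupes

def check_range(rows, field, lo, hi):
    """Return indices of rows whose field falls outside [lo, hi]."""
    return [i for i, r in enumerate(rows)
            if r[field] is not None and not (lo <= r[field] <= hi)]

def check_referential(rows, field, valid_keys):
    """Return indices of rows whose foreign key is unknown."""
    return [i for i, r in enumerate(rows) if r[field] not in valid_keys]

# Hypothetical claims data for illustration only.
claims = [
    {"claim_id": 1, "policy_id": "P1", "amount": 1200.0},
    {"claim_id": 2, "policy_id": "P9", "amount": -50.0},   # bad ref, bad range
    {"claim_id": 2, "policy_id": "P2", "amount": None},    # dup id, null amount
]
policies = {"P1", "P2"}

assert check_nulls(claims, ["amount"]) == [2]
assert check_duplicates(claims, "claim_id") == {2}
assert check_range(claims, "amount", 0.0, 1_000_000.0) == [1]
assert check_referential(claims, "policy_id", policies) == [1]
```

The same four checks translate directly to SQL (`IS NULL`, `GROUP BY ... HAVING COUNT(*) > 1`, `BETWEEN`, anti-joins) or to PySpark DataFrame filters.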
Join our dynamic team at inetum2 as a Junior Data Engineer! In this pivotal role, you will be tasked with monitoring production data pipelines, troubleshooting incidents, enhancing system stability, and ensuring seamless daily data operations. If you are eager to grow your skills in data engineering and contribute to impactful projects, we want to hear from you!
Join our dynamic team at spoton as a Software Engineer specializing in Data and Reporting. In this role, you will be pivotal in designing, developing, and maintaining software solutions that enhance our data processing and reporting capabilities. You will collaborate with cross-functional teams to create robust data pipelines and reporting tools that drive business insights.
Role Overview
Miratech is hiring a Data Engineer specializing in Azure Databricks for its Warsaw office. This position focuses on building and maintaining data pipelines and architectures that support reliable, high-quality data.

Main Responsibilities
- Design and develop data pipelines using Azure Databricks
- Maintain and optimize data architectures to ensure availability and reliability
- Work with cross-functional teams to turn raw data into insights for decision-making
Join Inetum Polska as a Data Engineer, where you will play a pivotal role in the development and optimization of our data infrastructure. Your responsibilities will include:
- Designing and maintaining efficient processes to aggregate data from diverse sources into our Data Lake.
- Creating, developing, and refining complex data pipelines to guarantee a reliable flow of information.
- Establishing frameworks that support the development of data pipelines.
- Implementing thorough testing frameworks for data pipelines to ensure data integrity and quality.
- Collaborating with analysts and data scientists to deliver superior quality data solutions.
- Overseeing data management practices, ensuring security, compliance, and best practices in governance.
- Exploring and adopting new technologies to enhance data pipeline performance.
- Integrating and leveraging data from various source systems, including Kafka, MQ, SFTP, databases, APIs, and file shares.
At Lingarogroup, we champion growth through diversity, equity, and inclusion. Our ethical business practices ensure that we uphold the highest standards of equality, fostering a safe and respectful workplace for all. We firmly believe that a diverse workforce drives both personal fulfillment and business success. Our commitment to creating an inclusive community empowers every individual to thrive, irrespective of their background or identity.
Role Overview
Devoteam is hiring a Senior / Lead Data Engineer for its 24x7 Data & AI Factory in Kraków. This position takes ownership of key data engineering projects, guiding efforts that support client growth and innovation.

What You Will Do
- Lead data engineering initiatives across multiple projects
- Work with advanced technologies to design and implement scalable data solutions
- Collaborate with teams from different disciplines to optimize data pipelines
- Enhance AI capabilities by ensuring data integrity and reliability

Location
This role is based in Kraków.
Join our dynamic team at Miratech as a Middle Data Engineer specializing in Azure Databricks. In this role, you will design, develop, and optimize data pipelines and workflows, utilizing your expertise in data engineering to support our innovative projects. You will work collaboratively with cross-functional teams to ensure data integrity and availability, enabling data-driven decision-making across the organization.
Senior Software Engineer, Data Foundations

ABOUT THE ROLE
At Peloton, we view Data as a Product: a vital asset that drives every member interaction and influences business decisions. Our Datastores team is committed to offering a dependable, secure, and high-performing data persistence layer for application services throughout the organization. Our core principles include: 1. Safeguard the data, 2. Optimize for scalability, speed, and reliability, and 3. Minimize manual effort.

We are seeking a Senior Software Engineer with extensive experience in constructing and managing data-intensive systems to join our Datastores team. This position is perfect for a backend engineer who has developed and maintained production systems heavily reliant on databases, caching layers, and data pipelines, and who is eager to advance their knowledge in scalable, cloud-native data infrastructures. You will collaborate at the intersection of application engineering and data platform reliability, working alongside service teams to enhance the storage, access, scalability, and observability of data across Peloton's ecosystem.

YOUR DAILY IMPACT AT PELOTON
Your responsibilities will include:
- Designing, building, and maintaining backend systems that depend on scalable and highly available data persistence layers.
- Collaborating with service teams to refine database design, enhance query performance, and optimize data modeling.
- Contributing to automation efforts surrounding infrastructure provisioning using tools like Terraform and Backstage, while improving developer experience by creating self-service tools for databases and caching systems.
- Focusing on observability, performance insights, and autoscaling strategies for production datastores.
- Participating in architectural discussions regarding multi-regional data persistence and global scalability.

This role merges hands-on backend engineering with significant exposure to cloud data systems and platform reliability challenges.
WHAT YOU BRING
We are looking for proficient backend engineers who are data-centric: those who have built systems where data performance, reliability, and modeling are crucial.
- 5+ years of software engineering experience in developing production backend systems.
- Strong familiarity with relational databases such as PostgreSQL or MySQL, including schema design, indexing, and query optimization.
- Experience with NoSQL datastores like DynamoDB, Redis, Elasticsearch, or Memcache.
- Exposure to data pipelines, event-driven architectures, and cloud technologies.
About Box
Box (NYSE: BOX) is a pioneering leader in Intelligent Content Management, enabling organizations to enhance collaboration, manage content lifecycles, secure vital information, and revolutionize business workflows using enterprise AI. Since our establishment in 2005, we have simplified work for prominent global organizations such as JLL, Morgan Stanley, and Nationwide. Our headquarters is located in Redwood City, California, with offices spread across the United States, Europe, and Asia.

By becoming a part of Box, you will play a pivotal role in advancing our platform. Content is the core of our operations, driving the flow of billions of files and information across teams, departments, and crucial business processes every day: contracts, invoices, employee records, financials, product specifications, marketing assets, and more. Our mission is to infuse intelligence into the realm of content management, empowering our customers to fundamentally transform workflows across their organizations. With the integration of AI and enterprise content, you will be at the forefront of this significant shift in how the world collaborates.

Your Role
As a Data Engineer III at Box, you will be instrumental in expanding our Data Engineering initiatives. You will contribute to the development of the data platform engineering features and capabilities of our cloud cost management platform.

In this role, you will collaborate with a talented team to build data pipelines, support product and analytics team members, data analysts, and data scientists on various data initiatives while ensuring that optimal data delivery architecture is maintained across ongoing projects.
Your Responsibilities
- Collaborate with a high-performing team of data engineers and analysts to pinpoint business opportunities and design scalable data solutions.
- Develop and take ownership of data pipelines that clean, transform, and aggregate data from diverse sources.
- Create and sustain optimal data pipeline architecture.
- Assemble complex data sets that satisfy both functional and non-functional business requirements.
- Identify, design, and execute internal process enhancements, including automating manual processes and optimizing data delivery.
Join our innovative team as a Senior Data Engineer and play a pivotal role in enhancing our product platform. We are looking for an experienced professional with a strong background in data engineering to design, implement, and maintain robust data solutions.

Main Responsibilities:
- Architect and develop data solutions that elevate our product platform.
- Define and refine data models tailored for analytical, operational, and predictive applications.
- Engage in critical data architecture and technical infrastructure decisions.
- Guarantee high quality, reliability, and maintainability through best practices in CI/CD and automation.
- Work closely with product owners, developers, and stakeholders to convert business requirements into effective technical solutions.

Technologies Utilized: Azure DevOps, Kubernetes, SQL, ETL.
Join Telemedi as an AI Data Engineer!

Telemedi is a pioneering company in the healthcare sector, committed to employing programmers, doctors, and experts across various fields to develop cutting-edge solutions that enhance patient care. Our mission is to leverage technology to provide everyone with convenient and immediate access to medical services.

In the role of AI Data Engineer, you will collaborate with us to innovate within the telemedicine and insurance industries.

Your Responsibilities:
- Design and implement autonomous workflows that transform raw data into self-validating reports.
- Map business processes and determine which aspects can be automated using AI.
- Build and maintain the analytics layer of our telemedicine platform.
- Replace manual reporting systems in Excel with intelligent, repeatable pipelines.
- Construct architectures for data validation systems and monitor the quality of AI outputs.

The first three responsibilities will constitute 80% of your work time.
Join Talan as a Tech Lead in Data Engineering, where you will spearhead innovative data solutions and lead a dynamic team of engineers. This role is pivotal in shaping our data architecture and driving impactful projects that enhance our clients' data capabilities. You will collaborate closely with cross-functional teams, ensuring the effective implementation of data strategies and technologies.
Join the innovative team at Inetum as a talented Data Engineer! We are seeking individuals with a strong background in data engineering to contribute to our exciting Big Data projects. Preferably, you will have hands-on experience with Databricks and the Spark framework.
Discover the Opportunity at The Dot CollectiveAt The Dot Collective, we are a pioneering consultancy operating across the UK and EU, dedicated to engineering excellence and empowering individuals to create meaningful impacts.We embrace modern technology stacks and apply agile scrum methodologies to all our projects.Who You AreIf you have a passion for data a…
At Samba TV, we are pioneers in tracking streaming and broadcast video globally through our innovative data and technology solutions. Our mission is to revolutionize the viewing experience for everyone. We empower media companies to connect with their audiences for new shows and movies while allowing advertisers to engage viewers and measure their reach across all devices. With a unique cultural perspective shaped by our global data and AI-driven insights, we are committed to transforming the media landscape.We are on the lookout for a skilled Data Engineer to enhance our Internal Data and Performance (IDP) team within the Internal Measurement department. This team serves as the definitive source of truth for internal data health and operational metrics, overseeing systems that monitor the company's television footprint, partner payments, and data quality. You will collaborate with various technical teams to enhance visibility and promote data-driven decision-making throughout the organization.
Join Our Global Team!The Codest is an innovative international tech software company with development hubs in Poland, dedicated to delivering top-tier IT solutions and projects globally. We embrace a 'Customers and People First' philosophy, ensuring we prioritize our clients' needs while fostering a collaborative atmosphere for our team members, empowering us to create outstanding products and services.Our expertise encompasses web development, cloud engineering, DevOps, and quality assurance. After successfully launching our own product, Yieldbird—recognized as a laureate of the esteemed Top25 Deloitte awards—we have committed ourselves to assisting tech companies in developing impactful products and scaling their IT teams through enhanced delivery performance. Our extensive experience in product development challenges has positioned us as experts in crafting digital solutions and optimizing IT team capabilities.Our journey continues, and we are eager to grow further. If you are goal-oriented and seeking new challenges, we invite you to join our dynamic team! You will find an enriching and collaborative environment that supports your growth at every stage.We are currently seeking a:SENIOR DATA ENGINEERIn this role, you will play a key part in developing a banking application for one of the leading financial institutions in Japan. This platform includes banking modules and data management capabilities that are customer-facing. You will be joining our Data Flow Team, consisting of 20 members, dedicated to integrating internal systems, vendor-hosted solutions, and third-party systems, managing data flows triggered by specific rules or events at varying frequencies. 
Key Responsibilities:Design and implement data processing pipelines tailored to project requirements, incorporating steps for data transformation, validation, and mapping.Develop essential components for connecting with various data sources and destinations, including APIs, SQL databases, S3 buckets, and SFTP servers.Update and modify existing data flows within the ETL tool as necessary.Conduct thorough testing and validation to guarantee the accuracy of data transformations, verifications, and final outputs.Create and execute unit and regression tests.Provide post-deployment support and troubleshoot any issues that arise.
At Samba TV, we are redefining the viewing experience by harnessing the power of our proprietary data and cutting-edge technology to track streaming and broadcast video globally. Our mission is to empower media companies to connect with audiences and advertisers to engage viewers, delivering insights that transform how we experience entertainment.We invite a talented Data Engineer to join our dynamic Data Technology team in Warsaw. You will play a crucial role in building and maintaining our data platform that serves the entire organization — supporting everything from data ingestion and analytics to comprehensive reporting. You will work with cutting-edge technologies like AWS, Databricks, BigQuery, and Snowflake to enhance our data infrastructure.As a self-sufficient contributor, you will take ownership of well-defined pipeline components and features, collaborate effectively with your teammates and cross-functional stakeholders, and navigate the complete data lifecycle. We are looking for candidates with 2–4 years of hands-on experience who are proficient in writing production-quality code and are eager to grow their technical expertise.
Join our innovative team at Inetum Polska as a Data Engineer, where you will utilize your data engineering expertise in a fast-paced environment. Your role will be pivotal in ensuring smooth data migration and optimization for cutting-edge AI and ML projects. Don't miss out on the opportunity to contribute to our groundbreaking initiatives!Key ResponsibilitiesData Pipeline Development:Craft, develop, and implement Python-based ETL/ELT pipelines to facilitate data migration from on-premises MS SQL Server to our Databricks instance,Ensure effective ingestion of historical parquet datasets into Databricks.Data Quality & Validation:Establish validation, reconciliation, and quality assurance protocols to guarantee the accuracy and completeness of migrated data,Manage schema mapping, field transformations, and metadata enrichment to standardize datasets,Integrate data governance, quality assurance, and compliance into all migration processes.Performance Optimization:Optimize pipelines for enhanced speed and efficiency, leveraging Databricks capabilities, including Delta Lake when applicable,Oversee resource utilization and scheduling for large dataset transfers.Collaboration:Coordinate closely with AI engineers, data scientists, and business stakeholders to outline data access patterns needed for upcoming AI POCs,Work alongside infrastructure teams to ensure secure connections between legacy systems and Databricks.Documentation & Governance:Maintain comprehensive technical documentation for all data pipelines,Adhere to best practices for data governance, compliance, and security throughout the migration process.
Join inetum2 as a Technical Leader in Data Engineering and play a pivotal role in driving innovative data solutions. As a key member of our team, you will lead projects that harness the power of data to deliver exceptional results for our clients. Your expertise in data engineering and leadership skills will empower you to mentor junior engineers and shape the future of our data initiatives.
About the Role nix is seeking a Data Quality Automation Engineer to join a team supporting a global client in the insurance and automotive sectors. The client operates across ten countries, serving over 1,200 organizations and managing millions of claims each year. This role contributes to a major transformation program, with a focus on cloud-based software and modern design patterns. What You Will Do Define and maintain data quality rules at all stages: ingestion, transformation, and reporting. Validate data in Databricks pipelines. Monitor and test Databricks transformations using PySpark and SQL to confirm data accuracy and completeness. Check that Databricks and Power BI reports reflect accurate, reconciled data. Set up data validation checks for schema, nulls, duplicates, ranges, and referential integrity. Identify, document, and analyze data quality issues, including root causes. Work closely with data engineers and analysts to resolve issues. Develop automated systems for data quality monitoring and alerts. Requirements At least 4-5 years of experience in data analysis, quality assurance, data governance, or a related area. Strong knowledge of Databricks or Spark, including SQL and PySpark. Experience with ETL/ELT pipelines and data transformation tools (such as dbt). Background in validating BI or reporting outputs, with Power BI preferred. Advanced SQL skills for data validation and reconciliation. Familiarity with data quality frameworks or tools (Great Expectations is a plus). Bonus Skills Experience with AWS data stack. Knowledge of data governance or data catalog tools. Exposure to CI/CD practices for data pipelines. Understanding of data lineage and observability tools. What Success Looks Like Fewer data defects in pipelines and reports. Automated data quality checks in place. Transparent tracking and visibility of data issues. Location This position is based in Poland.
Join our dynamic team at inetum2 as a Junior Data Engineer! In this pivotal role, you will be tasked with monitoring production data pipelines, troubleshooting incidents, enhancing system stability, and ensuring seamless daily data operations. If you are eager to grow your skills in data engineering and contribute to impactful projects, we want to hear from you!
Join our dynamic team at spoton as a Software Engineer specializing in Data and Reporting. In this role, you will be pivotal in designing, developing, and maintaining software solutions that enhance our data processing and reporting capabilities. You will collaborate with cross-functional teams to create robust data pipelines and reporting tools that drive business insights.
Role Overview Miratech is hiring a Data Engineer specializing in Azure Databricks for its Warsaw office. This position focuses on building and maintaining data pipelines and architectures that support reliable, high-quality data. Main Responsibilities Design and develop data pipelines using Azure Databricks Maintain and optimize data architectures to ensure availability and reliability Work with cross-functional teams to turn raw data into insights for decision-making
Join Inetum Polska as a Data Engineer, where you will play a pivotal role in the development and optimization of our data infrastructure. Your responsibilities will include:Designing and maintaining efficient processes to aggregate data from diverse sources into our Data Lake.Creating, developing, and refining complex data pipelines to guarantee a reliable flow of information.Establishing frameworks that support the development of data pipelines.Implementing thorough testing frameworks for data pipelines to ensure data integrity and quality.Collaborating with analysts and data scientists to deliver superior quality data solutions.Overseeing data management practices, ensuring security, compliance, and best practices in governance.Exploring and adopting new technologies to enhance data pipeline performance.Integrating and leveraging data from various source systems, including Kafka, MQ, SFTP, databases, APIs, and file shares.
At Lingarogroup, we champion growth through diversity, equity, and inclusion. Our ethical business practices ensure that we uphold the highest standards of equality, fostering a safe and respectful workplace for all. We firmly believe that a diverse workforce drives both personal fulfillment and business success. Our commitment to creating an inclusive community empowers every individual to thrive, irrespective of their background or identity.
Role Overview Devoteam is hiring a Senior / Lead Data Engineer for its 24x7 Data & AI Factory in Kraków. This position takes ownership of key data engineering projects, guiding efforts that support client growth and innovation. What You Will Do Lead data engineering initiatives across multiple projects Work with advanced technologies to design and implement scalable data solutions Collaborate with teams from different disciplines to optimize data pipelines Enhance AI capabilities by ensuring data integrity and reliability Location This role is based in Kraków.
Join our dynamic team at Miratech as a Middle Data Engineer specializing in Azure Databricks. In this role, you will design, develop, and optimize data pipelines and workflows, utilizing your expertise in data engineering to support our innovative projects. You will work collaboratively with cross-functional teams to ensure data integrity and availability, enabling data-driven decision-making across the organization.
Senior Software Engineer, Data Foundations

ABOUT THE ROLE
At Peloton, we view Data as a Product: a vital asset that drives every member interaction and influences business decisions. Our Datastores team is committed to offering a dependable, secure, and high-performing data persistence layer for application services throughout the organization. Our core principles are:
1. Safeguard the data.
2. Optimize for scalability, speed, and reliability.
3. Minimize manual effort.
We are seeking a Senior Software Engineer with extensive experience in constructing and managing data-intensive systems to join our Datastores team. This position is perfect for a backend engineer who has developed and maintained production systems heavily reliant on databases, caching layers, and data pipelines, and who is eager to advance their knowledge of scalable, cloud-native data infrastructure. You will work at the intersection of application engineering and data platform reliability, partnering with service teams to enhance the storage, access, scalability, and observability of data across Peloton’s ecosystem.

YOUR DAILY IMPACT AT PELOTON
Your responsibilities will include:
- Designing, building, and maintaining backend systems that depend on scalable and highly available data persistence layers.
- Collaborating with service teams to refine database design, enhance query performance, and optimize data modeling.
- Contributing to the automation of infrastructure provisioning using tools like Terraform and Backstage, and improving developer experience by creating self-service tools for databases and caching systems.
- Focusing on observability, performance insights, and autoscaling strategies for production datastores.
- Participating in architectural discussions on multi-regional data persistence and global scalability.
This role merges hands-on backend engineering with significant exposure to cloud data systems and platform reliability challenges.
WHAT YOU BRING
We are looking for proficient backend engineers who are data-centric: engineers who have built systems where data performance, reliability, and modeling are crucial.
- 5+ years of software engineering experience developing production backend systems.
- Strong familiarity with relational databases such as PostgreSQL or MySQL, including schema design, indexing, and query optimization.
- Experience with NoSQL datastores like DynamoDB, Redis, Elasticsearch, or Memcached.
- Exposure to data pipelines, event-driven architectures, and cloud technologies.
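To illustrate the indexing and query-optimization skills listed above, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for PostgreSQL or MySQL. The `rides` table and its columns are hypothetical, chosen only for illustration; the point is how adding an index changes the planner's strategy from a full table scan to an index search:

```python
import sqlite3

# In-memory database with a hypothetical table of workout rides.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rides (id INTEGER PRIMARY KEY, user_id INTEGER, duration_s INTEGER)"
)
conn.executemany(
    "INSERT INTO rides (user_id, duration_s) VALUES (?, ?)",
    [(i % 100, (60 * i) % 3600) for i in range(1000)],
)

query = "SELECT COUNT(*) FROM rides WHERE user_id = 42"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# An index on the filtered column lets the planner seek rows directly.
conn.execute("CREATE INDEX idx_rides_user ON rides (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN rides"
print(plan_after[0][-1])   # e.g. "SEARCH rides USING COVERING INDEX idx_rides_user (user_id=?)"
```

The same habit applies at scale: inspecting the query plan (`EXPLAIN` in PostgreSQL and MySQL) before and after a schema change is the basic loop of query optimization.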
About Box
Box (NYSE: BOX) is a pioneering leader in Intelligent Content Management, enabling organizations to enhance collaboration, manage content lifecycles, secure vital information, and revolutionize business workflows using enterprise AI. Since our establishment in 2005, we have simplified work for prominent global organizations such as JLL, Morgan Stanley, and Nationwide. Our headquarters is in Redwood City, California, with offices across the United States, Europe, and Asia.

By becoming a part of Box, you will play a pivotal role in advancing our platform. Content is the core of our operations, driving the flow of billions of files and information across teams, departments, and crucial business processes every day: contracts, invoices, employee records, financials, product specifications, marketing assets, and more. Our mission is to infuse intelligence into content management, empowering our customers to fundamentally transform workflows across their organizations. With the integration of AI and enterprise content, you will be at the forefront of this significant shift in how the world collaborates.

Your Role
As a Data Engineer III at Box, you will be instrumental in expanding our Data Engineering initiatives. You will contribute to the development of the data platform engineering features and capabilities of our cloud cost management platform.

In this role, you will collaborate with a talented team to build data pipelines and support product and analytics team members, data analysts, and data scientists on various data initiatives, while ensuring that optimal data delivery architecture is maintained across ongoing projects.
Your Responsibilities
- Collaborate with a high-performing team of data engineers and analysts to pinpoint business opportunities and design scalable data solutions.
- Develop and take ownership of data pipelines that clean, transform, and aggregate data from diverse sources.
- Create and sustain optimal data pipeline architecture.
- Assemble complex data sets that satisfy both functional and non-functional business requirements.
- Identify, design, and execute internal process enhancements, including automating manual processes and optimizing data delivery.
Join our innovative team as a Senior Data Engineer and play a pivotal role in enhancing our product platform. We are looking for an experienced professional with a strong background in data engineering to design, implement, and maintain robust data solutions.

Main Responsibilities:
- Architect and develop data solutions that elevate our product platform.
- Define and refine data models tailored for analytical, operational, and predictive applications.
- Engage in critical data architecture and technical infrastructure decisions.
- Guarantee high quality, reliability, and maintainability through best practices in CI/CD and automation.
- Work closely with product owners, developers, and stakeholders to convert business requirements into effective technical solutions.

Technologies Utilized: Azure DevOps, Kubernetes, SQL, ETL.
Join Telemedi as an AI Data Engineer!
Telemedi is a pioneering company in the healthcare sector, committed to employing programmers, doctors, and experts across various fields to develop cutting-edge solutions that enhance patient care. Our mission is to leverage technology to provide everyone with convenient and immediate access to medical services.

In the role of AI Data Engineer, you will collaborate with us to innovate within the telemedicine and insurance industries.

Your Responsibilities:
- Design and implement autonomous workflows that transform raw data into self-validating reports.
- Map business processes and determine which aspects can be automated using AI.
- Build and maintain the analytics layer of our telemedicine platform.
- Replace manual reporting systems in Excel with intelligent, repeatable pipelines.
- Construct architectures for data validation systems and monitor the quality of AI outputs.
The first three responsibilities will constitute 80% of your work time.
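The "self-validating pipeline" idea above can be sketched in a few lines of Python. The record fields and rules below (a hypothetical `consultations` dataset) are illustrative assumptions, not Telemedi's actual schema; the pattern is simply a repeatable step that runs every record through declared checks and splits clean rows from rows needing review:

```python
# Minimal sketch of a self-validating reporting step. Each record passes
# through declared checks; clean rows and rejects are separated, with the
# failing field names attached to rejects for follow-up.
# Field names and rules are hypothetical, for illustration only.

RULES = {
    "patient_id": lambda v: isinstance(v, int) and v > 0,
    "duration_min": lambda v: isinstance(v, (int, float)) and 0 < v <= 180,
    "specialty": lambda v: v in {"gp", "cardiology", "dermatology"},
}

def validate(records):
    """Split records into (valid, rejected-with-reasons)."""
    valid, rejected = [], []
    for rec in records:
        failures = [field for field, check in RULES.items()
                    if not check(rec.get(field))]
        if failures:
            rejected.append({**rec, "_errors": failures})
        else:
            valid.append(rec)
    return valid, rejected

consultations = [
    {"patient_id": 1, "duration_min": 20, "specialty": "gp"},
    {"patient_id": -5, "duration_min": 20, "specialty": "gp"},       # bad id
    {"patient_id": 2, "duration_min": 500, "specialty": "unknown"},  # two failures
]
valid, rejected = validate(consultations)
print(len(valid), len(rejected))  # 1 2
```

Replacing a manual Excel report with a step like this makes the checks explicit and rerunnable, so the same validation runs identically every reporting cycle.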
Join Talan as a Tech Lead in Data Engineering, where you will spearhead innovative data solutions and lead a dynamic team of engineers. This role is pivotal in shaping our data architecture and driving impactful projects that enhance our clients' data capabilities. You will collaborate closely with cross-functional teams, ensuring the effective implementation of data strategies and technologies.
Join the innovative team at Inetum as a talented Data Engineer! We are seeking individuals with a strong background in data engineering to contribute to our exciting Big Data projects. Preferably, you will have hands-on experience with Databricks and the Spark framework.