Experience Level
Senior
Qualifications
Responsibilities:
Design, build, and manage essential data infrastructure platforms, including lakehouse, replication, orchestration, and distributed computing systems.
Ensure optimal availability, scalability, and performance of platform services that support enterprise-level data workloads.
Contribute to architectural decisions that align with modern data platform standards.
Collaborate with engineering, analytics, and product teams to facilitate data platform adoption and migration efforts.
Enhance developer productivity through automation, tooling, and AI-driven workflows.
Participate in on-call rotations and lead troubleshooting for complex production issues.
Create clear and comprehensive documentation and operational runbooks to support platform reliability and usage.
Who You Are:
A minimum of 5 years of experience in infrastructure, platform engineering, or data engineering roles.
Demonstrated ability to design and implement data infrastructure solutions.
Strong skills in data architecture, orchestration, and distributed systems.
Experience collaborating across teams to achieve common goals.
About the job
For over three decades, Angi has been at the forefront of the home services sector, creating a dynamic platform where homeowners and professionals thrive through quality work.
Homeowners can reliably connect with skilled professionals, while our partners benefit from a robust platform that helps them secure meaningful work opportunities. We are committed to fostering a workplace that our employees are proud to call home.
Angi Overview:
Originally founded in 1995 as Angie’s List, rebranded in 2021
A global enterprise with 9 brands across 8 countries and a diverse workforce
Trusted by homeowners for over 300 million home projects and counting
Team Overview:
We are looking for a Senior Data Engineer to join our Data Infrastructure team. This role is pivotal in building and maintaining the core platforms that facilitate data processing, storage, and analytics across the organization. You will focus on enhancing our lakehouse architecture, data replication systems, and orchestration frameworks to ensure scalable, reliable, and efficient data workflows.
This position is remote, but we prefer candidates located in the Eastern Time Zone to align with our team's working hours.
About Angi
Angi has been a leader in the home services industry for over 30 years, connecting homeowners with reliable professionals. With a global presence and a commitment to excellence, Angi is a place where you can thrive in your career and make a meaningful impact.
Full-time|$115K/yr - $205K/yr|Remote|New York - Remote
At Angi®, our mission for the past 30 years has been simple: to ensure that jobs are done right. We connect homeowners with trustworthy professionals who possess the necessary skills, while simultaneously linking these pros with homeowners seeking the jobs they desire.
Angi at a glance:
Homeowners have relied on Angi for over 300 million projects.
We cover more than 1,000 home service tasks.
Our team consists of 2,800 dedicated employees worldwide.
Why join Angi:
Angi® is on a mission to redefine the home services industry, fostering an environment where homeowners, professionals, and employees all benefit from a greater number of jobs completed successfully.
For homeowners, our platform offers a dependable way to locate skilled professionals. For professionals, we act as a trustworthy business partner, helping them discover the work they want when they want it. For our employees, we provide an exceptional workplace that they can proudly call home. We look forward to welcoming you!
About the team:
We are currently searching for a Senior Data Engineer to join our Data Infrastructure team. This individual will play a pivotal role in constructing and managing the foundational platforms that facilitate data processing, storage, and analytics throughout our organization. The focus of this role will be on advancing our lakehouse architecture, data replication systems, and orchestration frameworks, all while ensuring scalable, reliable, and efficient data workflows.
Please note, although this role is remote, we are a global company seeking candidates located in the Eastern Time Zone to align with our team's working hours.
Full-time|$129K/yr - $209K/yr|On-site|Waltham, Massachusetts, United States
Join Our Team
We invite you to become a vital part of Evolv as a Senior Data Infrastructure Engineer within our Machine Learning & Sensors organization. This pivotal role entails the design, construction, and maintenance of robust, secure, and scalable data pipelines that drive our AI/ML research and production systems. You will take charge of the complete data lifecycle, from ingestion across thousands to millions of edge devices, through cloud processing, to a centralized data factory that supports model training, evaluation, and ongoing enhancement.
Data is at the core of our mission to revolutionize AI-based weapon detection systems. Your expertise will ensure seamless data flow across various geographies, devices, and cloud systems, while adhering to stringent standards for quality, privacy, security, and scalability. This position is perfect for someone who is passionate about the intersection of distributed systems, cloud pipelines, and ML-driven data requirements.
Success in the Role: Your First Year
In the first 30 days:
Gain an in-depth understanding of our existing edge-to-cloud data pipelines and deployment environments.
Evaluate current data ingestion processes, governance frameworks, and cloud infrastructure.
Identify challenges related to data reliability, quality, and operational scalability.
Establish rapport with AI/ML, data science, field operations, and cloud engineering teams.
Design and prototype both cloud and edge data processing pipelines.
Within the first three months:
Implement enhancements to critical ingestion, validation, and processing pipelines.
Deploy scalable data pipelines using AWS components such as S3, EC2, Lambda, Glue, Step Functions, and SageMaker integrations.
Develop automated validation workflows to identify data corruption, missing metadata, or malformed data.
Create automated model evaluation, training, and improvement pipelines to accelerate experimentation.
Collaborate with field operations to enhance data reliability, observability, and coverage.
By the end of the first year:
Oversee the entire lifecycle of mission-critical data pipelines that support AI/ML research and production.
Architect advanced edge-to-cloud data systems capable of scaling across millions of devices.
Establish and enforce data governance frameworks, including retention, access control, privacy, and lineage.
Enable ML teams to quickly conduct experiments with high-quality, discoverable, versioned datasets.
About Zaimler
In a world where AI agents struggle to reason over fragmented data, Zaimler emerges as the solution. Our mission is to unify disparate enterprise data across countless systems, providing a shared context, meaning, and structure. This transformation is essential as we transition from traditional copilots to fully autonomous agents, necessitating a new infrastructure layer that we are dedicated to building.
At Zaimler, we are pioneering context infrastructure for the agentic era: a platform that autonomously discovers domain knowledge, maps intricate relationships, and equips AI agents with the semantic understanding required for precise and scalable operations. Envision knowledge graphs that facilitate real-time inference, tailored for systems that need to reason rather than merely retrieve data.
Founded by industry veterans Biswajit Das (former VP Engineering at Truera and Chief Architect at Visa) and Sofus Macskassy (ex-Director of Engineering at LinkedIn), who notably built one of the largest knowledge graphs in production, Zaimler is a small, senior team at the seed stage, collaborating with major enterprises in sectors like insurance, travel, and technology. If you are passionate about creating the infrastructure that will support the next decade of AI advancements, we are eager to connect with you.
The Role
We are in search of a talented Data Infrastructure Engineer to establish the foundational distributed data layer that will power our semantic platform. In this role, you will be responsible for designing, building, and scaling systems that enable high-throughput data ingestion, transformation, and real-time processing.
At Judgment Labs, we specialize in developing cutting-edge infrastructure for Agent Behavior Monitoring (ABM). Unlike conventional observability tools that merely track exceptions and latency, our ABM technology identifies behavioral anomalies, such as instruction drift and context retrieval losses, in large-scale production settings.
Our solutions are trusted by numerous teams working on autonomous agents to gain insights into system behavior post-deployment. Rather than simply reacting to incidents, our clients analyze patterns across conversations and workflows, correlate regressions with specific interaction types, and identify critical points of reliability failure. Recently, we secured over $30 million across two funding rounds from notable investors like Lightspeed, SV Angel, and Valor Equity Partners.
The Role:
We are seeking a Senior Data Infrastructure Engineer to architect and enhance the real-time data pipelines essential for robust agent behavior analysis at scale. This position plays a vital role in processing hundreds of thousands of traces per second, executing LLM-based scoring and clustering in near-real-time, and ensuring low-latency query performance, which allows teams to monitor agent behavior as it unfolds. Ideal candidates will have experience designing petabyte-scale data systems, optimizing OLAP database performance, and managing the full data lifecycle from ingestion to analytics.
What You'll Do:
Design and automate large-scale, high-performance streaming and batch data processing systems to support Judgment's behavioral analysis products.
Collaborate closely with infrastructure and backend teams to enhance scalability, data governance, and operational efficiency.
Promote best practices in software engineering for data infrastructure at scale.
Uphold high standards for data quality and engineering: reliability, efficiency, documentation, testability, and maintainability.
Craft data models for optimal storage and access, ensuring efficient data flows to meet critical product requirements.
Enhance OLAP database performance through careful schema design, partitioning strategies, storage optimization, and access pattern analysis.
Stage is looking for an Infrastructure Data Engineer to join the team in Boston, MA. This position centers on building and maintaining the systems that move and organize data, making analytics and business intelligence possible across the company.
Role overview
The Infrastructure Data Engineer will design, implement, and support scalable data infrastructure. The work ensures that business needs are met as the company grows, and that analytics teams have reliable data to inform decisions.
What you will do
Design and implement data infrastructure to meet evolving business requirements
Build and maintain data pipelines used for analytics and reporting
Support business intelligence by ensuring data systems are reliable and accessible
Requirements
Stage is seeking candidates who have a strong interest in data engineering and are motivated to make a real impact on team and company goals.
Peregrine Technologies, backed by top-tier Silicon Valley investors, empowers public safety organizations, government entities at all levels, and private institutions to effectively tackle societal challenges with unmatched speed and precision. Our innovative AI-driven platform transforms fragmented data into actionable insights, enabling rapid and informed decision-making that enhances outcomes across the board. Currently, we serve hundreds of clients in over 30 states and two countries, impacting more than 125 million lives as we broaden our reach into enterprise solutions and international markets.
Our Engineering Team
We are a team that prioritizes empathy in our engineering solutions. Understanding how our users interact with our products is essential to our process. Engineers will have the chance to collaborate closely onsite, gaining insights into the diverse use cases our platform addresses.
We cherish both ownership and teamwork: you will take full accountability for significant features while working alongside fellow engineers to see them through to completion. We believe humility and empathy are crucial for crafting effective solutions, and you will engage directly with our deployment team and users to iterate and resolve their challenges. Creativity and determination are vital as we pursue our ambitious goals.
Your Role
We are seeking a Staff Data Infrastructure Engineer to join our dynamic team, where you will have substantial ownership over the data ecosystem that supports all of Peregrine's operations. You will design and develop systems that manage, store, and deliver vast amounts of real-time operational data, enabling our customers to make crucial decisions quickly and confidently.
This position is ideal for a seasoned individual contributor who excels at solving complex technical challenges and possesses the expertise to influence foundational infrastructure strategies. You will engage with a variety of intricate issues, including:
Creating and managing a high-throughput, real-time data integration platform across varied customer environments.
Developing a scalable open table format layer for dependable data storage at a petabyte scale.
Building and fine-tuning distributed data processing pipelines utilizing Apache Spark and related streaming technologies.
Enhancing performance, reliability, and cost-effectiveness throughout the entire data infrastructure stack.
Collaborating with platform and product engineering teams to establish data contracts, schemas, and integration pathways.
Join our innovative team at alljoined as a Data Infrastructure Engineer where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
Komodo Health is looking for a Senior Data Engineer to strengthen its data foundations. This fully remote role in the United States centers on building and maintaining the systems that support data-driven work across the company.
Role overview
This position focuses on designing, developing, and maintaining data pipelines and infrastructure. The goal is to make sure data remains accurate and accessible for teams throughout the organization.
Collaboration
Senior Data Engineers at Komodo Health work closely with colleagues from different departments. Cross-functional teamwork is essential to understand data needs and to deliver solutions that support business decisions.
Key responsibilities
Design and develop data pipelines and systems
Maintain and improve data infrastructure
Support data accuracy and accessibility
Collaborate with teams across the organization
About Us:
At Carefeed, we empower senior living and long-term care providers with a comprehensive platform that streamlines operations, enhancing the experience for both staff and residents. Our innovative solution replaces outdated paper processes, phone calls, and disparate systems with an integrated digital approach, allowing teams to focus on what truly matters: caring for residents and their families.
We seamlessly integrate with existing EHR and HR systems, reducing operational strain while keeping communities organized. Carefeed is designed for ease of use, functionality, and the realities of multi-community care, helping providers maintain efficiency and confidence in their operations.
With thousands of communities in the US and Canada relying on us, Carefeed is dedicated to supporting organizations in delivering exceptional experiences for residents, families, and caregivers alike.
About the Opportunity:
We are seeking a Senior Data Engineer to spearhead the development of our data infrastructure and enhance our developer tools. In this pivotal role, you will design and implement robust data systems, develop tools that improve developer productivity, and create the foundational technology for impactful data-driven products.
This position is perfect for an experienced data engineer ready to transition into infrastructure management. Your primary project will involve designing and constructing a central data lake for 2026, alongside the necessary pipelines (e.g., SQS, Kafka) to support it. As the inaugural data engineer at Carefeed, you will play a key role in guiding design and tooling decisions and executing those plans.
As part of our Infrastructure team, you will also engage in standard infrastructure tasks, including observability and deployments.
Join Weave as a Senior Platform Engineer and tackle exciting challenges within our dynamic and self-driven teams. We seek engineers passionate about both technical and creative contributions, eager to make a significant impact.
As part of a talented group of developers, you will delve into the realms of distributed backend systems, data management, scalability, and continuous improvement. Your skills will be instrumental in driving new initiatives and enhancing existing projects, ensuring that our data is more accessible and user-friendly.
Our agile, cross-functional teams consist of product owners, backend and frontend developers, and DevOps professionals. Each team operates with a high level of autonomy, empowered to make decisions that align with Weave's mission and values.
Your contributions will directly influence our customers' experiences with Weave, as you collaborate with a highly skilled team to achieve diverse objectives and foster our exceptional company culture.
The Data Platform Team is dedicated to facilitating product innovation by simplifying the process for developers to create applications that leverage extensive datasets. Our data platform supports numerous core Weave products and features, including auto scheduling, AI/ML capabilities, and real-time notifications. This year, we are particularly focused on enhancing data usability, improving the reliability of our core systems, and refining our tools for seamless security and compliance. This role will require innovative thinking and problem-solving.
This position can be performed either locally or remotely within the US.
Reports to: Engineering Manager
Full-time|$123.7K/yr - $254.7K/yr|Remote|San Francisco, CA, US; Remote, US
About tvScientific
tvScientific is the pioneering CTV advertising platform specifically designed for performance marketers. We harness extensive data and advanced technologies to automate and enhance TV advertising, ultimately driving measurable business results. Our platform seamlessly integrates media buying, optimization, measurement, and attribution into a single, efficient solution. Developed by industry veterans with deep expertise in programmatic advertising, digital media, and ad verification, our CTV performance platform offers advertisers a reliable avenue to expand their business.
As a Senior Data Engineer at tvScientific, you will play a crucial role in establishing the robust data infrastructure that supports our data-intensive operations. You will work in collaboration with cross-functional teams to refine our core data pipelines, ensuring efficient scaling as we grow, and optimizing data storage solutions. This individual contributor role requires you to define and execute a strategic vision for data engineering within the company.
Key Responsibilities:
Develop and implement a robust data infrastructure in AWS, utilizing Spark with Scala.
Enhance our core data pipelines to efficiently accommodate our significant growth.
Optimize data storage solutions in appropriate engines and formats.
Collaborate with cross-functional teams to design data solutions that align with business objectives.
Construct fault-tolerant batch and streaming data pipelines.
Leverage and optimize AWS resources while scaling design.
Work closely with Data Science and Product teams to achieve collective goals.
Success Metrics:
Successful establishment of scalable and efficient data infrastructure.
Timely delivery and optimization of data assets and APIs.
High attention to detail in the implementation of automated data quality checks.
Effective collaboration with cross-functional teams.
What We’re Looking For:
Proven experience in production data engineering.
Expertise in Spark and Scala, with a track record of building data infrastructure using these technologies.
Familiarity with data lakes, cloud warehouses, and various storage formats.
Strong proficiency in AWS services.
Advanced SQL skills for data manipulation and extraction.
Exceptional written and verbal communication abilities.
Bachelor's degree in Computer Science or a related discipline.
Nice to have: Experience with additional data processing frameworks.
Founded in 2007, Airbnb has transformed the way people experience travel, connecting over 5 million hosts with more than 2 billion guests worldwide. Our platform enables unique stays and authentic experiences, fostering connections with local communities.
The Team You Will Join:
As a pivotal member of the Data Warehouse Infrastructure team, you will help shape the backbone of Airbnb's big data capabilities, enabling hundreds of engineers to efficiently collect, manage, and analyze vast amounts of data. We leverage cutting-edge open-source technologies such as Hadoop, Spark, Trino, Iceberg, and Airflow.
Typical Responsibilities:
Design and architect Airbnb's next-generation big data compute platform to enhance data ETL, analytics, and machine learning efforts.
Oversee the platform's operations, focusing on improving reliability, performance, observability, and cost-effectiveness.
Create high-quality, maintainable, and self-documenting code while engaging actively in code review processes.
Contribute to open-source projects, making a significant impact on the industry.
At Rhoda AI, we are pioneering the development of a comprehensive technology stack for the future of humanoid robotics. Our focus ranges from high-performance, software-defined hardware to cutting-edge foundational models and video world models that govern these systems. Our robots are engineered as versatile generalists, adept at navigating complex, real-world scenarios that extend beyond conventional training environments. Collaborating at the forefront of large-scale learning, robotics, and systems, our research team comprises distinguished experts from renowned institutions such as Stanford, Berkeley, and Harvard. With an impressive funding of over $400 million, we are committed to substantial investments in research and development, hardware innovation, and the scaling of manufacturing processes to bring our vision to life.
Position Overview
We are currently seeking a Senior ML & Data Infrastructure Engineer to take ownership of and enhance our data model training pipeline. This role encompasses the entire lifecycle, from raw data ingestion and storage to sophisticated indexing, retrieval, and throughput optimization at an unprecedented scale.
Key Responsibilities
Design, develop, and scale a robust data infrastructure capable of processing and managing billions of video clips while ensuring reliability, low latency, and cost-effectiveness.
Create and optimize large-scale storage solutions, including cloud object storage and databases, tailored for multimodal datasets.
Develop high-performance indexing and retrieval systems to facilitate rapid dataset querying, filtering, and iteration for both research and production applications.
Establish observability frameworks for data pipelines that encompass monitoring, alerting, failure recovery, and performance enhancements.
Implement intelligent workload distribution and throughput enhancements across distributed compute and storage infrastructures.
Oversee data artifacts, versioning, and lineage to guarantee reproducibility and traceability throughout training cycles.
Create user-friendly internal interfaces and lightweight tools that empower researchers and engineers to explore, query, and analyze extensive datasets efficiently.
Facilitate the integration and scalable deployment of vision-language models (VLMs) within data pipelines for purposes such as screening, enrichment, or metadata generation.
Qualifications
A minimum of 5 years of experience in data infrastructure, distributed systems, machine learning infrastructure, or a closely related field.
Proven expertise in developing and managing large-scale data pipelines and storage solutions.
Strong programming skills in languages such as Python, Java, or Scala, and proficiency with data processing frameworks.
Experience with cloud-based storage solutions and databases, as well as knowledge of multimodal data management.
Ability to work collaboratively in a fast-paced, innovative environment.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco, CA
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with unparalleled speed and precision. Our cutting-edge AI-enabled platform transforms fragmented and isolated data into actionable operational intelligence, delivering crucial insights that enhance decision-making processes and improve outcomes across various scenarios. Currently, we proudly serve hundreds of clients across over 30 states and two countries, impacting more than 125 million lives, and we are poised for further growth as we expand into the enterprise sector and international markets.
Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses.
We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.
Role
We are in search of a Staff Data Infrastructure Engineer to join our dynamic team. In this role, you will take full ownership of the data layer that is foundational to all of Peregrine's operations. You will design and construct the systems responsible for ingesting, storing, and serving vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently.
This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and possesses the experience and judgment necessary to influence key infrastructure decisions. You will engage with a variety of intricate challenges, including:
Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
Full-time|$153K/yr - $222K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technologies. Established in 2017 and currently valued at $15 billion, this Silicon Valley powerhouse is dedicated to creating the essential digital infrastructure that empowers intelligence in every moving machine globally. Our solutions cater to key sectors including automotive, defense, trucking, construction, mining, and agriculture, with a focus on tools and infrastructure, operating systems, and autonomy. Trusted by 18 of the top 20 global automakers, along with the United States military and its allies, Applied Intuition is headquartered in Sunnyvale, California, with a global presence in cities including Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.
Our company thrives on in-office collaboration, and we expect our employees to primarily work from their respective Applied Intuition offices five days a week. We understand the need for flexibility, allowing for responsible management of schedules, including occasional remote work, starting the day with morning meetings from home, or leaving early to accommodate family commitments.
About the Role
We are seeking talented infrastructure engineers with a deep understanding of scaling open-source data infrastructure to join our Data & ML Infrastructure group. This dynamic role involves engaging with the entire data lifecycle, from collection, ingestion, and storage to querying and retrieval. You will collaborate closely with various business units to design and develop both internal and external products. Managing vast amounts of data to meet the demands of Applied Intuition's platform is critical, and we need a proactive individual who can actively support our data products and verticals across the organization.
At Applied Intuition, we encourage our engineers to take ownership of technical and product decisions, actively engage with both internal and external users for feedback, and contribute to a vibrant, collaborative team culture.
Join our dynamic team at Suno as a Senior / Staff Software Engineer in Data Infrastructure. In this role, you will be instrumental in shaping the future of our data systems, ensuring they are robust, scalable, and efficient. Collaborate with cross-functional teams to design and implement innovative solutions that drive our business forward.
Roblox Corporation seeks a Senior Software Engineer focused on Data Infrastructure and Safety in San Mateo, CA. This position plays a key part in maintaining the reliability and performance of the Roblox platform, with a strong focus on user protection and a secure environment.
Role overview
This engineer will design and build scalable data infrastructure to support Roblox’s continued growth. The work centers on improving data quality and reliability, ensuring the platform remains robust as user numbers increase. Collaboration with teams from various disciplines is essential to identify, investigate, and resolve safety-related issues. System architecture decisions made in this role will directly influence user safety and experience.
Responsibilities
Develop and implement scalable data infrastructure solutions for the Roblox platform
Enhance data quality and reliability across systems
Work with cross-functional teams to address and resolve safety issues
Contribute to architectural decisions that impact user safety and overall experience
Requirements
Significant experience in software development, data management, and system architecture
Proven ability to design solutions that scale with the platform’s growth
Strong collaboration skills, especially for addressing safety concerns across teams
This role directly influences the safety and experience of millions of Roblox users, supporting the company’s ongoing commitment to a secure and engaging platform.
Role overview
FullStory is looking for a Senior Software Engineer to join the Data Infrastructure and AI team in Atlanta. This role centers on building and maintaining the systems that power AI-driven products.
What you will do
Design and implement data solutions that enable AI features
Collaborate with teammates to strengthen and refine data architecture
Focus on making data systems scalable, reliable, and high-performing
Team and impact
Work alongside engineers committed to creating solid foundations for AI capabilities. The team’s efforts directly influence how FullStory provides insights and value to its customers.
About DatologyAI
At DatologyAI, we believe that the quality of training data is vital for the performance of AI models. Our innovative data curation suite is designed to automatically optimize petabytes of data, ensuring that your models are trained on the most relevant and effective datasets. By utilizing our curated data, users can experience training times that are 7-40 times faster and enhance model performance as if they had trained on more than 10 times the amount of raw data, all while reducing deployment costs significantly.
With $57.5 million raised across two funding rounds, our esteemed investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, and AI pioneers such as Geoff Hinton, Yann LeCun, and Jeff Dean. Our expert team is dedicated to simplifying the complex task of data curation, empowering anyone to train their models effectively on their own data.
This position is located in Redwood City, CA, and we work in-office four days a week.
Mar 20, 2025