Databricks
Mountain View, California; San Francisco, California
Full-time|On-site
Experience Level
Manager
Qualifications
- Proven experience in engineering management, preferably in data engineering or related fields.
- Strong understanding of data pipeline architectures and technologies.
- Excellent leadership and communication skills, with the ability to inspire and motivate teams.
- Experience with cloud platforms and big data technologies.
- Ability to drive projects from conception to completion with a focus on quality and performance.
About the job
Databricks is seeking an experienced Engineering Manager to lead our Pipelines Engine team. In this role, you will oversee the development and optimization of our data pipeline infrastructure, ensuring that we deliver high-performance solutions that meet the needs of our clients. You will collaborate with cross-functional teams to drive innovation and maintain our leadership in the data analytics space.
About Databricks
At Databricks, we are at the forefront of data analytics and machine learning innovation. Our collaborative culture fosters creativity and empowers our employees to make a difference. Join us in our mission to simplify the complexities of big data and help organizations succeed through data-driven decision-making.
Are you ready to take your software engineering skills to the next level? Join Amplitude as a Senior Software Engineer specializing in Data Pipeline. In this pivotal role, you will be responsible for designing, implementing, and maintaining robust data pipelines that drive our analytics platform. You will work alongside a talented team of engineers and contr…
Full-time|$103.5K/yr - $196K/yr|Hybrid|San Francisco
About Our Organization:
Welcome to Scribd Inc. (pronounced “scribbed”), where our passion lies in igniting human curiosity through storytelling and knowledge-sharing. We invite you to join our dynamic team as we work towards democratizing the exchange of ideas and empowering collective expertise with our innovative products: Everand, Scribd, Slideshare, and Fable. This job posting represents an established opportunity within our organization.

At Scribd, we cultivate a culture where authenticity and boldness thrive. We value open discussion and commitment as we embrace the unexpected, empowering every employee to take initiative while keeping our customers at the forefront.

We believe in a balanced approach to work structure, merging individual flexibility with community engagement. Our Scribd Flex program allows employees, in collaboration with their managers, to choose work styles that best suit their needs. Because intentional in-person gatherings foster collaboration and connection, occasional in-person attendance is a requirement for all Scribd employees, regardless of their remote status.

What do we seek in our new teammates? We prioritize candidates who embody “GRIT” – a blend of passion and perseverance towards long-term goals. At Scribd, we encourage a GRIT-driven approach to work, where the ability to set and achieve Goals, deliver Results, contribute Innovative ideas, and positively impact the Team through collaboration is essential.

About the Team:
Our ML Data Engineering team is at the forefront of metadata extraction, enrichment, and content understanding across all Scribd products. We manage vast volumes of documents and images, ensuring high-quality metadata that enhances content discovery and builds trust among millions of users around the globe. Our systems operate at massive scale, incorporating diverse datasets like user-generated content, ebooks, audiobooks, and more. We work at the convergence of machine learning, data engineering, and distributed systems, collaborating closely with applied research and product teams to deploy scalable ML solutions.
Full-time|$103.5K/yr - $196K/yr|Hybrid|San Francisco
At Scribd, Inc., we aim to enhance human understanding. Our innovative products—Scribd®, Slideshare®, Everand™, and Fable—empower billions globally not just to access knowledge, but to apply it and achieve expertise.

Company Culture
We foster an environment where our employees can be authentic and courageous, engaging in constructive debate while embracing unexpected challenges. Every team member is encouraged to take initiative, prioritizing customer needs. We understand that optimal performance arises from a mix of personal flexibility and meaningful community interaction. Our Scribd Flex program allows team members to choose their working style and location, while still emphasizing the in-person collaboration that enriches our culture. Attendance at occasional in-person events is required for all employees, regardless of their location.

We seek team members who embody “GRIT”—a blend of passion and perseverance towards long-term objectives. This ethos informs our approach to setting and achieving Goals, delivering impactful Results, fostering Innovation, and enhancing our Team dynamics through collaboration. This posting represents an open position within our organization.

About Our Team:
Our ML Data Engineering team is responsible for metadata extraction, enrichment, and content comprehension across all Scribd offerings. We handle hundreds of millions of documents and billions of images, providing high-quality metadata that supports content discovery and trust for millions of users worldwide. Operating at massive scale, our systems handle diverse datasets, including user-generated content (UGC), ebooks, audiobooks, and more. We collaborate closely with applied research and product teams to deploy scalable machine learning and large language model (LLM)-powered solutions in production.

Role Overview:
We are looking for a Software Engineer II with strong backend development expertise and a keen interest in addressing complex data challenges at scale. You will design, build, and optimize distributed systems that extract, enrich, and process metadata for a variety of content, collaborating closely with ML engineers, product managers, and cross-functional teams.
About Alembic
Alembic is at the forefront of transforming marketing strategies, demonstrating the actual ROI of marketing initiatives. Our Alembic Marketing Intelligence Platform employs advanced algorithms and AI models to address this longstanding challenge. By joining our team, you'll help build tools that deliver unparalleled insight into how marketing influences revenue, empowering a growing roster of Fortune 500 companies to make data-driven decisions with confidence.

About the Role
As a Senior Data Engineer at Alembic, you will play a crucial role in our data platform. You will build scalable, dependable data pipelines, optimize storage solutions, and enable both real-time and batch analytics. Collaborating closely with data scientists, software engineers, and product leaders, you will design and implement robust data architectures that propel our mission forward.

Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines that efficiently ingest, process, and transform large volumes of structured and unstructured data.
- Optimize data storage solutions using modern data lakehouse architectures and industry best practices to improve cost-effectiveness, performance, and reliability.
- Collaborate with data scientists and engineers to integrate machine learning models and analytical workloads into production environments.
- Ensure data integrity, quality, and security by implementing monitoring, alerting, and governance best practices.
- Work with cloud-based data warehouses and distributed data processing frameworks to support our data initiatives.
- Continuously assess and adopt new technologies to improve data infrastructure and operational efficiency.

What We’re Looking For
- 10+ years of experience in data engineering, software engineering, or a related field.
- Strong proficiency in SQL and Python for data processing.
- Experience with contemporary data warehousing and lakehouse solutions (e.g., Iceberg or similar).
- Expertise in distributed systems and big data technologies (Apache Spark, Hadoop, Kafka, Flink).
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and related data services.
- Deep understanding of data management and governance practices.
Full-time|$185K/yr - $235K/yr|On-site|San Francisco
About Stand Insurance
Stand Insurance is rethinking how property risks are understood and managed. By combining advanced physics with artificial intelligence, the team models catastrophic risks at the asset level and automates underwriting and risk mitigation before losses happen. Instead of simply delivering insurance, Stand builds a scalable risk engine that aims to deliver real-world impact and stay in markets where others exit. Traditional property insurance often relies on outdated data and manual workflows, accepting damage as a given. Stand takes a different path: simulating real-world catastrophes for individual properties, turning those simulations into actionable steps, and automating operations around those insights. The result is a platform that can underwrite risks others avoid, while reducing operational friction.

Role Overview: Machine Learning Engineer – Data Pipeline
This role centers on building and maintaining the tools behind Stand’s data annotation pipeline. Areas of focus include computer vision, human-in-the-loop management, quality assurance, and economic optimization. The main goal: increase automation and lower cost-per-policy, while keeping quality high. Early on, work will involve hands-on management of the pipeline, quality checks, and close coordination with the annotation team. As experience grows, the focus will shift to developing advanced data science and machine learning systems, especially around quality instrumentation, automated QA, predictive labeling, and computer vision models. Over time, the role will evolve into shaping a systems-driven, automation-focused framework for the entire annotation lifecycle.

Key Responsibilities
Pipeline Operations and Reliability
- Monitor and maintain the daily health of the annotation pipeline
- Set up escalation protocols and frameworks for categorizing failures
- Lead the transition from manual to automated operations
Quality Instrumentation
- Design validation systems that align with downstream model metrics
- Develop anomaly detection models for annotation workflows
- Automate tasks to cut down on manual QA effort
Vendor and Annotator Performance
- Define and track performance metrics for vendors and annotators

Location
San Francisco
Full-time|$139K/yr - $223K/yr|On-site|San Francisco, California
About Us
At Aurora, we are on a mission to make self-driving technology safe, swift, and accessible to everyone. The Aurora Driver is set to usher in a new era of mobility and logistics, fostering a future that is not only safer but also more efficient and accessible. Joining Aurora means tackling complex challenges alongside a team of dedicated and talented individuals, enhancing your expertise while broadening your knowledge base. For the latest updates from Aurora, visit aurora.tech or connect with us on LinkedIn.

At Aurora, we seek out talented individuals from diverse backgrounds eager to contribute to a transportation ecosystem that enhances road safety, ensures timely delivery of essential goods, and promotes efficient and accessible mobility for all. We are currently looking for a Graphics Pipeline Engineer.

Key Responsibilities:
- Lead the technical execution of cross-functional projects, translating stakeholder needs into robust code while exemplifying engineering best practices.
- Design and implement foundational Python frameworks, services, and APIs that underpin our synthetic data ecosystem. This is a hands-on role requiring frequent coding and deployment.
- Champion the adoption and standardization of USD as the foundational data backbone for our pipeline.
- Serve as the lead developer and subject matter expert for our most intricate pipeline challenges, troubleshooting complex technical issues and engineering scalable solutions.
Full-time|$164K/yr - $227K/yr|On-site|San Francisco, CA, USA
Role overview
Chime’s Data Engineering team develops the systems that power data-driven decisions across the company. Senior Data Engineers play a key role in designing and implementing scalable data pipelines and frameworks, making sure analytics remain reliable and well-governed. This work supports teams across Chime as they build new capabilities and improve how data informs business choices.

What you will do
- Build and maintain scalable data pipelines and frameworks to support analytics
- Create solutions that keep data accessible, accurate, and governed
- Design workflows for analytics and reporting used throughout the organization
- Help shape data engineering practices that can influence fintech standards

Compensation and benefits
The base salary for this Senior Data Engineer position ranges from $164,000 to $227,000. Full-time employees are also eligible for bonuses, equity options, and a comprehensive benefits package. Final salary depends on skills, qualifications, and experience.
Full-time|On-site|CA - San Francisco; WA - Seattle; UT - Cottonwood Heights
Join SoFi as a Senior Software Engineer on our Data Foundations team, where you will play a pivotal role in shaping our data architecture and enhancing our data-driven capabilities. You will work closely with cross-functional teams to develop robust data solutions that empower our business decisions and improve customer experiences.

As a Senior Software Engineer, you will leverage your expertise in data engineering, software development, and cloud technologies to build scalable data pipelines and maintain high-quality data infrastructure. Your contributions will directly impact our ability to deliver innovative financial solutions.
Full-time|$150K/yr - $190K/yr|On-site|San Francisco
About Probably Genetic
Probably Genetic is revolutionizing the lives of patients with severe and complex diseases. Our advanced data platform empowers drug developers and patient advocacy organizations to create and launch innovative treatments. By leveraging cutting-edge technology, we identify undiagnosed patients online, analyze their conditions using machine learning and home testing, and facilitate compliant communication with them. Our mission is to ensure that patients gain access to diagnoses, clinical trials, and treatments at the earliest opportunity.

We are a dedicated team of passionate problem solvers, driven by a purpose that transcends individual interests. By prioritizing patient welfare, we are developing groundbreaking solutions in healthcare, with a roadmap full of innovations in bioinformatics, AI, and drug development. We invite you to join our lean, talented team and contribute to our vision. Probably Genetic has secured multiple funding rounds from top-tier Silicon Valley investors, including Threshold, Khosla, and Y Combinator. We offer competitive salaries, comprehensive benefits, and meaningful equity opportunities for early-stage team members.

About the Role
We are seeking a founding Data Engineer who is enthusiastic about shaping the future of data utilization to enhance patient outcomes. In this pivotal role, you will establish our data engineering architecture and build the pipelines that drive internal insights and commercial data products. Your contributions will be instrumental in fostering clarity, impact, and growth throughout our organization.

What You Will Do
- Collaborate closely with the Head of Engineering and Head of Product to transform complex data challenges into elegant, scalable solutions.
- Build reliable, maintainable infrastructure on AWS using Terraform to accommodate our expanding data requirements.
- Design data tables and pipelines tailored to the specific needs of our customers and internal teams.
- Implement state-of-the-art data pipelines with built-in observability from day one.
- Analyze and visualize data using BI tools to inform business decisions and provide customized insights to clients.
- Communicate your work and its impact across teams — presenting findings, receiving feedback, and continuously improving processes.

Who You Are
We are eager to connect with candidates from diverse backgrounds who are committed to learning, growth, and making a meaningful impact. A few attributes that will enable you to thrive in this role:
- Proficiency in data engineering principles and practices.
- Experience with cloud platforms, particularly AWS.
- Strong analytical skills and familiarity with data visualization tools.
- Ability to collaborate effectively in a team environment.
About World Labs:
At World Labs, we are pioneers in building foundational world models that can perceive, generate, reason, and engage with the 3D environment. Our mission is to unlock the full potential of artificial intelligence through spatial intelligence, transforming vision into action, perception into reasoning, and imagination into creativity. We believe that spatial intelligence will pave the way for new forms of storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical realms. Our team is composed of exceptional talent united by a shared curiosity and passion for technology, ranging from AI research to systems engineering and product design. Together, we create a dynamic feedback loop between our cutting-edge research and the innovative products that empower our users.

Role Overview
We are in search of a dedicated 3D Data Pipeline Engineer to design, build, and manage the critical systems that power high-quality 3D data processing, synthetic data generation, and rendering across our suite of products. This hands-on role is ideal for someone enthusiastic about large-scale 3D data, system performance, and building reliable data pipelines that enhance our product features. In this position, you will collaborate closely with product engineers, 3D artists, and research scientists to develop efficient, robust, and scalable data pipeline capabilities while ensuring high data integrity and performance in our fast-paced startup environment.
Full-time|Remote|Denver, Colorado, United States; San Francisco, California, United States
Join Checkr as a Senior Data Engineer and play a pivotal role in shaping our data infrastructure and analytics capabilities. In this position, you will collaborate with cross-functional teams to design, develop, and maintain scalable data processing systems that empower our business decisions. If you are passionate about harnessing the power of data and thrive in a dynamic environment, this is the opportunity for you!
Senior Software Engineer - Data Acquisition

Overview:
Join the Data Acquisition team at OpenAI, where we spearhead the data collection efforts essential to powering our advanced model training operations. Our team manages web crawling and GPTBot services, collaborating closely with the Data Processing, Architecture, and Scaling teams. We are seeking a talented Senior Software Engineer to advance our Data Acquisition initiatives.

Key Responsibilities:
- Lead engineering projects focused on data acquisition, including web crawling, data ingestion, and search optimization.
- Collaborate with cross-functional teams to maintain seamless data flow and system performance.
- Engage with the legal team to navigate compliance and data privacy regulations.
- Design and implement robust distributed systems capable of processing petabytes of data.
- Develop algorithms for efficient data indexing and search functionality.
- Build and sustain backend services for data storage, including key-value databases and data synchronization.
- Deploy solutions in a Kubernetes Infrastructure-as-Code environment and conduct regular system audits.
- Run experiments on data to derive insights that drive system improvements.

Qualifications:
- Bachelor's, Master's, or PhD in Computer Science or a related discipline.
- A minimum of 6 years of professional experience in software development.
- Prior experience with large-scale web crawlers is a significant advantage.
- In-depth knowledge of large stateful distributed systems and data processing techniques.
- Expertise in Kubernetes and familiarity with Infrastructure-as-Code practices.
- A proactive approach to exploring new technologies and methodologies.
- Strong ability to juggle multiple tasks and adapt to changing priorities.
- Excellent communication skills, both written and verbal.

About OpenAI:
OpenAI is at the forefront of artificial intelligence research and deployment, dedicated to ensuring that the benefits of general-purpose AI are shared by all of humanity. We strive to push the boundaries of innovation while adhering to ethical standards.
Full-time|Remote|Denver, CO; San Francisco, CA; New York, NY; Los Angeles, CA; Seattle, WA; Toronto, Ontario, CAN - Remote
Join our team as a Senior Software Engineer focused on our Data Platform at Gusto. In this role, you will leverage your expertise in designing and building scalable data systems that drive business decisions and enhance our platform's capabilities. Your contributions will directly impact our ability to deliver exceptional services to our customers.

You will collaborate with cross-functional teams, ensuring that our data architecture is robust, efficient, and aligned with our business goals. If you are passionate about data engineering and are looking for a place to innovate and grow, we want to hear from you!
Full-time|On-site|San Francisco, CA, United States
At Ripple, we are on a mission to transform the way value is exchanged globally, making it as seamless as information transfer. Our innovative crypto solutions empower financial institutions, businesses, governments, and developers, promoting a more equitable financial system while creating opportunities for individuals across the globe. Joining us means being part of an impactful journey where you can hone your skills and collaborate with a supportive team.

If you are eager to make a significant impact and explore exciting career advancement opportunities, we invite you to join us in building real-world value.
Full-time|$181.2K/yr - $217.5K/yr|On-site|Denver, CO; San Francisco, CA
At Fastly, we empower people to connect more effectively with the things they cherish. Our edge cloud platform enables customers to swiftly, securely, and reliably craft exceptional digital experiences by processing, serving, and safeguarding their applications as close to their end users as possible — right at the edge of the Internet. Built for modern internet demands, our platform is programmable and supports agile software development. We proudly serve many of the world's leading companies, including GitHub, Yelp, Paramount, and JetBlue. Join us in our mission to build a more trustworthy Internet.

Posting Open Date: Feb. 25, 2026
Anticipated Posting Close Date*: March 25, 2026
*Please note that this job posting may close early depending on the volume of applications.

Role Overview:
The Data Reliability team is seeking an experienced Senior Software Engineer to contribute to the development and support of next-generation data storage solutions at Fastly. The ideal candidate will have expertise in backend and data services in cloud environments, proficiency with configuration and orchestration tools such as Terraform and Kubernetes, and the ability to create internal administration tools using Go and related technologies. Our team ensures the infrastructure, orchestration, and reliability of Fastly's most data-intensive applications, using technologies like Terraform, Elasticsearch, ClickHouse, Prometheus, MySQL, and Redis across both cloud and hardware platforms. Your contributions will directly enhance our customers' success by giving product teams a robust platform for efficient, consistent delivery of high-quality, high-throughput, globally distributed data systems and products. We embrace a distributed work model and value both collaborative and asynchronous communication styles.

Key Responsibilities:
- Deploy, support, and maintain critical data storage systems, scaling from gigabytes to petabytes.
- Develop statistics and dashboards to track service-level objectives for these systems.
- Create and manage tools for configuration, backup, and authenticated access to data systems, employing peer review, CI/CD, and both daemon- and container-based deployment strategies.
- Write high-performance, maintainable, and concise code, actively participating in code reviews to improve the codebase.
Full-time|$166K/yr - $225K/yr|On-site|San Francisco, California
At Databricks, we are driven by a passion for empowering data teams to tackle the world’s most challenging problems — from transforming transportation to accelerating medical innovation. We achieve this by creating and maintaining the leading data and AI infrastructure platform, enabling our clients to leverage deep data insights to improve their businesses. Founded by engineers with a customer-first mentality, we embrace every opportunity to tackle complex technical challenges, from designing next-generation UI/UX for data interactions to scaling our services across millions of virtual machines. Our journey has just begun.

As a member of the Runtime team at Databricks, you will help develop the next generation of distributed data storage and processing systems. These systems will surpass specialized SQL query engines in relational query performance while offering the programming abstractions necessary to support a variety of workloads, from ETL to data science.

Example projects include:
- Apache Spark™: Contribute to the de facto open-source standard framework for big data.
- Data Plane Storage: Develop reliable, high-performance services and client libraries for managing vast amounts of data in cloud storage backends such as AWS S3 and Azure Blob Store.
- Delta Lake: Design a storage management system that merges the scalability and cost-effectiveness of data lakes with the performance and reliability of data warehouses, providing features like ACID transactions and time travel.
- Delta Pipelines: Simplify the orchestration and operation of numerous data pipelines, enabling clients to deploy, test, and upgrade pipelines effortlessly.
- Performance Engineering: Create a next-generation query optimizer and execution engine that is fast, scalable, and robust.
About Condor
At Condor, we are transforming the financial infrastructure of clinical development. While substantial investments are made annually to discover and develop new therapies, the processes behind these advancements often remain outdated and disconnected. Our mission is to bridge this gap, creating a cohesive system that integrates clinical operations, vendor activities, and financial data into a real-time intelligence layer. This gives R&D and finance teams the insights they need to make informed decisions.

Our AI-driven, pharma-native infrastructure is designed to scale industry standards that we have helped shape alongside major partners. We enable prediction, control, and execution in some of the most complex R&D environments globally. As we continue to earn the trust of enterprise teams, we are now focused on the critical task of scaling our operations in a high-stakes environment. Condor is a rapidly growing company, backed by leading institutional investors such as Felicis and 645 Ventures, and works with top-200 biopharma companies. This is a unique opportunity to contribute to the infrastructure that influences how new therapies reach patients.

The Role
We are seeking a Senior Backend and Data Platform Engineer to help build the foundational data infrastructure for Condor’s financial intelligence platform. This position is pivotal in turning complex clinical and financial data into actionable intelligence that enterprise biopharma teams can rely on. You will design and manage the core data foundations that underpin Condor’s financial engine and AI capabilities: modeling intricate, high-stakes data, constructing reliable data pipelines and services, and ensuring that product features and intelligence workflows run with precision, consistency, and scalability. The systems you develop will directly support critical finance and operational applications.

This hands-on, senior engineering position comes with significant ownership. You will work on backend services, data pipelines, and APIs, bringing features from concept to production, and you will define the data schemas, transformations, and architectural patterns that become essential as our platform evolves. Although your primary focus will be backend and data engineering, you will also be encouraged to work across the stack to ensure seamless integration of data and intelligence.
Full-time|$160K/yr - $210K/yr|On-site|New York, NY, San Francisco, CA or Los Angeles, CA
The Opportunity
At Enigma, we are at a pivotal moment of growth, receiving enthusiastic feedback from clients about the substantial value our product provides. This feedback drives an urgent need to effectively present the capabilities of our small business data as we expand our sales and marketing efforts.

The Role
We are seeking a skilled Senior Software Engineer to join our API and Data Delivery Team. In this position, you will design, build, and maintain essential systems for processing and delivering vast datasets, collaborating with both teammates and clients to address impactful, real-world challenges.

What You’ll Do
- Develop scalable, highly available, high-throughput systems deployed in cloud environments.
- Tackle challenges involving containers, cloud infrastructure, and infrastructure as code (primarily Docker, AWS, and Terraform).
- Bring a proactive attitude that embraces challenges, regardless of their size.
- Take pride in writing clean, well-tested, and maintainable code.
- Thrive when collaborating as part of a motivated and cohesive team.
- Identify and address problems that may go unnoticed by others.
- Be driven to create tangible impact for our customers.
- Inspire your colleagues to excel while fostering a collaborative and supportive team culture.
- Manage responsibilities spanning architecture, design decisions, hands-on implementation, team organization, and technical mentorship.

What Makes This Role Exciting?
- Impact: Your technical expertise and decision-making will directly influence our customers and the success of our product, affecting critical choices at multi-billion-dollar firms.
- Technical Challenge: Engage with cutting-edge technology around databases, information retrieval, distributed systems, microservices, elastic scaling, data pipelines, and more.
- Ownership: The API & Data Delivery team is addressing some of the world's most complex challenges. The ideal candidate is an engineer eager to expand their responsibilities and collaborate with the team to create significant technical and business impact.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco, CA
Peregrine Technologies builds AI-powered tools for public safety, government, and private organizations. Our platform connects fragmented data, giving clients immediate access to information that supports better decisions and outcomes. Today, we serve hundreds of clients across more than 30 states and two countries, reaching over 125 million people. Backed by leading Silicon Valley investors, we are expanding into enterprise and international markets.

About the Engineering Team
Peregrine’s engineers focus on building solutions with empathy for users. Understanding real-world product use is central to our approach, and engineers work closely with onsite colleagues to learn about the varied needs our clients face. Collaboration and hands-on problem solving are core to how we work.

Role Overview: Senior Software Engineer, Data Governance
This role sits within our core engineering group. The team tackles projects such as enabling real-time collaboration on complex maps and building backend systems that handle billions of data points at scale. The Data Governance team designs and builds services, systems, and product features that help clients manage their data assets from start to finish within Peregrine. The team’s work centers on secure, precise data access and strong audit controls.

Location
San Francisco, CA
Apr 16, 2026