Mercury
San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States
Remote | Full-time
Experience Level
Senior
Qualifications
The ideal candidate will have:
A strong background in infrastructure engineering, with at least 5 years of experience designing and managing cloud-based systems.
Proficiency in scripting and programming languages such as Python, Bash, or Go.
Experience with CI/CD pipelines and infrastructure as code (IaC) tools, such as Terraform or Ansible.
A deep understanding of networking, security, and database management.
Excellent problem-solving skills and the ability to work collaboratively in a team environment.
About the job
Join Mercury as a Senior Infrastructure Engineer, where you will be pivotal in shaping the infrastructure that supports our innovative financial solutions. You will work closely with cross-functional teams to design, implement, and maintain scalable and reliable infrastructure systems. This role is ideal for individuals who thrive in a fast-paced environment and are passionate about leveraging technology to drive business success.
About Mercury
Mercury is a cutting-edge financial technology company that empowers startups with innovative banking solutions. We focus on providing our clients with the tools they need to succeed in a fast-evolving digital landscape. Join us to be part of a dynamic team that values creativity, collaboration, and customer-centric solutions.
At Judgment Labs, we specialize in developing cutting-edge infrastructure for Agent Behavior Monitoring (ABM). Unlike conventional observability tools that merely track exceptions and latency, our ABM technology identifies behavioral anomalies, such as instruction drift and context retrieval losses, in large-scale production settings. Our solutions are trust…
Join our innovative team at alljoined as a Data Infrastructure Engineer where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
Founded in 2007, Airbnb has transformed the way people experience travel, connecting over 5 million hosts with more than 2 billion guests worldwide. Our platform enables unique stays and authentic experiences, fostering connections with local communities.
The Team You Will Join:
As a pivotal member of the Data Warehouse Infrastructure team, you will help shape the backbone of Airbnb's big data capabilities, enabling hundreds of engineers to efficiently collect, manage, and analyze vast amounts of data. We leverage cutting-edge open-source technologies such as Hadoop, Spark, Trino, Iceberg, and Airflow.
Typical Responsibilities:
Design and architect Airbnb's next-generation big data compute platform to enhance data ETL, analytics, and machine learning efforts.
Oversee the platform's operations, focusing on improving reliability, performance, observability, and cost-effectiveness.
Create high-quality, maintainable, and self-documenting code while engaging actively in code review processes.
Contribute to open-source projects, making a significant impact on the industry.
Full-time|$123.7K/yr - $254.7K/yr|Remote|San Francisco, CA, US; Remote, US
About tvScientific
tvScientific is the pioneering CTV advertising platform specifically designed for performance marketers. We harness extensive data and advanced technologies to automate and enhance TV advertising, ultimately driving measurable business results. Our platform seamlessly integrates media buying, optimization, measurement, and attribution into a single, efficient solution. Developed by industry veterans with deep expertise in programmatic advertising, digital media, and ad verification, our CTV performance platform offers advertisers a reliable avenue to expand their business.
As a Senior Data Engineer at tvScientific, you will play a crucial role in establishing the robust data infrastructure that supports our data-intensive operations. You will work in collaboration with cross-functional teams to refine our core data pipelines, ensuring efficient scaling as we grow, and optimizing data storage solutions. This individual contributor role requires you to define and execute a strategic vision for data engineering within the company.
Key Responsibilities:
Develop and implement a robust data infrastructure in AWS, utilizing Spark with Scala.
Enhance our core data pipelines to efficiently accommodate our significant growth.
Optimize data storage solutions in appropriate engines and formats.
Collaborate with cross-functional teams to design data solutions that align with business objectives.
Construct fault-tolerant batch and streaming data pipelines.
Leverage and optimize AWS resources while scaling design.
Work closely with Data Science and Product teams to achieve collective goals.
Success Metrics:
Successful establishment of scalable and efficient data infrastructure.
Timely delivery and optimization of data assets and APIs.
High attention to detail in the implementation of automated data quality checks.
Effective collaboration with cross-functional teams.
What We’re Looking For:
Proven experience in production data engineering.
Expertise in Spark and Scala, with a track record of building data infrastructure using these technologies.
Familiarity with data lakes, cloud warehouses, and various storage formats.
Strong proficiency in AWS services.
Advanced SQL skills for data manipulation and extraction.
Exceptional written and verbal communication abilities.
Bachelor's degree in Computer Science or a related discipline.
Nice to have: Experience with additional data processing frameworks.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco, CA
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with unparalleled speed and precision. Our cutting-edge AI-enabled platform transforms fragmented and isolated data into actionable operational intelligence, delivering crucial insights that enhance decision-making processes and improve outcomes across various scenarios. Currently, we proudly serve hundreds of clients across over 30 states and two countries, impacting more than 125 million lives, and we are poised for further growth as we expand into the enterprise sector and international markets.
Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses.
We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.
Role
We are in search of a Staff Data Infrastructure Engineer to join our dynamic team. In this role, you will take full ownership of the data layer that is foundational to all of Peregrine's operations. You will design and construct the systems responsible for ingesting, storing, and serving vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently.
This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and possesses the experience and judgment necessary to influence key infrastructure decisions. You will engage with a variety of intricate challenges, including:
Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robot that enhances everyday life in homes everywhere.
Located in the heart of San Francisco, our compact team comprises talented engineers, designers, and operators hailing from esteemed organizations such as Tesla, Cruise, OpenAI, Google, and Pixar. With a track record of delivering exceptional products to hundreds of millions of users, we understand the intricacies involved in crafting remarkable experiences.
Our intentionally lean structure fosters swift decision-making while eliminating unnecessary bureaucracy. Each team member operates as an individual contributor, endowed with substantial autonomy, ownership, and accountability. We thrive on a culture of rapid iteration and efficient execution, working collaboratively across the technology stack.
Position Overview
Join OpenEvidence as a Data Infrastructure Software Engineer, where you will engineer comprehensive systems that drive essential product and research operations. Your focus will be on optimizing performance, ensuring scalability, and enhancing accuracy, while enjoying the autonomy to manage the infrastructure that assists healthcare professionals in navigating complex clinical decisions in real-time.
We value exceptional creators who thrive in versatile roles. Our engineers engage across various products and projects, taking ownership wherever they can make the most significant impact.
About OpenEvidence
OpenEvidence is the leading medical AI platform globally, utilized by over 40% of clinicians in the U.S. in just over a year through organic product-led growth. As a $12 billion company, our engineering team comprises 30 talented individuals from MIT, Harvard, and Stanford. We believe that groundbreaking products are born from a small group of exceptional builders, driven by focused goals and empowered to take ownership and act swiftly. We are expanding our team to capitalize on an unparalleled opportunity to set the standard for medical AI platforms.
If you are a top-tier engineer or scientist eager to push the boundaries and achieve tangible outcomes that affect millions of lives, we want to connect with you.
Our Culture
We expect our work to be performed at an elite level. The journey from concept to execution and scaling is akin to a professional sport, where excellence is non-negotiable. We believe that the creation of innovative technologies is only achievable through complete ownership. Significant achievements happen when individuals take the initiative to see them through.
Your Profile
This role is not for those seeking a 9-to-5 job or merely looking to write papers. If you are ready to dive into the trenches, tackle challenges head-on, and create something from scratch that could impact millions and drive substantial revenue, you might be the perfect fit.
We seek brilliant builders who are intelligent, ambitious, resourceful, self-reliant, detail-oriented, driven, hardworking, and humble. Does this sound rare? It is, as we have only found 30 of them so far, and we are eager to discover more.
At Hover, we empower individuals to conceptualize, enhance, and safeguard the spaces they cherish. Utilizing proprietary AI and over a decade's worth of real property data, we provide answers to pivotal questions such as, 'What will it look like?' and 'What will it cost?' Our platform offers homeowners, contractors, and insurance professionals accurately measured, interactive 3D models of properties — all achievable from a smartphone scan in mere minutes.
Driven by curiosity and purpose, we maintain a strong commitment to our customers, communities, and one another. We believe that diverse perspectives foster the best ideas, and we take pride in nurturing an inclusive, high-performance culture that encourages growth, accountability, and excellence. Supported by premier investors like Google Ventures and Menlo Ventures, and trusted by industry leaders such as Travelers, State Farm, and Nationwide, we are revolutionizing how individuals perceive and interact with their environments.
About the Role
As a Senior Software Engineer specializing in Infrastructure, you will delve into cloud infrastructure challenges unique to a company focused on 3D data, computer vision, and machine learning. Your enthusiasm for building internal tools and your talent for crafting elegant solutions to complex issues will be crucial in this role.
Our Infrastructure team is responsible for everything beyond the application binary, serving as a critical partner to the rest of the engineering department. Through automation, we aim to streamline processes, ensuring that the simplest path is also the fastest and most secure. We manage and optimize all cloud infrastructure components including our Kubernetes environment, databases, networks, storage, and caching systems. Collaborating with engineering peers, we establish consistent solutions to common architectural challenges, particularly those involving rich geospatial and machine learning workloads. We are well-versed in best practices for cloud architecture and CI/CD, leveraging application development as a means to implement these practices.
Your Contributions
You will play a pivotal role in developing straightforward solutions to intriguing problems, thereby enhancing the foundation upon which our engineering teams build. Collaborating closely with engineers across the organization, you will help make their applications faster, easier to manage, and more reliable in production. Your work will span frontend, backend, computer vision, data, security, and machine learning teams to scale new ideas into production effectively. Given the small and highly collaborative nature of our team, you can expect a varied and impactful workload, which may include:
Designing scalable cloud architecture
Enhancing CI/CD pipelines and developer tooling
Full-time|$100.6K/yr - $148K/yr|On-site|San Francisco, CA; New York, NY; Seattle, WA; Phoenix, AZ
Join DoorDash's Go-To-Market Technology (GTMT) team as an Analytics Engineer, where you will harness data to fuel our rapid growth. You will create innovative solutions that streamline business processes and enhance the productivity of our GTM teams. In this role, you will dive deep into data analytics, build scalable tools, and transform insights into actionable strategies that drive business success. Collaborating with cross-functional teams, you will automate workflows, ensure data integrity, and leverage AI capabilities to elevate our data infrastructure.
Join Matter Intelligence as a Data and Machine Learning Infrastructure Engineer, where you will play a pivotal role in shaping the future of data-driven decision-making. You will be part of a dynamic team focused on building and optimizing infrastructure that supports innovative machine learning applications. Your expertise will help us enhance our data pipelines and ensure seamless integration of machine learning models into production.
Join our dynamic team at Bland Inc. as a Senior Infrastructure Engineer, where you will play a critical role in designing and implementing robust infrastructure solutions. You will work alongside a talented group of professionals, using cutting-edge technology to drive innovation and efficiency.
Full-time|Remote|San Francisco, CA or Remote (USA)
Join Fieldguide as a Senior Infrastructure Engineer and be at the forefront of our innovative infrastructure solutions. In this role, you will lead the design, implementation, and maintenance of our infrastructure systems while ensuring optimal performance, security, and scalability. Your expertise will help shape our technology strategy and drive impactful projects.
Full-time|$153K/yr - $376K/yr|Remote|San Francisco, CA • New York, NY • United States
At Figma, we are expanding our team of dedicated creatives and innovators committed to making design accessible for everyone. Our platform empowers teams to transform ideas into reality—whether you're brainstorming, prototyping, converting designs into code, or utilizing AI for enhancements. From concept to product, Figma enables teams to optimize workflows, accelerate processes, and collaborate in real-time from anywhere in the world. If you're passionate about shaping the future of design and teamwork, we invite you to join us!
The Data Platform team at Figma is responsible for constructing and managing the essential systems that drive analytics, AI/ML initiatives, and data-informed decision-making across our organization. We cater to a wide array of stakeholders, including AI researchers, machine learning engineers, data scientists, product engineers, and business teams that depend on data for insights and strategic planning. Our team is tasked with owning and scaling critical platforms such as the Snowflake data warehouse, ML Datalake, orchestration and pipeline infrastructure, and extensive data ingestion and processing systems, overseeing all data transactions that occur within these platforms.
Despite our small size, we tackle significant, high-impact challenges. In the upcoming years, we are focused on developing the data infrastructure layer for Figma's AI-driven products, enhancing cost and performance efficiencies across our data stack, scaling our ingestion and reverse ETL capabilities for new product applications, and reinforcing data quality, reliability, and compliance at every level. If you are enthusiastic about creating scalable, high-performance data platforms that empower teams across Figma, we would love to connect with you!
This is a full-time role that can be performed from one of our US hubs or remotely within the United States.
Full-time|$166K/yr - $225K/yr|On-site|San Francisco, California
While candidates in the listed locations are encouraged for this role, candidates in other locations will be considered.
At Databricks, we are dedicated to empowering data teams to tackle the world's most challenging problems—from realizing the next mode of transportation to advancing medical breakthroughs. We accomplish this by creating and managing the premier data and AI infrastructure platform, enabling customers to leverage deep data insights for business enhancement. Founded by engineers and with a strong customer focus, we eagerly embrace every opportunity to address technical challenges, from crafting cutting-edge UI/UX for data interaction to scaling our services and infrastructure across millions of virtual machines. And this is just the beginning.
As a Senior Software Engineer on the Infrastructure teams, you will develop scalable systems that underpin the Databricks platform, positioning it as the go-to solution for executing Big Data and AI workloads. Your role will involve enhancing the Databricks infrastructure platform, encompassing multi-cloud systems and services designed to manage thousands of Kubernetes clusters at scale, storing petabytes of data, providing highly scalable and distributed API gateways, implementing a rate limiting framework, ensuring network security and encryption, and creating developer tools and infrastructure (we utilize Bazel), testing frameworks, and scalable CI/CD systems, among many other responsibilities.
The impact you will have:
Expand and enhance key components of the core Databricks infrastructure.
Design multi-cloud systems and abstractions to enable the Databricks product to operate across existing Cloud providers.
Enhance software development workflows to improve engineering and operational efficiency.
Utilize our own data and AI platform to analyze build and test logs and metrics, identifying areas for enhancement.
Create automated build, test, and release infrastructures.
Establish and maintain engineering process standards to support our growth and success.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco
About Watney Robotics
At Watney Robotics, we are pioneers in developing autonomous robotic solutions aimed at enhancing critical infrastructure. Recently securing $21 million in seed funding from leading investors such as Conviction, Abstract, and A*, we are collaborating with the world’s largest hyperscalers to propel the expansion of data centers and streamline maintenance processes.
This is an extraordinary opportunity to join our team at a pivotal stage as we transition from prototype to large-scale production. Be part of a team that not only ships cutting-edge systems but also plays a crucial role in shaping the operational framework of an innovative robotics company.
Be part of our mission to redefine AI by shaping the narrative surrounding document understanding.
Role Overview
At LlamaIndex, our Infrastructure team lays the groundwork for our product and provides essential tools that facilitate the development, deployment, and monitoring of our code. We are tasked with designing, constructing, and scaling the core infrastructure that drives a high-capacity data platform for AI applications. We seek individuals who are passionate about creating supportive systems that enhance our engineering capabilities and contribute to our rapidly expanding product suite.
Ideal candidates will have a strong background in cloud infrastructure management, navigating various scalability challenges, and enhancing the productivity of the broader Engineering team. Key traits we value in our culture include a customer-centric mindset, collaboration, diligence, and optimism. We are looking for proactive team players who are eager to help us evolve our culture as we grow.
Key Responsibilities
Collaborate with engineering teams to develop and maintain foundational systems that empower developers and support our rapid growth.
Design and execute scalable infrastructure solutions suitable for various deployment models, including SaaS, single-tenant, and private environments.
Oversee and optimize cloud resources and Kubernetes clusters to ensure cost-effectiveness and high performance.
Facilitate successful external customer deployments by establishing clear infrastructure guidelines and principles.
Enhance the release and deployment processes to improve efficiency and reliability.
Ensure compliance with applicable regulations and implement comprehensive security measures across all deployment environments.
Qualifications
Minimum of 5 years of engineering experience.
Experience working on Platform or Infrastructure teams on substantial projects involving infrastructure components like Terraform/CDKTF, Kubernetes, Helm, testing infrastructure, release management, and observability.
Proficient in optimizing cloud resource utilization.
Skilled in tuning Kubernetes clusters and cloud resources for optimal performance and cost efficiency.
Dedicated to cultivating LlamaIndex’s engineering culture as we expand.
Ability to balance speed and pragmatism in delivering solutions.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our vision is to enhance human potential by advancing collaborative general intelligence. We are dedicated to creating a future where individuals have the resources and knowledge to harness AI for their specific objectives and aspirations.
Our team comprises scientists, engineers, and innovators who have developed some of the most popular AI products, including ChatGPT and Character.ai, as well as influential open-weight models like Mistral, along with highly regarded open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
We are seeking a talented engineer to enhance our data infrastructure. You will become part of a dynamic, high-impact team tasked with designing and scaling the foundational infrastructure for distributed training pipelines, multimodal data catalogs, and sophisticated processing systems that manage petabytes of data.
Our infrastructure is pivotal; it serves as the foundation for every groundbreaking achievement. You will collaborate directly with researchers to expedite experiments, develop novel datasets, optimize infrastructure efficiency, and derive essential insights from our data repositories.
If you are passionate about distributed systems, large-scale data mining, and open-source tools such as Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building innovative solutions from scratch, we encourage you to apply.
Note: This is an evergreen role that we keep open continuously for expressions of interest. We receive a high volume of applications, and while there may not always be an immediate position that aligns perfectly with your skills and experience, we encourage you to apply. We regularly review applications and reach out as new opportunities arise. You are welcome to reapply after gaining more experience, but please refrain from applying more than once every six months. We may also post for specific roles for particular projects or team needs, and in those cases, you are welcome to apply directly in addition to this evergreen role.
About Our Team
At OpenAI, our Data Platform team is at the heart of our innovative approaches to data management, powering essential product, research, and analytics workflows. We manage some of the largest Spark compute fleets in production, architect data lakes and metadata systems on Iceberg and Delta, and envision exabyte-scale architectures. Our high-throughput streaming platforms utilize Kafka and Flink, while our orchestration is powered by Airflow. We also support machine learning feature engineering tools such as Chronon. Our mission is to provide secure, reliable, and efficient data access at scale, thereby enhancing intelligent, AI-assisted data workflows.
Join us in building and maintaining these core platforms that are foundational to OpenAI's products, research, and analytics capabilities. We are not just scaling infrastructure; we are transforming the way people engage with data. Our vision includes intelligent interfaces and AI-powered workflows that make data interactions faster, more reliable, and intuitive.
About the Position
In this role, you will focus on constructing and managing data infrastructure that supports extensive compute fleets and storage systems optimized for high performance and scalability. You will be instrumental in designing, developing, and operating the next generation of data infrastructure at OpenAI. Your responsibilities will encompass scaling and securing big data compute and storage platforms, building and maintaining high-throughput streaming systems, ensuring low-latency data ingestion, and facilitating secure, governed data access for machine learning and analytics. You will also prioritize reliability and performance at extreme scales.
You will have complete ownership of the full lifecycle: from architecture to implementation, production operations, and on-call responsibilities.
You should be experienced with platforms such as Spark, Kafka, Flink, Airflow, Trino, or Iceberg. Familiarity with infrastructure tools like Terraform, along with expertise in debugging large-scale distributed systems, is essential. A passion for addressing data infrastructure challenges in the AI domain is a must.
This role is based in San Francisco, CA. We offer a hybrid work model requiring 3 days in the office each week and provide relocation assistance for new hires.
Responsibilities:
Design, build, and maintain data infrastructure systems including distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, ensuring they are scalable, reliable, and secure.
Ensure our data platform can scale significantly while maintaining reliability and efficiency.
Enhance company productivity by empowering your fellow engineers and teammates through innovative data solutions.
Foxglove develops data infrastructure for robotics teams operating in real-world environments such as factories and warehouses. As robots leave the lab, engineers need reliable tools for analyzing data, diagnosing issues, and improving system performance. Foxglove delivers observability, visualization, and data management solutions designed to help teams manage large volumes of multimodal sensor data from deployed fleets.
Role overview
This Software Engineer - Robotics Data Infrastructure position centers on building and optimizing the systems behind Foxglove’s products. The scope covers desktop and web visualization tools, backend services for data ingestion and streaming, and client libraries running directly on robots. Work ranges from enhancing decoding performance in Rust, to extending MCAP tooling in C++, integrating new data sources with TypeScript, and occasionally working with customers to resolve performance issues.
What you will do
Design, build, and deploy product features from start to finish, incorporating feedback from users.
Work across the stack: from Rust and C++ libraries on devices, to backend cloud services, to browser-based visualization tools.
Identify and address performance bottlenecks in data pipelines, including ingestion, decoding, streaming, and rendering.
Contribute to MCAP and other open-source libraries used by the robotics community.
Collaborate with customers and robotics engineers to gather requirements and validate new solutions.
Maintain high engineering standards and help foster a culture of ownership within the team.
Design systems for efficient storage and querying of petabyte-scale robotics data.
Requirements
At least 5 years of experience developing production software.
Strong proficiency in Rust, C++, and TypeScript, with a willingness to learn new languages or frameworks as needed.
Location
This position is based in San Francisco, CA.