Technical Staff Member Distributed Systems jobs in San Francisco – Browse 2,031 openings on RoboApply Jobs

Technical Staff Member Distributed Systems jobs in San Francisco

Open roles matching "Technical Staff Member Distributed Systems" in or near San Francisco. 2,031 active listings on RoboApply Jobs.

2,031 jobs found

1 - 20 of 2,031 Jobs
Apply
Gimlet Labs logo
Full-time|On-site|San Francisco

At Gimlet Labs, we are pioneering the first heterogeneous neocloud tailored for AI workloads. As AI technology evolves, the industry confronts critical limitations in power, capacity, and cost linked to the traditional homogeneous, vertically integrated infrastructure. Gimlet addresses these challenges by decoupling AI workloads from the fundamental hardware…

Mar 10, 2026
Apply
Liquid AI logo
Full-time|On-site|San Francisco

About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.

The Opportunity
The Training Infrastructure team is at the forefront of building the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to innovate, implement, and enhance the infrastructure crucial for large-scale training.

This role is centered on high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up rather than working with pre-existing infrastructure. While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We are seeking an individual who:
- Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance.
- Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
- Excels in uncertain environments: Our systems are designed to support evolving model architectures. You will make decisions based on incomplete information and iterate rapidly.
- Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.

The Work
- Design and develop core systems that keep large training runs fast and reliable.
- Create scalable distributed training infrastructure for GPU clusters.
- Implement and refine parallelism and sharding strategies for evolving architectures.
- Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
- Develop data loading systems to eliminate I/O bottlenecks for multimodal datasets.

Jul 29, 2025
Apply
TierZero logo
Full-time|Hybrid|SF HQ

TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors including Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work.

This role is based at TierZero's San Francisco headquarters, with a hybrid schedule requiring three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What you will do
- Design and develop AI systems that handle large volumes of unstructured data.
- Build full-stack product features, informed by direct feedback from users.
- Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
- Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
- Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
- Experiment with open-source and emerging large language models to compare different approaches.
- Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
- Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
- At least 5 years of professional experience or significant open-source contributions.
- Comfort with shifting priorities and tackling new technical problems.
- Strong product focus and commitment to customer outcomes.
- Openness to learning from a team with a track record of delivering over $10 billion in value.
- Ability to work onsite in San Francisco three days per week.
- Bonus: Experience in a startup setting and familiarity with startup dynamics.

Apr 24, 2026
Apply
Catalog logo
Full-time|On-site|San Francisco

At Catalog, we are pioneering the commerce infrastructure for AI: the essential framework that enables digital agents not only to explore the web but also to comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
- You have experience creating beloved and trusted products from the ground up.
- You combine technical proficiency with a keen product sense and data-driven intuition.
- You are well-versed in AI technologies.
- You prioritize speed, write clean code, and ensure thorough instrumentation.
- You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
- Develop and deploy agentic-search APIs that deliver structured, real-time product data in milliseconds.
- Build checkout systems enabling agents to conduct transactions with any merchant.
- Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
- Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
- Proven experience shipping data-centric products in a live environment.
- Experience with recommendation systems or information retrieval methodologies.
- Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate as a small, high-trust, highly motivated team, fostering in-person collaboration in North Beach, San Francisco. Our process involves debate, decision-making, and execution.

If your profile aligns with our needs, we will contact you to arrange two to three brief technical interviews, followed by an onsite visit to our office where you will collaborate on a small project, exchange ideas, and meet the team.

Oct 15, 2025
Apply
Adyen logo
Full-time|On-site|San Francisco

Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.

Mar 6, 2026
Apply
TierZero logo
Full-time|Hybrid|SF HQ

About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.

Role Overview: Founding Member of Technical Staff
This role is based at TierZero's San Francisco headquarters, with three days a week in the office. As a founding member, you will collaborate directly with the CEO, CTO, and early customers to shape the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.

What You Will Do
- Design and build intelligent AI systems to analyze large volumes of unstructured data.
- Deliver full-stack features based on real user feedback.
- Improve the product experience so AI agents are both reliable and easy for engineers to use.
- Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
- Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
- Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
- Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.

What We Look For
- Over five years of relevant professional or open-source experience.
- Comfort working in environments with uncertainty and evolving challenges.
- Strong product focus and a drive for customer satisfaction.
- Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- Previous startup experience is a plus.

Location
This position is based in San Francisco. Expect to work on-site three days per week at TierZero's HQ.

Apr 15, 2026
Apply
TierZero logo
Full-time|Hybrid|SF HQ

Join Us If You:
- Are eager to learn from a group of experienced engineers who have delivered over $10 billion in value.
- Prefer to work in our San Francisco office three days a week.
- Excel at navigating uncertainty.
- Possess a product-oriented mindset with a strong emphasis on customer satisfaction.
- Are passionate about working with large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- Bring at least five years of professional or open-source experience.
- Bonus: have previous experience in a startup environment and understand the dynamics involved.

About TierZero
At TierZero, we are redefining how engineering teams leverage AI to enhance the speed and efficiency of code deployment. While AI accelerates the development cycle, productionizing code remains a challenge. Our platform empowers agile engineering teams to manage code in production effectively, ensuring quicker incident response times, comprehensive operational visibility, and shared knowledge among all team members.

Backed by $7 million in funding from leading investors like Accel and SV Angel, TierZero is trusted by industry leaders such as Discord, Drata, and Framer to operate their high-scale systems and create the foundational layer for AI-driven engineering teams.

The Role
As a founding member of our team, you will play a crucial role in conceptualizing and developing our core product and systems from the ground up. Collaborating closely with the CEO, CTO, and our customers, you will be engaged in a variety of dynamic projects, including:
- Designing and implementing intelligent AI systems capable of analyzing extensive unstructured data.
- Delivering full-stack features informed by direct user feedback.
- Enhancing the product experience so agents are not only intelligent but also user-friendly and reliable for engineers.
- Creating systems that autonomously assess LLM outputs, improving agent reasoning through iterative self-play and feedback mechanisms.
- Developing machine learning pipelines encompassing data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search infrastructure, and graph databases.
- Investigating and prototyping with open-source and cutting-edge LLMs to assess their capabilities and trade-offs.
- Establishing scalable infrastructure to support long-running, multi-step agents, addressing memory management, state handling, and asynchronous workflows.

Apr 30, 2026
Apply
TierZero logo
Full-time|On-site|SF HQ

TierZero is looking for a Founding Member of Technical Staff to help shape the direction of its technology from the ground up. This role is based at the company's San Francisco headquarters.

Role overview
As an early technical hire, you will work closely with engineers and product managers to build new products and features. The work centers on designing, coding, and delivering software solutions that address client needs and support TierZero's growth.

Impact
Contributions in this role will directly influence the company's future. The team values initiative and hands-on problem solving, giving each member a chance to make a visible difference in how the company evolves.

Collaboration
This position involves regular collaboration with a small, focused team. Input and ideas from every member help guide product direction and technical decisions.

Apr 29, 2026
Apply
Gimlet Labs logo
Full-time|On-site|San Francisco

At Gimlet Labs, we are pioneering the first heterogeneous neocloud designed specifically for AI workloads. As demand for AI systems surges, traditional homogeneous infrastructures face critical limits in power, capacity, and cost. Our platform decouples AI workloads from their hardware foundations, intelligently partitioning tasks and orchestrating them onto the most suitable hardware for optimal performance and efficiency. This approach yields heterogeneous systems that span multiple vendors and generations, including cutting-edge accelerators, enabling significant gains in performance and cost-effectiveness at scale.

In addition to this foundational work, Gimlet is building a robust neocloud for agentic workloads. Clients deploy and manage their workloads via stable, production-ready APIs, without needing to navigate the intricacies of hardware selection or performance optimization. We collaborate with foundation labs, hyperscalers, and AI-native companies to run real production workloads capable of scaling to gigawatt-class AI datacenters.

We are currently seeking a Member of Technical Staff specializing in ML systems and inference. In this role, you will design and build inference systems that run complete models in real production environments. You will operate at the intersection of model architecture and system performance to ensure that inference is fast, predictable, and scalable.

This position is ideal for engineers with a deep understanding of modern model execution and a passion for optimizing latency, throughput, and memory utilization across the entire inference lifecycle.

Mar 10, 2026
Apply
TierZero logo
Full-time|On-site|SF HQ

TierZero seeks a Founding Member of Technical Staff to play a key role in building the company's technology from the earliest stages. This position is based at the San Francisco headquarters and offers the chance to collaborate directly with founders and engineers.

Role overview
As an early team member, you will help design and develop new products and systems. The work involves close collaboration with others in the office, shaping both the technical direction and the culture of the engineering team.

What you will do
- Develop core technology in partnership with founders and engineers
- Contribute ideas and code that guide the evolution of TierZero's products
- Help define engineering standards and establish best practices

Location
This position is based onsite at the San Francisco HQ.

Apr 27, 2026
Apply
Magic.dev logo
Full-time|On-site|San Francisco

At Magic, we are driven by our mission to develop safe artificial general intelligence (AGI) that propels humanity forward on its most critical challenges. We believe the future of safe AGI lies in automating research and code generation, allowing us to improve models and tackle alignment issues more effectively than humans alone could. Our approach combines cutting-edge pre-training, domain-specific reinforcement learning (RL), ultra-long context, and efficient inference-time computation to realize this vision.

Position Overview
As a Software Engineer on the Inference & RL Systems team, you will design and manage the distributed systems that enable our models to run in production and support extensive post-training workflows. This position operates at the intersection of model execution and distributed infrastructure, focusing on systems that influence inference latency, throughput, stability, and the reliability of RL and post-training loops.

Our long-context models impose significant execution demands, including KV-cache scaling, managing memory constraints for lengthy sequences, batching strategies, long-horizon trajectory rollouts, and maintaining consistent throughput under real-world workloads. You will be responsible for the infrastructure that keeps both production inference and large-scale RL iterations efficient and dependable.

Key Responsibilities
- Build and scale high-performance inference serving systems.
- Optimize KV-cache management, batching methods, and scheduling.
- Improve throughput and latency for long-context tasks.
- Develop and maintain distributed RL and post-training infrastructure.
- Boost reliability across rollout, evaluation, and reward pipelines.
- Automate fault detection and recovery for serving and RL systems.
- Analyze and eliminate performance bottlenecks across GPU, networking, and storage components.
- Collaborate with the Kernel and Research teams to keep execution systems aligned with model architecture.

Qualifications
- Solid foundation in software engineering and distributed systems.
- Proven experience building or operating large-scale inference or training systems.
- In-depth understanding of GPU execution constraints and memory trade-offs.
- Experience troubleshooting performance issues in production machine learning systems.
- Ability to analyze system-level trade-offs between latency, throughput, and cost.

Feb 28, 2026
Apply
Listen Labs logo
Full-time|On-site|San Francisco, CA

Overview: Driven by increasing market demand and a robust six-month product roadmap, Listen Labs is expanding its engineering team. We seek a technically adept individual (our team includes three IOI medalists) who is eager to contribute to a product that is revolutionizing corporate decision-making. If you are passionate about solving intricate problems from start to finish, we invite you to connect with us.

About Listen Labs
Listen Labs is an AI-driven research platform that empowers teams to extract insights from customer interviews in hours rather than months. Our technology enables clients to analyze conversations, identify recurring themes, and expedite informed product decisions.

Company Highlights:
- Exceptional Team: Composed of seasoned entrepreneurs (with prior AI exits), co-founders, and experts from leading firms such as Jane Street, Twitter, Stripe, Affirm, Bain, Goldman Sachs, and more.
- Rapid Growth: We are a dynamic team of 40, backed by Sequoia, growing from $0 to a $14 million run-rate in less than a year. We prioritize speed, craftsmanship, and collaboration with individuals who embrace ownership.
- Impressive Traction: Rapid growth across sectors, with enterprise clients such as Google, Microsoft, Nestlé, and P&G.
- Outstanding Performance: An industry-leading win rate driven by our uniquely differentiated product.
- Market Validation: We consistently attract customers across every segment, often landing six-figure deals that lead to quick expansions.
- Viral Product: Our interviews are shared with tens of thousands of viewers, driving product-led growth, organic expansion, and daily inquiries from Fortune 500 companies.

Technical Challenges:
- Research Agent Development: Unlike buying traditional software, hiring McKinsey means gaining insights and execution expertise. We are building Listen Labs with that mindset: an AI agent that understands our platform and best research practices, assisting users in project setup, interview execution, and response analysis.
- Human Database Creation: A core value proposition is our capability to connect users with specific demographics. We are developing a database of millions of individuals, continually enhancing our understanding of user needs as they engage with Listen Labs.

Feb 25, 2026
Apply
Reka logo
Full-time|Remote|US, UK, Remote

As a Technical Staff Member specializing in Machine Learning, you will:
- Engage in the complete development lifecycle of innovative large-scale deep learning models.
- Curate datasets, architect solutions, implement algorithms, and train and evaluate models to enhance our offerings.
- Work collaboratively with engineers and researchers to convert groundbreaking research into real-world applications.
- Join us at a pivotal time, take on diverse roles, and contribute to building transformative products from the ground up.

Aug 1, 2023
Apply
Composio logo
Full-time|On-site|San Francisco

At Composio, we are developing advanced infrastructure that enables agents to interact seamlessly with essential work tools such as GitHub, Gmail, Notion, Salesforce, and more. Our team of engineers is committed to tackling challenges ranging from contextual understanding to search functionality, providing an exceptional bridge between your agents and their tools.

Having secured a $25M Series A from Lightspeed, alongside prominent angel investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram, we have experienced remarkable growth, tripling our ARR at the start of this year. Our clientele ranges from Y Combinator cohorts to Wabi, Glean, Zoom, and beyond.

Your Role
- Enhance the experience of teams using our platform by refining our core APIs and SDK.
- Create intuitive interfaces for both frontend and SDK applications.
- Take ownership of product development from concept through production.
- Collaborate closely with customers to cultivate their loyalty while improving the product.
- Craft clear and concise documentation.

Feb 10, 2026
Apply
Databricks logo
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

P-186

At Databricks, we are passionate about empowering data teams to tackle some of the world's most challenging problems, from security threat detection to cancer drug development. Our mission is to build and operate the leading data and AI infrastructure platform, enabling our customers to concentrate on the high-value challenges integral to their own objectives.

Founded in 2013 by the original creators of Apache Spark™, Databricks has rapidly grown from a small office in Berkeley, California, to a global company with over 1,000 employees. Trusted by thousands of organizations, from startups to Fortune 100 companies, we are recognized as one of the fastest-growing SaaS companies worldwide.

Our engineering teams create highly sophisticated products that address significant industry needs, continuously pushing the limits of data and AI technology while maintaining the resilience, security, and scalability essential to our customers' success. We manage one of the largest-scale software platforms, consisting of millions of virtual machines that generate terabytes of logs and process exabytes of data daily. At this scale we frequently encounter cloud hardware, network, and operating system faults, and our software must effectively shield customers from these failures. Modern data analysis leverages advanced techniques, such as machine learning, that far exceed the capabilities of traditional SQL query engines.

As a Software Engineer on the Runtime team at Databricks, you will develop the next generation of distributed data storage and processing systems, which outperform specialized SQL query engines in relational query performance while providing the flexibility and programming abstractions to support a variety of workloads, from ETL to data science.

Examples of projects you may work on include:
- Apache Spark™: Contributing to the de facto open-source framework for big data.
- Data Plane Storage: Developing reliable, high-performance services and client libraries for storing and accessing vast amounts of data on cloud storage backends such as AWS S3 and Azure Blob Store.
- Delta Lake: A storage management system that merges the scalability and cost-effectiveness of data lakes with the performance and reliability of data warehouses, featuring low-latency streaming. Its higher-level abstractions and guarantees, including ACID transactions and time travel, significantly reduce the complexity of real-world data engineering architectures.
- Delta Pipelines: Aiming to simplify the management of data engineering pipelines.

Jan 30, 2026
Apply
Listen Labs logo
Full-time|On-site|San Francisco, CA

Overview: Join Listen Labs as we respond to a surge in market demand with an ambitious six-month product roadmap. We are expanding our engineering team and looking for a highly skilled technical expert (our current team includes three IOI medalists) who is eager to build a transformative product that reshapes decision-making for businesses. If you have a passion for solving intricate problems from start to finish, we want to connect with you.

About Listen Labs
Listen Labs is an AI-driven research platform designed to help teams extract insights from customer interviews in hours rather than months. We enable our clients to analyze conversations, identify key themes, and make faster, better-informed product decisions.

Why Work with Us?
- Exceptional Team: Founded by seasoned entrepreneurs with a successful AI exit, along with talent from renowned companies such as Jane Street, Twitter, Stripe, Affirm, Bain, and Goldman Sachs, including IOI and ICPC backgrounds.
- Rapid Growth: A 40-person team backed by Sequoia Capital, scaling from $0 to a $14 million run-rate in less than a year. We prioritize craftsmanship and thrive on collaboration with individuals who take ownership.
- Impressive Traction: Rapid growth across sectors, with enterprise clients such as Google, Microsoft, Nestlé, and Procter & Gamble.
- Proven Performance: An industry-leading win rate driven by our uniquely differentiated product.
- Market Validation: We consistently attract customers from diverse segments, achieving six-figure contracts that facilitate quick expansions.
- Viral Product: Our interviews reach tens of thousands of viewers, promoting product-led growth, organic expansion, and daily interest from Fortune 500 companies.

Technical Challenges Await:
- Research Agent Development: Unlike buying traditional software, hiring McKinsey brings opinions, expertise, and execution. We aim to give users an AI agent with complete knowledge of our platform and best research practices, assisting them in project setup, interview conduction, and response analysis.
- Human Database Creation: One of our core offerings is the ability to identify target users effectively (e.g., "power users of ChatGPT and Excel"). We are building a comprehensive database that connects users with the insights they need.

Feb 25, 2026
Apply
Mirendil logo
Full-time|Remote|San Francisco

Join the team at Mirendil as a Member of Technical Staff specializing in Machine Learning Systems. In this role, you will leverage your expertise to develop innovative solutions that enhance our ML frameworks and contribute to groundbreaking projects in the AI space. Collaborate with top talent in a dynamic environment that promotes creativity and technical excellence.

Apr 2, 2026
Apply
TierZero logo
Full-time|Hybrid|SF HQ

Are you ready to take a leap into innovation? Join us if you:
- Want to work alongside expert engineers who have collectively contributed over $10 billion in value.
- Can be in our San Francisco office three days a week, collaborating closely with your peers.
- Flourish in a dynamic environment where adaptability is key.
- Bring a product-driven, customer-centric mindset.
- Want to engage with cutting-edge technologies including LLMs, the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
- Have over five years of professional experience or open-source contributions.
- Bonus: have previously thrived in a startup environment.

About TierZero
At TierZero, we are transforming software engineering with AI. Our mission is to increase the speed at which engineering teams build and deploy code, removing the bottlenecks that slow down production. With $7 million raised from investors like Accel and SV Angel, our solutions are trusted by leading companies such as Discord, Drata, and Framer to optimize their high-scale systems and infrastructure for the AI-driven future.

Your Role
As a founding member, you will play a pivotal role in creating and developing our core products and systems. Collaborating closely with our CEO, CTO, and our customers, you will work on a variety of tasks, including:
- Designing and implementing intelligent AI systems capable of reasoning over vast amounts of unstructured data.
- Deploying full-stack features based on direct user feedback.
- Enhancing the product experience so our AI agents are not only intelligent but also reliable and user-friendly for engineers.
- Building systems that automatically assess LLM outputs, improving reasoning through self-play and feedback loops.
- Developing machine learning pipelines for data ingestion, feature generation, embedding storage, RAG pipelines, vector search infrastructure, and graph databases.
- Experimenting with open-source and frontier LLMs to assess trade-offs.
- Creating scalable infrastructure to support long-running, multi-step agents, including memory, state management, and asynchronous workflows.

May 1, 2026
Apply
Ambience Healthcare logo
Full-time|$250K/yr - $300K/yr|Hybrid|San Francisco

About Us
At Ambience Healthcare, we are not just another scribe; we are building an AI intelligence platform that brings humanity back into healthcare while delivering significant ROI for health systems nationwide. Our technology empowers providers to concentrate on delivering exceptional care by alleviating the administrative burdens that distract them from their patients. Ambience offers real-time, coding-aware documentation and clinical workflow support across care settings at leading health systems in North America.

Our teams operate with unwavering dedication and extreme ownership to develop optimal solutions for our healthcare partners. We value transparency, positivity, and deep contemplation, holding each other to high standards because the challenges we tackle are of utmost importance.

Ambience was recognized as the leader in enhancing clinician experience by KLAS Research in their Emerging Solutions Top 20 Report, honored by Fast Company as one of the Next Big Things in Tech, acknowledged by Inc. as one of the best AI companies in healthcare, and selected as a LinkedIn Top Startup in 2024 and 2025. We are backed by Oak HC/FT, Andreessen Horowitz (a16z), the OpenAI Startup Fund, and Kleiner Perkins, and we are just beginning our journey.

The Role
Ambience processes millions of patient encounters across the largest health systems in the country. These organizations rely on us for real-time clinical workflows where latency and reliability directly affect patient care: a delay during a patient visit is not merely a bad metric; it can lead a physician to abandon the tool.

In this position, you will own the core systems that enable Ambience to scale reliably: database architecture, caching, multi-tenancy, and performance optimization that shapes the clinician experience. You will design database architectures that accommodate our growth, build caching systems that keep EHR API latency out of critical paths, and develop multi-tenant infrastructure that protects customer data while enhancing performance. Your ultimate goal is to create infrastructure that other teams rely on effortlessly.

Our engineering roles are hybrid, with three days a week in our San Francisco office.

Feb 2, 2026
Apply
Liquid AI logo
Full-time|On-site|San Francisco

About Liquid AI
Originating from the prestigious MIT CSAIL, Liquid AI crafts cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey.

The Opportunity
Join us at the exciting crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. This unique position allows you to balance impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and utilized by developers worldwide. If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI.

What We're Looking For
We seek a proactive individual who:
Takes ownership: Manages open-source partnerships from initial outreach to ongoing collaboration.
Thinks community-first: Integrates documentation, tutorials, integrations, and support into a seamless developer experience.
Is pragmatic: Focuses on developer adoption and partner success rather than superficial metrics.
Communicates clearly: Bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships.

The Work
Serve as the primary liaison for open-source partners.
Assist in model releases with both marketing and technical content.
Create tutorials, articles, and guides on training and utilizing our foundation models.
Enhance and maintain LFM documentation for clarity and thoroughness.
Collect community feedback and communicate insights to internal teams.

Feb 4, 2026
