Trust Safety Program Engineer jobs in San Francisco – Browse 5,547 openings on RoboApply Jobs

Trust Safety Program Engineer jobs in San Francisco

Open roles matching “Trust Safety Program Engineer” located in or near San Francisco. 5,547 active listings on RoboApply Jobs.


1 - 20 of 5,547 Jobs
Tools for Humanity
Full-time|$217K/yr - $260K/yr|On-site|San Francisco

About the Role: As a Trust & Safety Program Engineer at Tools for Humanity, you will play a crucial role in ensuring the safety and integrity of our platform. You will work collaboratively with cross-functional teams to develop and implement strategies that protect our users and foster a secure environment. Your expertise will contribute to the evolution of …

Apr 10, 2026
suno
Full-time|On-site|San Francisco

Join suno as an Engineering Manager in our Trust & Safety team, where you will lead the development and implementation of innovative solutions to enhance user safety and trust on our platform. You will work closely with cross-functional teams to ensure the integrity of our systems and the protection of our users. Your leadership will be vital in driving engineering excellence and fostering a culture of safety and accountability.

Mar 7, 2026
Quizlet Inc.
Full-time|On-site|San Francisco, CA

Join Quizlet as a Senior Software Engineer specializing in Trust & Safety, where you will play a crucial role in enhancing the security and integrity of our platform. You will collaborate with cross-functional teams to develop robust software solutions that protect our community and ensure a safe learning environment.

Mar 25, 2026
OpenAI
Full-time|Hybrid|San Francisco

Join Our Dynamic Team
At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our innovative products, users, and the organization from various threats, including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience. Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to address the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities
We are looking for seasoned analysts with expertise in one or more of the following domains:
- Content Integrity & Scaled Enforcement – Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends on a large scale.
- Emerging Risk Operations – Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.
In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments. Our work culture embraces a hybrid model of three days in the San Francisco office each week, and we provide relocation assistance for new hires. Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:
- Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).
- Conduct thorough risk assessments and investigations utilizing internal tools, product signals, and external data sources.
- Act as the incident manager for escalated cases necessitating intricate policy, legal, or regulatory analysis.
- Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.
- Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.

Aug 14, 2025
Lyft, Inc.
Full-time|On-site|San Francisco, CA

Join Lyft as a Manager of Trust & Safety Policy, where you will play a crucial role in shaping and implementing policies that ensure the safety and trust of our community. Your leadership will guide strategic initiatives, engage with stakeholders, and drive data-informed decisions to foster a secure environment for our riders and drivers.

Mar 23, 2026
Faire
Full-time|On-site|San Francisco, CA

As the Trust and Safety Strategy Lead at Faire, you will play a pivotal role in shaping our approach to ensuring the security and trust within our marketplace. You will be responsible for developing and executing strategic initiatives that promote a safe environment for our users, driving policy development, and collaborating with various teams to implement safety measures and risk management protocols.

This position is ideal for a strategic thinker with a strong background in trust and safety who thrives in a fast-paced, innovative environment.

Apr 8, 2026
Chime
Full-time|On-site|San Francisco, CA, USA

Chime is hiring a Product Manager focused on Trust & Safety in San Francisco. This role centers on protecting the platform and its users by driving initiatives that strengthen safety and reduce fraud.

Role overview
The Product Manager will work with teams across the company to design and launch strategies that address user safety concerns. Efforts will target the identification and prevention of fraudulent activities, ensuring that Chime remains a secure place for members.

Key responsibilities
- Develop and implement product strategies to enhance trust and safety
- Collaborate with engineering, operations, and other teams to address risks and improve user security
- Shape product direction with a focus on maintaining a trustworthy platform

Impact
Your work will directly influence how Chime protects its community, helping to build a safer experience for all users.

Apr 29, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators. The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team guarantees that these experiences are safe, compliant, of high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role
We are seeking a seasoned operator to help expand and enhance the Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale. You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires an individual who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities:
- Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution.
- Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks.
- Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation.
- Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.
- Leverage data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals for enhancements.

Feb 24, 2026
Discord Inc.
Full-time|$248K/yr - $279K/yr|On-site|San Francisco Bay Area

Discord, a platform frequented by over 200 million users monthly, thrives on its vibrant gaming community, where more than 90% of users engage in gaming activities. With a staggering 1.5 billion hours spent playing diverse titles each month, Discord is pivotal in shaping the gaming landscape. Our mission is to enhance social interactions for gamers before, during, and after gameplay.

We are seeking an outstanding Trust & Safety Counsel to join our dynamic legal team. This influential role offers the opportunity to contribute significantly at one of the most exciting companies in the tech industry! As our second Trust & Safety Counsel, you will be integral in supporting our Trust & Safety organization, addressing law enforcement data requests, identifying and removing harmful content and actors, and ensuring compliance with international laws and regulations.

Mar 17, 2026
Lyft, Inc.
Full-time|On-site|San Francisco, CA

Role overview
The Senior Manager, Trust & Safety Policy at Lyft leads the team that shapes and updates policies to protect riders and drivers. This position ensures Lyft’s standards align with legal requirements and promote a secure experience on the platform. The role involves both policy development and hands-on implementation.

Key responsibilities
- Guide a team dedicated to creating and carrying out trust and safety policies
- Draft and update policies that keep users safe while meeting legal and regulatory standards
- Collaborate with colleagues from multiple departments to design solutions that work in practice
- Share policy changes and decisions clearly throughout the company

What Lyft looks for
- Ability to think strategically and solve complex problems
- Strong communication skills
- Experience working with teams across different functions
- Background in trust and safety, policy, or a related area is helpful

Location
San Francisco, CA

Apr 27, 2026
Airbnb, Inc.
Full-time|$248K/yr - $310K/yr|Remote|Remote - US

Airbnb started in 2007 when two hosts welcomed three guests into their San Francisco home. Since then, the platform has grown to over 5 million hosts and more than 2 billion guests worldwide. Hosts offer unique stays and experiences that connect travelers with local communities.

Trust Engineering at Airbnb
Trust sits at the heart of Airbnb’s platform. The Trust Engineering team builds technology to keep the community safe and uphold high standards for hosts, guests, homes, and experiences. Their work addresses both online risks, such as account compromise, fake listings, and financial loss, and offline concerns like theft, property damage, and personal safety. The team’s responsibilities include user onboarding, screening, identity, and reputation systems. Trust Engineering leads the technical vision for these systems and integrates them throughout Airbnb’s platform.

Role overview
The Senior Staff Software Engineer, Trust, is a senior individual contributor role. This engineer partners with technical leaders across Airbnb to shape, plan, and deliver a broad roadmap of Trust engineering projects. The position involves extensive collaboration with teams throughout the company. While highly senior, this is still a hands-on engineering role: every Airbnb software engineer, regardless of level, contributes code and development work.

What you will do
- Define and drive the long-term vision and strategy for the Trust Platform, setting architectural direction for core systems that support scalable, high-quality fraud detection, safety, and trust decisions across Airbnb.
- Work deeply within Trust Platform components, developing system and performance tools and identifying ways to improve technical quality, operational excellence, and developer experience.
- Promote an AI-first engineering approach, using LLM-powered agents to generate and refine code so you can focus on problem-solving, system design, and quality oversight.

Location
This position is remote and based in the United States.

Apr 21, 2026
Databricks
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California

Join Databricks, where we are dedicated to creating the most advanced and secure platform for data and AI. Our commitment to innovation drives us to develop cutting-edge solutions in security, compliance, and governance.

As a vital member of the Trust and Safety Data Science team, you will engage in projects that are essential for maintaining the security and regulatory compliance of the Databricks Platform. Our clients rely on Databricks to safeguard their data while managing millions of virtual machines across three clouds in numerous regions worldwide. Our engineering teams design highly sophisticated products that address significant real-world challenges. We continuously strive to push the limits of data and AI technology, all while ensuring the security and scalability that are crucial for our customers' success on our platform. We cater to a diverse array of companies with different security and compliance needs. Understanding how our customers utilize our existing features is imperative, involving comprehensive, data-driven analysis of all facets of Databricks' security programs.

Customers entrust us with their most critical data, and our mission is to establish the most reliable data analytics and machine learning platform globally. We are expanding our Trust and Safety Data Science team and seek talented individuals to join our group of “full stack” data scientists. Collaborating closely with engineering and security teams, you will focus on strategic initiatives that enhance the security and safety of Databricks for our clients. Our team employs advanced statistical and machine learning techniques to detect fraud and abuse across our platforms, utilizing state-of-the-art methodologies. For insights into our initiatives, check out our blog post. Engaging in fraud and abuse detection is dynamic and crucial, offering you a chance to significantly impact the security and efficiency of business operations.

For further information, please visit https://www.databricks.com/trust.

Jan 30, 2026
DoorDash
Full-time|$193.8K/yr - $285K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY; Chicago, IL

About the Team
The Trust & Safety, Integrity, and Fraud Product team at DoorDash is committed to creating a secure and reliable experience for all users on our platform, including Consumers, Merchants, and Dashers. We address intricate challenges such as fraud prevention, account takeover prevention, authenticity verification, and regulatory compliance, all while ensuring a seamless user experience. Our collaborative efforts with cross-functional teams, including Engineering, Data Science, Compliance, and Risk Operations, drive strategic initiatives that safeguard our business while promoting growth.

About the Role
As DoorDash expands beyond restaurants into a broader marketplace, our commitment to safety and trust remains paramount. We are seeking a Senior Product Manager to spearhead cross-functional teams tackling complex challenges that affect Consumers, Dashers, and Merchants. Your role will encompass various aspects of our business, from new user onboarding and in-app experiences to innovative products and services that are unparalleled in the market. Depending on your expertise, you will either lead a vertical fraud team focused on protecting Consumers or a horizontal fraud platform team dedicated to enhancing our Risk Engine, Data Signals Intelligence, and Automation/Anomaly Detection capabilities. This is an exceptional opportunity to shape the future of DoorDash during a period of rapid growth and significant impact.

You’re excited about this opportunity because you will…
- Establish the vision and long-term product strategy for a vertical or horizontal fraud team.
- Develop and implement a customer-centric product roadmap in close collaboration with senior leadership, Operations, Data Science, Analytics, Design, and Engineering teams.

Feb 5, 2026
OpenAI
Full-time|Hybrid|San Francisco

About Our Team
The Safety Systems team is seeking a dedicated Technical Program Manager who will play a pivotal role in optimizing our comprehensive safety framework and integrating diverse safety research and mitigations into ChatGPT and our API. This position is essential for the secure deployment of our innovative models by synthesizing contributions from various stakeholders, including research, product development, engineering, legal, and policy teams, to ensure all risks are effectively monitored, mitigated, or resolved.

About the Role
In the position of Safety Engineering Technical Program Manager, you will oversee critical responsibilities such as tracking progress in safety engineering and managing risk assessments. You will also supervise key data infrastructure initiatives, acting as a crucial connector to enhance the implementation of OpenAI's safety systems. Moreover, you will develop and execute a computing roadmap for your team, ensuring that our primary objectives are adequately resourced while capitalizing on new opportunities for significant safety infrastructure investments. Your primary focus will be to establish a foundational layer that supports the safety of all our models and products. This role is located in San Francisco, CA. We operate on a hybrid work model, requiring employees to be in the office three days a week, and we provide relocation assistance to new hires.

Responsibilities:
- Manage key risk domains and engage with relevant stakeholders.
- Collaborate directly with safety engineers, engineering managers, and product managers to establish a unified safety infrastructure.
- Oversee data and computational infrastructure, including capacity planning and data residency.
- Design and implement essential internal programs, including incident management and processes for regularly updating safety mitigations.
- Prioritize and manage a portfolio of infrastructure requests from internal teams.

Ideal Candidates Will:
- Hold a Bachelor's or Master's degree in Computer Science or Computer Engineering, or possess substantial engineering expertise.
- Demonstrate a proven history of delivering complex technical projects on time and to high standards.
- Exhibit strong technical skills and have effectively collaborated with top-tier engineering and research teams.
- Show expertise in creating and implementing straightforward, scalable processes that address intricate challenges.
- Possess excellent communication and interpersonal skills to work across various teams.

Mar 16, 2026
OpenAI
Full-time|On-site|San Francisco

Role Overview
OpenAI is seeking an Environmental Health and Safety Engineering & Technical Program Manager in San Francisco. This role centers on maintaining a safe workplace and guiding projects focused on environmental health and safety.

Key Responsibilities
- Oversee safety protocols and ensure compliance with relevant regulations.
- Develop and implement strategies to improve workplace safety and environmental health.
- Work closely with engineering teams and other stakeholders to embed safety measures into all engineering processes.
- Collaborate across departments to share best practices and support OpenAI’s commitment to responsible AI development.

About Working at OpenAI
This position offers the chance to contribute to OpenAI’s mission by shaping a safe and healthy work environment while partnering with teams across the organization.

Apr 15, 2026
Cloudflare, Inc.
Full-time|Hybrid

Join Cloudflare as an Escalation Engineer specializing in Zero Trust security. In this pivotal role, you will tackle complex technical challenges and provide high-level support to our clients, ensuring they receive the best possible service. Your expertise will be critical in helping clients implement and optimize their Zero Trust frameworks.

Mar 10, 2026
Intrinsic Safety
Full-time|$100K/yr|On-site|San Francisco

Join our innovative team at Intrinsic Safety, where we leverage cutting-edge technologies to tackle some of the most challenging issues of the digital era using safe and effective AI. Your role will be pivotal in enabling Trust & Safety teams to minimize time spent on tedious manual reviews and investigations, empowering them to focus on what truly matters. By transforming the methods these teams use to protect their communities from various threats, including spam, scams, misinformation, hate speech, and physical security issues, you will significantly impact the lives of many individuals. We are experiencing rapid growth, serving major social media and online service platforms.

We are seeking our inaugural Business Development Representative (BDR) to spearhead pipeline growth and assist in scaling our sales efforts. In this role, you will be responsible for outbound prospecting, qualifying leads, and scheduling meetings with decision-makers across online marketplaces and digital platforms. Collaborating closely with the go-to-market (GTM) team, you will refine messaging, optimize outreach strategies, and cultivate relationships with potential customers. This is a high-impact position that offers the chance to shape our sales approach and advance your career as we grow. This position requires in-person attendance at our San Francisco office.

Key Responsibilities:
- Generate and qualify leads through strategic outbound prospecting, focusing on trust & safety, legal, and compliance leaders.
- Implement multi-channel outreach strategies via email, phone, and LinkedIn to engage key decision-makers and secure discovery meetings.
- Conduct initial conversations to assess prospect needs, evaluate fit, and ensure seamless transitions to the GTM team.
- Accurately track outreach activities, interactions, and pipeline progress in our CRM system.
- Provide insights from customer interactions to refine our sales messaging, ideal customer profile, and go-to-market strategy.

Mar 31, 2025
Lila Sciences
Full-time|$192K/yr - $272K/yr|On-site|Cambridge, MA USA; San Francisco, CA USA

Lila Sciences is forming a dedicated AI safety team to address the unique risks and challenges posed by scientific superintelligence. The company seeks a Senior or Principal Technical Program Manager to guide the operational side of AI safety research, helping to shape how the team approaches complex and evolving problems.

Role overview
This Technical Program Manager position connects research, engineering, model development, policy, and executive leadership. The work involves translating fast-moving research into structured, accountable plans. While this is not a research role, curiosity about the technical aspects of AI safety is important. The team values clear communication and the ability to bring clarity and structure as the organization expands.

What you will do
- Act as the primary communication link between the AI safety team and technical, research, and scientific groups. Share complex results and coordinate resource needs. Establish information flows to keep teams connected.
- Promote accountability within cross-functional, distributed teams, building consensus and trust through open communication and sound judgment.
- Support rapid experimentation and iteration by refining and applying effective program management practices.
- Create clear documentation and reports to communicate vision, track progress, and ensure alignment with company objectives.
- Accurately represent program status and risks, even in uncertain or shifting situations.

Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Life Sciences, or a related discipline.
- Minimum of 6 years of program or project management experience in technology or life sciences.
- Demonstrated success in program management, leading cross-functional teams, and delivering projects.
- Strong analytical and problem-solving abilities, with skill in turning technical requirements into actionable plans.
- Excellent written and verbal communication skills, including experience preparing executive-level documents, roadmaps, and updates.

Location
This position is based in Cambridge, MA or San Francisco, CA, USA.

Apr 24, 2026
Sofi
Full-time|Remote|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

Join Sofi as a Senior Insider Trust & Fraud Investigator, where you will play a pivotal role in safeguarding our customers and ensuring their trust. You will leverage your expertise to investigate and mitigate insider threats and fraudulent activities, contributing to a secure financial environment.

Mar 25, 2026
Ripple Labs Inc.
Full-time|$208K/yr - $260K/yr|On-site|San Francisco, CA, United States

At Ripple, we are pioneering a future where value is transferred with the same ease as information. Our bold vision is already in motion as we provide innovative cryptocurrency solutions to financial institutions, businesses, governments, and developers. By enhancing the global financial ecosystem, we aim to create economic equity and opportunities for countless individuals worldwide. Join us in doing remarkable work, advancing your career, and collaborating with a supportive team. If you're eager to witness the impact of your work and unlock incredible career advancement, come aboard and help us create tangible value.

THE WORK:
At Ripple, the Identity and Trust Platform is envisioned as the cornerstone of our "One Ripple" initiative. This platform aims to address identity-related challenges across all products and acquisitions by establishing a unified system of record for customers and their identities. It will separate operational identity from outdated systems, harmonize compliance and verification contexts (such as shared Know Your Business readiness), and create a common entitlements layer for consistent product access. We seek Staff Engineers who are driven by a passion for tackling complex, foundational issues, eager to create significant impacts by addressing identity and trust obstacles in a global financial network, and motivated to design a scalable platform that will enhance the unified customer experience for all of Ripple's current and future offerings. The Identity and Trust Platform is essential for fostering customer trust and establishing a singular system of record.

As the technical leader of a small yet impactful team, your responsibilities will include:
- Building the Foundation for 'One Ripple': Establishing the Identity Platform as the singular system of record, decoupling operational identity from outdated systems to facilitate a unified experience across all Ripple products (Payments, Custody, Stablecoins, Ripple Prime, and others).
- Driving Strategic Decoupling: Taking ownership of the crucial migration of core identity logic from legacy systems, eliminating years of technical debt and paving the way for their retirement.
- Defining the Future of Trust: Designing and implementing the shared entitlements layer and unified compliance context (e.g., shared KYB readiness) to ensure consistent and secure access to products for all customers.
- Scaling and Mentoring: Acting as a technical anchor, setting architectural direction, promoting engineering excellence, and mentoring engineers to cultivate a high-performing foundational platform team.

Mar 17, 2026
