
Protection Scientist Engineer, Intelligence and Investigations

OpenAI | London, UK
On-site, full-time




About the job

About Our Team

At OpenAI, our mission is to advance artificial intelligence technology to ensure it serves the best interests of humanity. We understand that the path to achieving this goal involves real-world implementation and continuous improvement based on our findings.

The Intelligence and Investigations team plays a crucial role in this mission by identifying and examining instances of misuse of our products, particularly emerging forms of abuse. This work empowers our partner teams to formulate data-driven product policies and develop extensive safety measures. By thoroughly comprehending abuse patterns, we can empower users to innovate with our technologies safely.

About the Role

The Protection Scientist Engineer position is a multidisciplinary role that blends expertise in data science, machine learning, investigative methodologies, and policy development. As a key member of the Intelligence and Investigations team, your primary responsibility will be to design and implement systems that proactively detect and mitigate abuse of OpenAI's products. This includes establishing robust abuse monitoring frameworks for new releases, maintaining oversight of existing offerings, and building innovative defenses against our most significant risks. You will also investigate urgent escalations, particularly abuse that evades our existing safety protocols. This work requires a deep understanding of our products and data, and close collaboration with product, policy, and engineering teams.

This position is based in our London office and involves participation in an on-call rotation to address urgent matters outside regular working hours. Some investigations may require handling sensitive content, including material that is sexual, violent, or otherwise distressing.

Key Responsibilities

  • Define and implement abuse monitoring protocols for new product launches.
  • Enhance existing processes to sustain monitoring operations, with a focus on automating routine monitoring tasks.
  • Develop, and hand off to owning teams, systems for detecting, reviewing, and enforcing against significant abuse cases.
  • Collaborate with Product, Policy, Operations, and Investigative teams to identify and mitigate key risks, while ensuring Engineering teams have the necessary data and tools.

Ideal Candidate Profile

  • A minimum of 4 years of experience in a related field, with a strong foundation in data science and machine learning.
  • Exceptional analytical skills, with a proven ability to tackle complex problems.
  • Experience working cross-functionally within diverse teams, with strong communication skills.
  • Familiarity with safety systems and protocols in technology environments.

About OpenAI

OpenAI is at the forefront of artificial intelligence innovation, dedicated to ensuring that advancements in AI technology benefit all of humanity. With a commitment to ethical practices and safety, we strive to create solutions that empower users while safeguarding against misuse.
