Qualifications
- Proven experience in security risk management, preferably in the technology or AI sector
- Strong leadership skills with a track record of developing and mentoring teams
- Expertise in risk assessment methodologies, stress testing, and scenario modeling
- Excellent communication skills, with the ability to convey complex risk concepts to diverse audiences
- Familiarity with regulatory requirements and industry standards related to security risk management
- Ability to navigate and prioritize high-stakes, ambiguous situations
About the job
At Anthropic, we are on a mission to develop AI systems that are reliable, interpretable, and steerable, ensuring they are safe and beneficial for both users and society. As the Director of Security Risk Management, you will own the strategy, implementation, and ongoing enhancement of our comprehensive security risk management program. Reporting to the Head of Security Risk & Compliance, you will lead a team of 4–6 risk engineers that serves as the primary point of contact for risk intake, triage, quantification, and assessment across the organization. Collaborating closely with senior leadership, you will establish a robust risk governance framework that defines how Anthropic identifies, evaluates, escalates, and mitigates its most critical security risks.
This role requires a proactive leader who not only sets the vision and priorities for the risk function but also actively engages in tackling complex, novel, and high-stakes risk scenarios—especially in areas where traditional frameworks may fall short. You will oversee risk quantification processes such as stress testing and scenario analysis, guide your team through systematic risk assessments, and ensure that risk insights translate into actionable strategies for engineering, product development, and executive leadership.
About Anthropic
Anthropic is a pioneering organization dedicated to building AI systems that are safe, interpretable, and beneficial for users and society at large. Our rapidly expanding team consists of passionate researchers, engineers, policy experts, and business leaders who collaborate to create responsible AI. We foster a culture of innovation and accountability, ensuring that our work contributes positively to the future of technology.