About the job
Why Join Faculty?
Founded in 2014, Faculty works at the forefront of artificial intelligence, which we believe is the most transformative technology of our era. We have partnered with more than 350 clients globally, enhancing their performance through human-centered AI solutions.
We prioritize genuine innovation over fleeting trends. We are committed to developing and deploying responsible AI that meaningfully improves outcomes. Our clients span government, finance, retail, energy, life sciences, and defense, all of whom benefit from our deep expertise in technology, product development, and delivery.
Our rapidly expanding business and reputation drive us to seek individuals who share our passion for intellectual exploration and aspire to create a positive technological legacy.
AI is a groundbreaking technology; at Faculty, you will have the freedom to conceive its most impactful applications and bring them to fruition.
About Our Team
The Research team at Faculty is dedicated to critical red teaming and developing evaluations for misuse capabilities in sensitive domains, including CBRN, cybersecurity, and international security. We collaborate with leading frontier model developers and national safety institutes, and our contributions have been recognized in OpenAI's system card for o1.
We also engage in fundamental technical research focused on mitigation strategies, with our findings presented at peer-reviewed conferences and shared with national security organizations. Additionally, we create evaluations for model developers across various safety-related areas, highlighting our comprehensive expertise in the safety domain.
Role Overview
We are looking for a Senior Research Scientist to join our high-impact R&D team. You will lead innovative research that advances scientific understanding and supports our goal of developing safe AI systems. This is a vital role within a small, empowered team conducting essential red teaming and evaluations of frontier models in sensitive domains such as cybersecurity and national security, giving you the opportunity to shape how safe AI is deployed in real-world settings.