About the job
At Menlo Security, we are on a mission to empower secure connections and collaborations worldwide. As we navigate the evolving landscape brought on by COVID-19, our commitment to security has never been more critical. We proudly serve a diverse clientele, including Fortune 500 companies, nine out of ten of the largest global banks, and the Department of Defense.
As we expand from a team of 400, we are eager to welcome individuals who embody passion, empathy, and agility. The ideal candidate will be ethical, exceptionally organized, and dedicated to seeing tasks through to completion. A service-oriented mindset, along with the humility to accept feedback and the confidence to provide it, is essential.
Menlo Security is well-capitalized for growth, supported by top-tier investors including Vista Equity Partners, General Catalyst, JPMC, American Express, HSBC, and Ericsson Ventures.
We are searching for a Senior AI Security Engineer dedicated to tackling the security challenges posed by autonomous AI agents. In this pivotal role, you will conduct research, design, and implement innovative strategies to detect and mitigate threats such as prompt poisoning, context manipulation, and malicious agent behaviors targeting AI systems.
Collaboration with engineering teams will be key as you translate cutting-edge security research into actionable, deployable security controls, particularly for agents interacting with untrusted web content.
Core Responsibilities:
- Research Emerging Agentic Threats: Investigate novel attack vectors against AI agents, including but not limited to prompt injection, context poisoning, and adversarial content embedding.
- Architect Scalable Agentic Workflows: Develop robust, high-performance pipelines that secure agent-to-web interactions.
- Develop Novel Detection & Mitigation Techniques: Design and prototype innovative approaches for identifying malicious prompts and unsafe contextual signals in AI agents powered by large language models.
- Agent Security Controls: Implement these techniques within agentic runtimes to ensure the safe reasoning of agents over external data sources.
