About the job
Join us at Lakera, where innovation meets purpose in AI security. As a Senior Research Engineer for Security Foundation Models, you will be at the forefront of developing scalable solutions that ensure the integrity and safety of AI applications. We are not just another research lab; we are engineering the future of AI security with immediate, impactful solutions.
In this role, you will help shape our strategy, contribute to key decisions, and build robust systems that defend AI against security threats. Your work will span cutting-edge distributed training, fine-tuning large-scale LLMs, and creating systems that raise the standard of AI security. You will be instrumental in scaling training across GPU infrastructure and optimizing inference, directly shaping how AI can be deployed securely at scale.
About Lakera
At Lakera, we are committed to ensuring that AI behaves as intended. We envision a future where AI agents enhance both our professional and personal lives. Our mission is to construct the security infrastructure necessary for this future, empowering security teams and developers to harness AI technologies effectively. Our collaborations span Fortune 500 companies, startups, and foundational model providers, protecting them and their users from adversarial misalignment. We are also proud creators of Gandalf, the world's leading AI security game.
With offices in San Francisco and Zürich, we thrive on a culture of speed and intensity. Our team operates collaboratively, yet we encourage individual ownership and accountability. Transparency is a core value for us, and we are dedicated to excellence and fostering diverse perspectives to achieve superior outcomes.
Example Projects
Scale training of security foundation models to large parameter counts.
Design and optimize distributed training pipelines for efficient post-training of LLMs.
Implement reinforcement learning strategies for large-scale LLM post-training.
Develop adversarial training methods to fortify AI systems against real-world risks.
Engineer resilient ML infrastructure to support high-performance security measures.