AI Security Researcher Jobs in United Kingdom

3,185 jobs found

1 - 20 of 3,185 Jobs
Wiz Inc.
Full-time|Remote|Remote - United Kingdom

Join Wiz, a pioneering company reshaping cloud security and empowering businesses to excel in the cloud. As the fastest-growing startup, Wiz is on a mission to help organizations secure their cloud environments and accelerate their growth. With a proven track record of success and a culture that values top-tier talent, we invite you to be part of our journey…

Mar 4, 2026
Moonvalley AI
Full-time|On-site|UK

Join our innovative AI research lab at Moonvalley AI, where we are pioneering foundational intelligence for the physical world. Our advanced models tackle intricate real-world challenges, enabling intelligent robots, next-gen wearables, and media solutions. We operate at the cutting edge of world models, video generation, and robotics, crafting systems that understand complex environments, reason about objects and dynamics, and seamlessly translate high-level AI intentions into safe, efficient, and responsive actions across various robotic platforms.

In this role, you will collaborate closely with Machine Learning researchers to develop multimodal world models and generative systems, and work alongside hardware teams and partners to ensure optimal performance of robotic platforms and sensing dynamics in real-world applications.

Key Responsibilities
- Foundation Development: Team up with our world model experts to create a state-of-the-art training and simulation platform for robotics.
- World Representation: Innovate persistent 3D/4D scene representations that ensure temporal consistency.
- Intelligence Advancement: Enhance robotics planning and decision-making capabilities using cutting-edge world models.
- Sim2Real Execution: Guarantee that sensing and system dynamics operate reliably in high-stakes, real-world scenarios.
- Boundary Pushing: Collaborate with leading ML researchers to innovate generative models and advance physical AGI.

Areas of Interest
We are particularly keen on candidates with expertise in:
- World Models for Embodied Systems
- 3D / 4D Generative Models
- 3D Reconstruction and Scene Understanding
- Embodied AI
- Object-level / Semantic SLAM
- Multimodal AI

Qualifications
- A strong background in robotics or computer vision.
- Proven experience with 3D perception, scene representations, or world models.
- Experience collaborating with interdisciplinary teams to achieve complex objectives.

Mar 13, 2026
Wiz Inc.
Full-time|On-site|London, UK

Join the pioneering force in cloud security at Wiz, where we empower businesses to excel in the cloud. Recognized as the fastest-growing startup ever, we are on a mission to enhance the security of cloud environments, enabling organizations to accelerate their growth. Trusted by security teams globally, we boast a solid track record of success and a culture that champions exceptional talent.

Our diverse team, comprising Wizards from over 20 countries, collaborates to safeguard the infrastructure of hundreds of clients, including more than half of the Fortune 100. We scan and secure over 230 billion files each day. As a leading player in a rapidly expanding market, this is your opportunity to make a profound impact. At Wiz, you’ll have the creative freedom to innovate and fully utilize your skills to contribute to our remarkable growth. Join us in crafting secure cloud environments that empower top companies to move swiftly.

SUMMARY
We are seeking a skilled AI Security Researcher to become an integral part of Wiz’s risk-driven approach to cloud security. This position demands extensive technical research into intricate cloud and AI-native environments to uncover the most critical, unaddressed risks.

WHAT YOU’LL DO
- Conduct in-depth technical research to identify and report new risks and attack vectors specific to modern cloud- and AI-native architectures.
- Identify and communicate the most significant unaddressed risk areas, collaborating with Product and Engineering teams to translate research into actionable product features.
- Establish essential foundational product capabilities through compelling proofs of risk (demonstrating impact) and technical proofs of concept (showing solutions).
- Collaborate closely with Product and Engineering teams to guarantee comprehensive risk coverage and support the development of innovative security solutions.

Mar 4, 2026
Obsidian Security
Full-time|On-site|Manchester, UK

Established in 2017, Obsidian Security aims to address a critical need: securing the SaaS applications that underpin modern business operations—such as Microsoft 365, Salesforce, and many others. Supported by esteemed investors like Greylock, Norwest Venture Partners, and IVP, we've developed a comprehensive SaaS security platform designed to minimize risks, detect and respond to threats, and thwart breaches at their source. Our team comprises industry leaders who have shaped the endpoint and identity security landscape at prominent firms including CrowdStrike, Okta, Cylance, and Carbon Black.

Currently, we are redefining SaaS security in the age of agentic AI. Today, Obsidian is the trusted choice for global enterprises such as Snowflake, T-Mobile, and Pure Storage, safeguarding over 200 organizations across North America, Europe, the Middle East, Southeast Asia, Australia, and New Zealand—including many Fortune 1000 and Global 2000 companies. With significant global momentum, an expanding partner ecosystem featuring SentinelOne, Databricks, and Google Cloud, and a major fundraising effort on the horizon, we are rapidly scaling towards sustainable growth and IPO readiness. Join us in shaping the future of SaaS security!

About the Role
At Obsidian, we are committed to clarity in a complex landscape. We seek Security Research Engineers who excel at the intersection of security research and product engineering. You will delve deeply into SaaS platforms, develop impactful product features, and contribute to the defensive strategies that protect our customers at scale.

This role is perfect for individuals who enjoy gaining a profound understanding of intricate systems, think like attackers, and aspire to create market-leading security solutions for Obsidian's clientele.

What You’ll Do
- Develop a thorough understanding of major SaaS and AI platforms to uncover essential security insights.
- Investigate and prototype solutions to integrate data across SaaS and AI platforms, identifying hazardous combinations of risks that may lead to significant compromises.
- Recognize and respond to emerging attacker techniques against SaaS and AI platforms, innovating new methods to detect or mitigate future attacks.
- Utilize AI to enhance research into SaaS and AI platform security.

Mar 16, 2026
MazeHQ
Full-time|Remote|Remote (Europe)

Role Overview:
Join MazeHQ as a Security Engineer & Researcher, where you'll play a pivotal role in reimagining security risk evaluation amidst the evolving landscape of AI-driven vulnerability detection. This is an exciting chance to contribute to our dynamic security research team at a leading startup blending generative AI with cybersecurity. Your expertise will significantly influence how our AI models comprehend and prioritize threats to cloud security.

In this position, you will act as the key human-in-the-loop specialist, scrutinizing cloud vulnerability outputs from our AI systems, conducting thorough research to authenticate and contextualize threats, and generating critical labels that educate our models to differentiate between significant risks and trivial alerts. Collaborating with a team of security researchers, you will enhance our labeling processes and provide essential insights to guide product development based on the real-world threat patterns you uncover.

This role is ideal for a security researcher eager to lead the charge in AI-driven threat detection, passionate about exploring cloud security vulnerabilities, and looking to amplify their security insights through state-of-the-art technology while being part of a vibrant team.

Oct 20, 2025
toloka-ai
Contract|Remote|Remote — United Kingdom

Join our innovative team at toloka-ai as a Freelance Research Physicist specializing in AI training. This remote position invites you to contribute your expertise in physics and artificial intelligence to advance our projects and solution development.

As an AI Trainer, you will be responsible for designing and conducting experiments, interpreting data, and sharing your findings with our team to enhance our AI models. If you have a passion for research and the application of physics principles in technology, we want to hear from you!

Apr 13, 2026
Mindrift
Part-time|$35/hr|Remote|Remote — Glasgow, Scotland, United Kingdom

Please submit your CV in English and specify your level of English proficiency. Mindrift matches skilled professionals with freelance, project-based work focused on testing and improving AI systems for technology companies. This is not a permanent position; assignments vary by project.

Role overview
This freelance AI Trainer role is designed for research physicists interested in part-time, flexible work. Assignments center on creating and documenting advanced physics problems for AI training. Tasks may include:
- Designing original optics problems that reflect real research workflows in physics
- Developing computationally intensive challenges that cannot be solved manually in a reasonable timeframe
- Writing problems requiring advanced reasoning in mechanics, electromagnetism, thermodynamics, and quantum mechanics
- Basing assignments on authentic research or practical applications in optics and physics
- Clearly documenting problem statements and supplying verified, correct solutions

Requirements
- Degree in Physics (theoretical, experimental, or computational) or a closely related field
- At least 2 years of experience in applied, research, or teaching roles
- Familiarity with numerical simulation techniques
- Ability to design problems reflecting real-world physics research workflows
- Creative problem-solving across multiple physics disciplines
- Understanding of physics modeling and approximation methods
- Advanced written English skills (C1 level or higher)

Application process
1. Apply
2. Pass qualification(s)
3. Join a project
4. Complete assigned tasks
5. Receive compensation

Time commitment
Project phases typically require 10–20 hours per week. Actual workload depends on project needs and may vary outside of active phases.

Compensation
Contributors can earn up to $35 per hour, depending on expertise, pace, and project complexity. Pay rates may differ for other projects on the platform.

Location: Remote, Glasgow, Scotland, United Kingdom

Apr 24, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively conducting interviews and aim to fill this position promptly as soon as we find the right candidate.

THE OPPORTUNITY
Apollo Research is searching for a Senior Security Engineer to take full ownership of security protocols and practices within our organization. As the first dedicated security hire, you will play a crucial role in maintaining the trust of our innovative AI lab partners and supporting our research mission. This position is embedded within the engineering team and reports directly to the CEO.

YOUR RESPONSIBILITIES WILL INCLUDE
- Establishing and leading Apollo's security program. You will create and manage the security roadmap, perform risk assessments, and adapt the program as the organization evolves, defining the security posture in relation to our size, threat model, and partner relationships.
- Fostering trust with our AI lab partners. You will be the primary contact for security teams of our partners, building relationships with their CISOs, and ensuring our security practices are documented and meet the necessary standards for our partnerships.
- Setting security strategies for engineering. You will define security principles and an AppSec strategy that the engineering team will adopt, creating efficient pathways for secure development.
- Defining the use of AI tools and integrations at Apollo. You will determine approved tools, data handling procedures, and vetting processes for new technologies, ensuring a balance between security and the need for cutting-edge research tools.
- Managing our security tooling stack and automating operations. You will select, implement, and oversee security controls such as EDR/MDR, endpoint management, email protection, and identity management, while automating processes wherever feasible.
- Leading compliance and certification efforts. You will spearhead certification initiatives (ISO 27001, SOC 2) as required, integrating compliance into our security practices.
- Managing IT administration across the organization. You will oversee Google Workspace and other IT resources.

Mar 6, 2026
Hudson River Trading (HRT)

AI Researcher

Full-time|$200K/yr - $300K/yr|On-site|London, United Kingdom; New York, NY, United States; Seattle, Washington, United States

Join Hudson River Trading (HRT) as an AI Research Scientist in our cutting-edge HAIL (HRT AI Labs) team. This team is dedicated to creating and refining advanced models that empower our trading operations, significantly influencing our market strategies. We are on a mission to develop 'foundation models for markets' that analyze extensive market data to forecast future trends. As an integral member of our small, agile team, you will have the freedom to explore innovative research paths and contribute to impactful solutions. Our state-of-the-art research infrastructure, featuring high GPU-to-researcher ratios, and our robust support teams will enable you to bring your vision to life. Your contributions will directly affect our business outcomes, tackling complex challenges without straightforward solutions. You will engage in enhancing every aspect of our models, from data featurization to architectural design and training methodologies, ultimately influencing trading decision-making.

Feb 9, 2026
PostHog
Full-time|Hybrid|Hybrid (UK)

About PostHog
At PostHog, we are dedicated to delivering comprehensive solutions that empower businesses from their inception to their public offering and beyond. We serve as the quintessential operating system for software developers.

Originating from open-source product analytics, we emerged from Y Combinator's W20 cohort. Since then, we have successfully launched over a dozen innovative products, including:
- An integrated data warehouse that allows users to seamlessly query product and customer data using custom SQL insights.
- A customer data platform enabling effortless data integration across various channels.
- PostHog AI, our AI-driven analyst that assists users with product inquiries, uncovers valuable session recordings, and crafts tailored SQL queries.

Looking ahead, we are excited to develop CRM, workflow, revenue analytics, and support solutions. When we claim to offer every essential product for businesses, we stand by that promise!

Our ethos includes:
- Product-driven: With over 100,000 companies utilizing PostHog, our growth has primarily stemmed from organic word-of-mouth. Our product-market fit is exceptionally strong.
- Default alive: Our revenue grows at an average rate of 10% month-over-month, reflecting our operational efficiency. We secure funding to fuel our ambition and accelerate growth, not merely to sustain operations.
- Well-capitalized: Having raised over $100 million from top-tier investors, we are well-positioned for a long-term, ambitious journey.

We are committed to creating an outstanding product for our users, building a team of exceptional talent, delivering swiftly, and embracing our unique culture.

Feb 5, 2026
Menlo Security
Full-time|On-site|EMEA - UK

At Menlo Security, we are on a mission to empower secure connections and collaborations worldwide. As we navigate the evolving landscape brought on by COVID-19, our commitment to security has never been more critical. We proudly serve a diverse clientele, including Fortune 500 companies, nine out of ten of the largest global banks, and the Department of Defense.

As we expand from a team of 400, we are eager to welcome individuals who embody passion, empathy, and agility. The ideal candidate will be ethical, exceptionally organized, and dedicated to seeing tasks through to completion. A service-oriented mindset, along with the humility to accept feedback and the confidence to provide it, is essential.

Menlo Security is well-capitalized for growth, supported by top-tier investors including Vista Equity Partners, General Catalyst, JPMC, American Express, HSBC, and Ericsson Ventures.

We are searching for a Senior AI Security Engineer dedicated to tackling the security challenges posed by autonomous AI agents. In this pivotal role, you will conduct research, design, and implement innovative strategies to detect and mitigate threats such as prompt poisoning, context manipulation, and malicious agent behaviors targeting AI systems. Collaboration with engineering teams will be key as you translate cutting-edge security research into actionable, deployable security controls, particularly for agents interacting with untrusted web content.

Core Responsibilities:
- Research Emerging Agentic Threats: Investigate novel attack vectors against AI agents, including but not limited to prompt injection, context poisoning, and adversarial content embedding.
- Architect Scalable Agentic Workflows: Develop robust, high-performance pipelines that secure agent-to-web interactions.
- Develop Novel Detection & Mitigation Techniques: Design and prototype innovative approaches for identifying malicious prompts and unsafe contextual signals in AI agents powered by large language models.
- Agent Security Controls: Implement these techniques within agentic runtimes to ensure the safe reasoning of agents over external data sources.

Feb 18, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively interviewing candidates and aim to fill this position promptly with the right individual.

ABOUT THE ROLE
At Apollo Research, we conduct evaluations that meticulously assess the risks associated with advanced AI systems. Collaborating with leading laboratories such as OpenAI, Anthropic, and Google DeepMind, you will have the unique opportunity to engage with cutting-edge AI models ahead of their public release. The ideal candidate possesses a passion for rigorously testing state-of-the-art AI technologies and excels in creating and automating efficient evaluation pipelines.

YOUR RESPONSIBILITIES WILL INCLUDE
- Conducting pre-deployment evaluation campaigns on the world's most advanced AI systems. Our partnerships with various labs provide access to a wide range of models that no single lab can offer, allowing you to be among the first to engage with new models.
- Exploring AI cognition by analyzing extensive model transcripts to identify novel behavioral patterns that have yet to be documented. These insights can be both surprising and enlightening, including phenomena like non-standard language and reward-seeking reasoning as discussed in our anti-scheming paper.
- Developing new evaluations for frontier risks, creating innovative test environments, and scaling these across multiple scenarios.
- Collaborating with leading AI developers to share your insights, receive feedback, and ensure your evaluations influence the deployment strategies of the most advanced AI systems.
- Optimizing and automating the evaluation pipeline. We utilize automation in building, executing, and analyzing evaluations. As agent capabilities evolve rapidly, you will have the autonomy to reshape the pipeline to keep pace with these advancements.

KEY REQUIREMENTS

Feb 13, 2026
Moneybox
Full-time|On-site|London

About Moneybox
At Moneybox, we strive to empower individuals to enrich their lives. We believe that wealth is not merely about money but rather about having the resources for more—more freedom, opportunities, and peace of mind. As an award-winning wealth management platform, we assist over 1.5 million users in building their financial futures through saving, investing, purchasing homes, and planning for retirement.

Job Overview
We are developing Aurora, an innovative AI system aimed at guiding customers to achieve optimal financial outcomes. This role presents a significant technical challenge: how to effectively provide reliable guidance to customers who may have incomplete or uncertain information regarding their financial situations and goals, all while navigating a regulated environment where decisions must be auditable and traceable.

Key challenges include efficiently addressing uncertainties in customer data through active information gathering, formulating questions at the right times without overwhelming users, and translating natural language policies and regulations into formal optimization logic that is both accurate and inspectable. Additionally, you will need to integrate learned and symbolic components to ensure that the overall system operates reliably, degrades gracefully, and remains comprehensible to human users, all without incurring excessive engineering costs on the specialized elements of the system.

We hold strong hypotheses and established architectural plans for these challenges, yet we remain open to revising our approaches when presented with compelling arguments or new evidence. If you have insights that could improve our strategies, we welcome your input.

Our models primarily operate internally. Our development process utilizes Databricks on Azure, with deployments conducted via Databricks or directly on Azure Kubernetes Service (AKS).

This role represents the pinnacle of research within our ML team. You will report directly to the Director of AI and Decision Intelligence and collaborate with a principal data scientist, senior ML engineer, senior data scientist, and two ML engineers.

Feb 27, 2026
Graphcore
Full-time|On-site|Bristol, UK

Join Graphcore: Pioneering the Future of AI Computing
At Graphcore, we are at the forefront of AI computing, driven by a team of leading semiconductor, software, and AI specialists. Our expertise spans the entire AI compute stack—from silicon to software and extensive datacenter infrastructure. As a proud member of the SoftBank Group, we are backed by substantial long-term investments, enabling us to contribute significantly to the rapidly expanding SoftBank AI ecosystem. As we embark on this exciting journey, we are expanding our global teams, uniting brilliant minds to tackle the most challenging problems in AI, while ensuring that every team member can make a meaningful impact on our company, products, and the future of artificial intelligence.

Your Role as a Research Scientist
In this pivotal role, you will drive the advancement of AI research, exploring innovative ideas that challenge the boundaries of critical AI and machine learning problems. We recognize that specialized hardware has been instrumental in the evolution of AI over the past decade, and we firmly believe that the synergy between hardware-aware AI algorithms and AI-aware hardware development will continue to propel this dynamic field forward. We seek passionate scientists and engineers equipped with the theoretical knowledge and practical skills necessary for groundbreaking AI research.

We are especially interested in candidates with experience in low-power, edge, and embodied AI applications, such as robotics, autonomous driving, and augmented/virtual reality. Your expertise in training and deploying multimodal AI models in these contexts will be invaluable, focusing on areas like world models, real-time computer vision, and reasoning over audio and video streams.

About Our Research Team
Our team at Graphcore Research is engaged in both foundational and applied research, aiming to define the computational needs of machine intelligence while demonstrating how hardware can facilitate the creation of next-generation AI models. We are proud to publish our findings at leading AI and machine learning conferences (including NeurIPS, ICML, and ICLR) and collaborate with research teams and organizations around the globe.

We foster a collaborative and supportive environment, organizing our efforts around individual research interests to tackle challenges in domains such as efficient computation, model scaling, and the distributed training and inference of AI models across diverse modalities and applications, including sequence- and graph-based data. Our offices span London, Cambridge, and Bristol, promoting cross-location projects and discussions.

To better understand our work, feel free to explore our research papers and articles.

Mar 13, 2026
Graphcore
Full-time|On-site|Bristol, UK; Cambridge, UK; London, UK

Join Graphcore as a Research Scientist
At Graphcore, we are at the forefront of AI compute innovation. Our team, comprised of semiconductor, software, and AI experts, is dedicated to developing an integrated AI compute stack, spanning from silicon and software to datacenter-scale infrastructure. As part of the SoftBank Group, we enjoy substantial long-term investment to push the boundaries of AI technology within the rapidly expanding SoftBank AI ecosystem. To harness the immense potential of AI, we are actively growing our global teams and inviting the brightest minds to tackle the most challenging problems, providing everyone the chance to make a significant impact on our company, products, and the future of artificial intelligence.

Role Overview
As a Research Scientist at Graphcore, your contributions will drive advancements in AI research by exploring innovative concepts that address critical AI/ML challenges. Over the past decade, specialized hardware has been pivotal in AI progress, and we believe that the synergy between hardware-aware AI algorithms and AI-aware hardware will be essential for continuing breakthroughs in this fascinating field. We seek passionate scientists and engineers equipped with the theoretical and practical expertise necessary for impactful AI research. Ideal candidates will have experience in low-power, edge, and embodied AI applications, including robotics, autonomous driving, and augmented/virtual reality. Your work will involve training and deploying multimodal AI models in these contexts, focusing on areas like world models, real-time computer vision, and generating and reasoning over audio/video streams.

About Our Team
Graphcore Research engages in both fundamental and applied research, aiming to characterize the computational needs of machine intelligence and showcase how hardware can propel the next generation of innovative AI models. We regularly publish our findings at top AI/ML conferences such as NeurIPS, ICML, and ICLR, and collaborate with various research teams and organizations worldwide. We take pride in our supportive and collaborative environment, organizing ourselves around individual research interests to collectively solve problems in areas like efficient compute, model scaling, and distributed training and inference of AI models for diverse modalities and applications, including sequence- and graph-based data. Our teams are spread across London, Cambridge, and Bristol, fostering projects and discussions that connect all our locations.

To get a deeper insight into our work, we encourage you to read one of our publications or explore an article on our website.

Mar 13, 2026
Graphcore
Full-time|On-site|Cambridge, UK

About Graphcore
At Graphcore, we are pioneering the future of artificial intelligence computing. Our team comprises semiconductor, software, and AI specialists with extensive expertise in developing the complete AI compute stack—from silicon and software to large-scale infrastructure. As a proud member of the SoftBank Group, we benefit from substantial long-term investments, enabling us to contribute essential technology to the rapidly evolving SoftBank AI ecosystem. To capture the immense potential of AI, Graphcore is expanding globally, uniting the brightest minds to tackle the most challenging problems, where every individual is empowered to make a significant impact on our company, our products, and the future of AI.

Job Summary
As a Research Scientist at Graphcore, you will play a vital role in advancing AI research by exploring innovative ideas that address significant AI/ML challenges. The evolution of AI has been primarily driven by specialized hardware over the past decade, and we believe that developing hardware-aware AI algorithms and AI-optimized hardware will remain crucial for progress in this exciting domain. We seek candidates who are not only curious scientists but also proficient engineers, equipped with both theoretical knowledge and practical skills essential for impactful AI research. We welcome applicants with experience in low-power, edge, and embodied AI applications, including robotics, autonomous vehicles, and augmented/virtual reality. Your expertise will contribute to the training and deployment of multimodal AI models in these contexts, focusing on areas such as world models, real-time computer vision, and reasoning over audio and video streams.

The Team
The Graphcore Research team engages in both fundamental and applied research to define the computational needs of machine intelligence and showcase how hardware advancements can lead to the next generation of innovative AI models. We actively publish in leading AI/ML conferences (NeurIPS, ICML, ICLR) and participate in specialized workshops while collaborating with various research teams and organizations globally. We take pride in fostering a supportive and collaborative environment, where we organize ourselves around individual research interests to collectively solve challenges in domains such as efficient computation, model scaling, and distributed training and inference of AI models across multiple modalities and applications, including sequence and graph-based data. Our teams are spread across London, Cambridge, and Bristol, with projects and discussions that involve all locations.

Mar 13, 2026
Apollo Research
Full-time|On-site|London

Application Deadline: We are actively interviewing candidates and seek to fill this position promptly with a suitable applicant.

THE OPPORTUNITY
Become a vital member of our groundbreaking AGI safety product team and play a key role in transforming intricate AI research into actionable tools aimed at minimizing AI-related risks. In your role as an applied researcher, you will collaborate closely with our CEO (who also acts as Head of Product), product engineers, and the Evals team’s software engineers to develop solutions that enhance AI agent safety for our clients. Currently, we are concentrating on the oversight of AI coding agents to identify failures in safety and security. You will be part of a compact team, which allows you to significantly influence both team dynamics and technological approaches while quickly assuming greater responsibilities.

This position is perfect for you if you have a fervent desire to employ empirical research methodologies to enhance the safety of AI systems in practical applications. If you relish the challenge of converting theoretical AI risks into tangible detection mechanisms, thrive in fast-paced environments, and are eager to see your research make a meaningful impact on real-world AI safety, then we would love to hear from you.

KEY RESPONSIBILITIES
Research & Development
- Collect and catalog coding agent failure modes systematically from real-world instances, public examples, research literature, and theoretical predictions.
- Design and execute experiments to evaluate monitor effectiveness across various failure modes and agent behaviors.
- Develop and maintain evaluation frameworks to track advancements in monitoring capabilities.
- Refine monitoring strategies based on empirical findings, optimizing detection accuracy alongside computational efficiency.
- Stay updated with the latest research in AI safety, agent failures, and detection methodologies.
- Keep abreast of advancements in coding security and safety vulnerabilities.

Monitor Design & Optimization
- Create a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors).
- Experiment with various reasoning strategies and output formats to enhance monitor reliability.
- Design and evaluate hierarchical monitoring architectures and ensemble approaches.

Dec 17, 2025
aptura logo
Internship|On-site|United Kingdom

About aptura
At aptura, we are pioneers in the realm of artificial intelligence, seamlessly merging human insights with cutting-edge AI technology. Our team, comprised of former leaders from Lazard and Partners Group, is dedicated to transforming industries worldwide, with operational hubs in London and San Francisco. We are on a mission to create AI systems that comprehend and analyze finance, healthcare, and law by collaborating with domain experts to enrich AI training data.

Your Role
This is a unique internship opportunity where you will engage directly with our founding team and seasoned finance professionals. You will contribute to projects that redefine AI's understanding of financial markets through hands-on and analytical work:
- Financial Modeling: Develop and implement financial models including DCF valuations, comparable company analyses, LBO models, and credit assessments to serve as critical training and evaluation benchmarks for our AI systems.
- In-depth Financial Analysis: Conduct thorough analyses of earnings reports, dissect financial statements, evaluate credit metrics, and produce research outputs of institutional quality to teach AI models the thought processes of finance professionals.
- Evaluation Framework Design: Create scoring rubrics and benchmarks to assess AI models' capabilities in real-world finance scenarios, including merger analysis, debt structuring, and equity valuation.
- Annotation Guidelines Development: Convert intricate finance workflows (such as deal origination, due diligence, and credit committee processes) into detailed, structured labeling instructions for AI training.
- AI Output Stress Testing: Critique model-generated financial analyses, identifying logical, methodological, or market comprehension errors that a trained analyst would recognize.
- Collaboration with ML Engineers: Partner across teams to refine prompts, enhance data quality, and boost model performance for finance-specific tasks.

We are a dynamic and agile team, and while these responsibilities encapsulate the core of the role, we are seeking adaptable and enthusiastic candidates eager to contribute to our business growth in diverse ways.

Feb 9, 2026
KnowBe4 logo
Full-time|On-site|Cheltenham, United Kingdom

Join KnowBe4, the world's foremost authority in Human Risk Management, as we redefine security measures for organizations globally. With over 70,000 clients relying on us for more than 15 years, we have set the standard in protecting employees and AI systems alike. Since 2016, our innovative AI-driven solutions have been at the forefront of cybersecurity.

Our comprehensive HRM+ platform integrates continuous risk intelligence, cutting-edge technical defenses, and tailored training programs, empowering organizations to cultivate robust security cultures. We specialize in identifying, assessing, and mitigating human risk across workforces, effectively countering threats such as deepfakes and the latest AI-driven challenges.

At KnowBe4, we champion a dual commitment: safeguarding organizations against cyber threats while fostering a positive environmental impact. We believe true resilience lies in collective effort, ensuring the safety of our people, data, and planet.

Apr 2, 2026
Faculty logo
Full-time|On-site|London

Why join Faculty?
Founded in 2014, Faculty is on a mission to harness the transformative power of AI, believing it to be the most pivotal technology of our era. With a diverse portfolio of over 350 global clients, we have consistently driven performance improvements through human-centric AI solutions.

We focus on responsible AI development rather than chasing trends. Our team excels in innovating, building, and deploying AI solutions that truly matter. We offer unmatched expertise in technology, product development, and service delivery across various sectors including government, finance, retail, energy, life sciences, and defense. As our reputation grows, so does our commitment to finding individuals who share our passion for intellectual curiosity and the ambition to create a positive legacy through technology. AI defines this epoch; join us in exploring its most impactful applications and turning them into reality.

About Our Team
Our dedicated team at Faculty engages in vital red teaming and creates evaluations for misuse capabilities in critical domains such as CBRN, cybersecurity, and international security. Our contributions have been recognized, notably in OpenAI's system card for o1. We are committed to conducting foundational research on mitigation strategies, sharing our findings through peer-reviewed conferences and with national security institutions. Furthermore, we design evaluations for model developers focusing on the safety and societal impacts of advanced AI models, showcasing our extensive expertise in the safety domain.

About The Role
As the Principal Research Scientist for AI Safety, you will lead Faculty's dynamic research team, influencing the future of safe AI systems. Your role will encompass overseeing the scientific research agenda centered on large language models and other significant systems. You will guide fellow researchers, spearhead external publications, and align your efforts with Faculty's mission to develop trustworthy AI, allowing you to make a substantial impact in this fast-evolving field.

Dec 11, 2025
