About the job
c-serv is forming an AI Red Team focused on testing and improving the security of enterprise AI products used by major organizations. This hands-on position centers on practical applications, not academic research, and sits at the intersection of adversarial machine learning, enterprise security, and governance.
Role overview
The AI Security Engineer designs and executes structured red team engagements across varied AI environments. The objective is to translate technical risks into actionable security improvements that matter to enterprise operations. This role suits engineers who want their findings to drive real changes, not just reports.
What you will do
- Conduct adversarial assessments of large language models (LLMs) and AI-powered systems.
- Perform threat modeling at the model, infrastructure, and data levels.
- Lead and manage testing efforts targeting:
  - Prompt injection
  - Jailbreaking
  - Model exploitation
  - Data leakage and extraction
  - Manipulation of retrieval-augmented generation (RAG) systems
- Document findings in a structured, audit-ready format.
- Map vulnerabilities and remediation plans to frameworks such as:
  - ISO 27001 controls
  - SOC 2 Trust Service Criteria
  - ISO 27701 privacy standards
  - ISO 27017 cloud security measures
- Collaborate with engineering, security, and compliance teams.
- Present findings and recommendations to executive leadership.
Location
This position is based in Las Vegas, Nevada, United States.
Impact
AI security insights from this work feed directly into enterprise governance frameworks, helping organizations move risk management from theory into daily operations.
