About the job
As an AI Trust Innovation Technologist at SGS, you will play a pivotal role in enhancing our Digital Research & Ventures capabilities. You will actively build, test, and analyze AI systems to establish credible independent validation and monitoring services. This position combines deep AI engineering expertise with forward-thinking innovation, focusing on evaluating emerging technologies and startups and translating practical experimentation into scalable Digital Trust validation solutions.
Key Responsibilities:
- Examine emerging AI technologies and real-world AI system architectures (e.g., LLM-based systems, ML pipelines, multimodal systems) to determine where independent validation, testing, or monitoring by SGS is technically feasible and beneficial.
- Conduct technical assessments of AI risks, including robustness failures, bias/fairness issues, explainability limits, data integrity risks, cybersecurity vulnerabilities, and potential misuse scenarios (e.g., deepfakes, hallucinations) to identify opportunities for validation or monitoring services.
- Develop, prototype, and assess AI validation methodologies (e.g., adversarial testing, dataset validation, interpretability methods, provenance/watermarking) to evaluate their technical feasibility and scalability for Digital Trust services.
- Interpret AI regulations and standards (e.g., EU AI Act, ISO/IEC AI standards, NIST AI RMF) and translate their technical implications into actionable validation, monitoring, or independent evaluation strategies.
- Partner with universities, AI research labs, startups, and technology leaders to track advances in AI systems and identify opportunities for joint experimentation, collaboration, and validation.
- Evaluate AI startups, tools, and platforms for their technical maturity, architectural soundness, evaluation robustness, and alignment with SGS’s AI Trust objectives.
- Provide expert technical insights and hands-on validation guidance in AI-related build–buy–partner–invest evaluations, assessing model architectures, evaluation methodologies, and system scalability.
- Contribute specialized knowledge to Digital Trust marketing, thought leadership, and internal education on AI trust challenges.
- Work collaboratively with business lines, M&A, R&D, innovation teams, and IT to evaluate AI systems technically, prototype validation methods, and support early-stage AI trust initiatives.
- Build and experiment with AI systems to gain a comprehensive understanding of system behavior, validation challenges, and potential service design implications.

