Experience Level: Senior Level Manager
What You'll Do
- Lead and mentor a team of Equity Research Analysts and Associates to produce outstanding research outputs.
- Ensure seamless project delivery by maintaining the highest standards of quality, accuracy, and relevance.
- Serve as the primary liaison for clients, offering daily updates and insights.
- Enhance client satisfaction and retention, proactively addressing needs and identifying new value-add opportunities.
- Streamline processes to uphold operational excellence across all projects.
- Collaborate with senior leadership to define research priorities and allocate resources.
- Train and develop team members, fostering a culture of accountability, learning, and continuous improvement.
About the job
About the Role
Join Woozle Research as an Associate Director in our dynamic London or Glasgow office. This pivotal leadership role is designed for seasoned professionals eager to influence high-caliber primary research utilized by leading hedge funds, private equity firms, and consultancies worldwide.
Your key responsibilities include leading a team of Analysts and Associates, ensuring our deliverables consistently surpass client expectations, and nurturing lasting partnerships with investors and decision-makers.
If you are passionate about guiding teams, enhancing client relationships, and delivering impactful market insights, we want to hear from you!
About Woozle Research
Woozle Research is a forward-thinking research firm dedicated to delivering actionable insights to the world's most influential financial institutions. Our commitment to quality and innovation sets us apart in the equity research landscape.
Full-time|$200K/yr - $300K/yr|On-site|London, United Kingdom; New York, NY, United States; Seattle, Washington, United States
Join Hudson River Trading (HRT) as an AI Research Scientist in our cutting-edge HAIL (HRT AI Labs) team. This team is dedicated to creating and refining advanced models that empower our trading operations, significantly influencing our market strategies. We are on a mission to develop 'foundation models for markets' that analyze extensive market data to fore…
Application Deadline: We are actively interviewing candidates and aim to fill this position promptly with the right individual.

ABOUT THE ROLE
At Apollo Research, we conduct evaluations that meticulously assess the risks associated with advanced AI systems. Collaborating with leading laboratories such as OpenAI, Anthropic, and Google DeepMind, you will have the unique opportunity to engage with cutting-edge AI models ahead of their public release. The ideal candidate possesses a passion for rigorously testing state-of-the-art AI technologies and excels in creating and automating efficient evaluation pipelines.

YOUR RESPONSIBILITIES WILL INCLUDE
- Conducting pre-deployment evaluation campaigns on the world's most advanced AI systems. Our partnerships with various labs provide access to a wide range of models that no single lab can offer, allowing you to be among the first to engage with new models.
- Exploring AI cognition by analyzing extensive model transcripts to identify novel behavioral patterns that have yet to be documented. These insights can be both surprising and enlightening, including phenomena like non-standard language and reward-seeking reasoning as discussed in our anti-scheming paper.
- Developing new evaluations for frontier risks, creating innovative test environments, and scaling these across multiple scenarios.
- Collaborating with leading AI developers to share your insights, receive feedback, and ensure your evaluations influence the deployment strategies of the most advanced AI systems.
- Optimizing and automating the evaluation pipeline. We utilize automation in building, executing, and analyzing evaluations. As agent capabilities evolve rapidly, you will have the autonomy to reshape the pipeline to keep pace with these advancements.

KEY REQUIREMENTS
About Moneybox
At Moneybox, we strive to empower individuals to enrich their lives. We believe that wealth is not merely about money but about having the resources for more: more freedom, opportunities, and peace of mind. As an award-winning wealth management platform, we assist over 1.5 million users in building their financial futures through saving, investing, purchasing homes, and planning for retirement.

Job Overview
We are developing Aurora, an innovative AI system aimed at guiding customers to achieve optimal financial outcomes. This role presents a significant technical challenge: how to provide reliable guidance to customers who may have incomplete or uncertain information about their financial situations and goals, all while operating in a regulated environment where decisions must be auditable and traceable.

Key challenges include efficiently addressing uncertainties in customer data through active information gathering, formulating questions at the right times without overwhelming users, and translating natural-language policies and regulations into formal optimization logic that is both accurate and inspectable. You will also need to integrate learned and symbolic components so that the overall system operates reliably, degrades gracefully, and remains comprehensible to human users, without incurring excessive engineering costs on the specialized elements of the system.

We hold strong hypotheses and established architectural plans for these challenges, yet we remain open to revising our approaches when presented with compelling arguments or new evidence. If you have insights that could improve our strategies, we welcome your input.

Our models primarily operate internally. Our development process uses Databricks on Azure, with deployments conducted via Databricks or directly on Azure Kubernetes Service (AKS).

This role represents the pinnacle of research within our ML team.
You will report directly to the Director of AI and Decision Intelligence and collaborate with a principal data scientist, senior ML engineer, senior data scientist, and two ML engineers.
Application Deadline: We are actively interviewing candidates and seek to fill this position promptly with a suitable applicant.

THE OPPORTUNITY
Become a vital member of our groundbreaking AGI safety product team and play a key role in transforming intricate AI research into actionable tools aimed at minimizing AI-related risks. In your role as an applied researcher, you will collaborate closely with our CEO (who also acts as Head of Product), product engineers, and the Evals team's software engineers to develop solutions that enhance AI agent safety for our clients. Currently, we are concentrating on the oversight of AI coding agents to identify failures in safety and security. You will be part of a compact team, which allows you to significantly influence both team dynamics and technological approaches while quickly assuming greater responsibilities. This position is perfect for you if you have a fervent desire to employ empirical research methodologies to enhance the safety of AI systems in practical applications.
If you relish the challenge of converting theoretical AI risks into tangible detection mechanisms, thrive in fast-paced environments, and are eager to see your research make a meaningful impact on real-world AI safety, then we would love to hear from you.

KEY RESPONSIBILITIES

Research & Development
- Collect and catalog coding agent failure modes systematically from real-world instances, public examples, research literature, and theoretical predictions.
- Design and execute experiments to evaluate monitor effectiveness across various failure modes and agent behaviors.
- Develop and maintain evaluation frameworks to track advancements in monitoring capabilities.
- Refine monitoring strategies based on empirical findings, optimizing detection accuracy alongside computational efficiency.
- Stay updated with the latest research in AI safety, agent failures, and detection methodologies.
- Keep abreast of advancements in coding security and safety vulnerabilities.

Monitor Design & Optimization
- Create a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors).
- Experiment with various reasoning strategies and output formats to enhance monitor reliability.
- Design and evaluate hierarchical monitoring architectures and ensemble approaches.
Why join Faculty?
Founded in 2014, Faculty is on a mission to harness the transformative power of AI, believing it to be the most pivotal technology of our era. With a diverse portfolio of over 350 global clients, we have consistently driven performance improvements through human-centric AI solutions.

We focus on responsible AI development rather than chasing trends. Our team excels in innovating, building, and deploying AI solutions that truly matter. We offer unmatched expertise in technology, product development, and service delivery across various sectors including government, finance, retail, energy, life sciences, and defense.

As our reputation grows, so does our commitment to finding individuals who share our passion for intellectual curiosity and the ambition to create a positive legacy through technology. AI defines this epoch; join us in exploring its most impactful applications and turning them into reality.

About Our Team
Our dedicated team at Faculty engages in vital red teaming and creates evaluations for misuse capabilities in critical domains such as CBRN, cybersecurity, and international security. Our contributions have been recognized, notably in OpenAI's system card for o1. We are committed to conducting foundational research on mitigation strategies, sharing our findings through peer-reviewed conferences and with national security institutions. Furthermore, we design evaluations for model developers focusing on safety and societal impacts of advanced AI models, showcasing our extensive expertise in the safety domain.

About The Role
As the Principal Research Scientist for AI Safety, you will lead Faculty's dynamic research team, influencing the future of safe AI systems. Your role will encompass overseeing the scientific research agenda centered on large language models and other significant systems.
You will guide fellow researchers, spearhead external publications, and align your efforts with Faculty’s mission to develop trustworthy AI, allowing you to make a substantial impact in this fast-evolving field.
Full-time|Hybrid|London, England, United Kingdom / Remote
About Xaira Therapeutics
Xaira Therapeutics is a pioneering biotech startup dedicated to harnessing the power of artificial intelligence to revolutionize drug discovery and development. We are at the forefront of creating generative AI models aimed at designing protein and antibody therapeutics, facilitating the development of treatments for historically challenging molecular targets. Our innovative approach also includes the development of foundational models for biological processes and diseases, enhancing target identification and patient stratification. Through these groundbreaking technologies, we strive to unlock novel therapies and improve drug development success rates. With headquarters in the San Francisco Bay Area, Seattle, and London, we are positioned to make a significant impact in the biotech field.

Position Overview
We are on the lookout for passionate and driven individuals to join our team as AI Research Engineers. We embrace candidates from diverse backgrounds and experiences, believing that varied perspectives strengthen our ability to solve complex challenges. Our London office, located near Old Street, fosters a highly collaborative environment where teamwork is essential to our success. We promote a hybrid work culture built on trust, with team members typically working in the office three days a week.

Key Responsibilities
- Demonstrate industry experience as a research engineer within an AI-focused organization.
- Exhibit enthusiasm for collaboration, learning, and teaching while tackling complex problems as part of a team.

Desirable Qualifications
We value diverse experiences and recognize that each individual's journey is unique.
While the following qualifications are ideal, they are not mandatory:
- Master's degree or PhD in an AI-related discipline.
- Contributions to public codebases or GitHub repositories.
- Experience in building and training neural networks.
- Familiarity with distributed training and inference.
- Expertise in profiling and optimizing large-scale AI models.
- Knowledge in BioAI or related fields.

If you possess a strong drive to utilize AI in advancing drug discovery and enhancing human health, we invite you to apply and join us in our mission to create a positive impact in the world.
Why Join Faculty?
Founded in 2014, Faculty is at the forefront of artificial intelligence, believing it to be the most transformative technology of our era. Over the years, we have partnered with more than 350 clients globally, enhancing their performance through human-centered AI solutions.

We prioritize genuine innovation over fleeting trends. Our commitment lies in developing and implementing responsible AI that significantly influences outcomes. Our diverse clientele includes sectors such as government, finance, retail, energy, life sciences, and defense, all of whom benefit from our profound expertise in technology, product development, and delivery.

Our rapidly expanding business and reputation drive us to seek individuals who share our passion for intellectual exploration and aspire to create a positive technological legacy. AI is a groundbreaking technology; at Faculty, you will have the freedom to conceive its most impactful applications and bring them to fruition.

About Our Team
The Research team at Faculty is dedicated to critical red teaming and developing evaluations for misuse capabilities in sensitive domains, including CBRN, cybersecurity, and international security. We collaborate with leading frontier model developers and national safety institutes, and our contributions have been recognized in OpenAI's system card for o1. We also engage in fundamental technical research focused on mitigation strategies, with our findings presented at peer-reviewed conferences and shared with national security organizations. Additionally, we create evaluations for model developers across various safety-related areas, highlighting our comprehensive expertise in the safety domain.

Role Overview
We are in search of a Senior Research Scientist to join our high-impact R&D team. You will spearhead innovative research that enhances scientific understanding and drives our goal of developing safe AI systems.
This is a vital role within a small, empowered team that conducts essential red teaming and evaluations for frontier models in sensitive fields such as cybersecurity and national security, allowing you to influence the future of safe AI deployment in real-world scenarios.
Join our cutting-edge AI team at hyperexponential as a Research Engineer. In this role, you will be at the forefront of developing innovative AI solutions that drive efficiency and transformation across various sectors.

As a Research Engineer, you will collaborate closely with data scientists, software engineers, and domain experts to design, implement, and optimize AI algorithms. Your insights will directly contribute to advancing our product offerings and enhancing client experiences.

If you are passionate about harnessing AI technology to solve real-world problems, we want to hear from you!
At Moonvalley AI, our cutting-edge laboratory operates at the forefront of world models, video generation, and robotics. We are dedicated to developing sophisticated systems that accurately represent intricate environments, intelligently reason about objects and their dynamics, and seamlessly translate high-level AI objectives into smooth, efficient, safe, and responsive actions across a diverse range of robotic platforms.

What You Will Be Responsible For:
- Spearheading the design and implementation of our robotics research agenda within the AI domain.
- Recruiting, mentoring, and overseeing a small team of talented research scientists and engineers in our London laboratory.
- Collaborating closely with world model and simulation teams to create state-of-the-art training platforms for robotics.
- Directing the development of persistent 3D/4D scene representations along with advanced embodied AI methodologies.
- Leading research initiatives in scene understanding, sim-to-real transfer, and advanced planning techniques.
- Building and sustaining partnerships with leading machine learning researchers, hardware experts, and external collaborators.
- Contributing to the establishment of the lab's technical culture and enhancing its external reputation.

Areas of Expertise We Seek:
We are particularly interested in candidates who are passionate about working on:
- World models tailored for embodied systems.
- 3D/4D generative models.
- Scene reconstruction and comprehension.
- Embodied AI technologies.
- Object-level and semantic SLAM methodologies.
- Multimodal AI applications.

Desired Qualifications:
- Solid foundation in robotics, computer vision, or closely related fields.
- Experience in leading or managing small technical or research teams.
- Practical expertise in 3D perception, scene representations, world models, or simulations.
- Exceptional programming skills in Python, C++, or similar languages.
- An entrepreneurial mindset and enthusiasm for developing projects in a startup or early-stage lab setting.
- Proven ability to collaborate effectively across the fields of machine learning, perception, and robotics.
- Outstanding communication and team-building abilities.

What We Offer:
- Highly competitive salary and equity options.
- Comprehensive private health insurance.
Become a Catalyst for Change at Axon
At Axon, we are committed to safeguarding lives and enhancing public safety through innovative technology. As pioneers in the field, we tackle critical issues surrounding safety and justice with our comprehensive ecosystem of devices and cloud-based software solutions. Our collaborative culture thrives on open communication and diverse perspectives, both from our customers and within our teams.

Working at Axon is not just a job; it's a chance to make a tangible difference. Here, you will take initiative and make a real impact while growing personally and professionally in a fast-paced, meaningful environment.

Your Role
We are looking for adept and forward-thinking Machine Learning Scientists to enhance our AI team, focusing on applications of AI, particularly in Large Language Models (LLMs) and Computer Vision across Cloud, Devices, and Robotics. As an integral part of our research and development initiatives, you will significantly contribute to advancing cutting-edge technologies in LLMs, Multimodal Large Language Models (MLLMs), Computer Vision, and Generative AI for law enforcement applications and more. You will work collaboratively with multidisciplinary teams to innovate, develop, and implement advanced LLM, MLLM, and CV models, enabling intelligent reasoning and perception of multimodal data.
AISI seeks a Research Engineer or Research Scientist to focus on Model Transparency in London, UK. The position involves research aimed at making AI models more understandable and accountable. The goal is to clarify complex behaviors within these systems and help build trust in their outcomes.

Key Responsibilities
- Research new ways to improve the interpretability of AI models
- Create methods and tools that help explain how models make decisions
- Tackle challenges in understanding and communicating model behavior
- Contribute to initiatives that support greater accountability in AI

Location
This role is based in London, UK.
Full-time|Remote|Amsterdam, Netherlands; London, United Kingdom; Remote - Europe
Join Nebius and Shape the Future of AI
Nebius is pioneering a transformative approach to cloud computing, dedicated to empowering the global AI economy. Our mission is to provide the essential tools and resources that enable our clients to tackle real-world challenges and revolutionize industries, all while minimizing infrastructure costs and the necessity of extensive in-house AI/ML teams. Our talented workforce operates at the forefront of AI cloud infrastructure, collaborating with some of the most innovative and experienced leaders and engineers in the industry.

Our Work Environment
Located in Amsterdam and publicly listed on Nasdaq, Nebius boasts a global presence with R&D centers across Europe, North America, and Israel. Our diverse team, comprising over 1,400 employees, includes more than 400 highly skilled engineers, equipped with deep expertise in both hardware and software engineering, as well as an in-house AI R&D team.

We are on the lookout for a Staff or Principal Applied AI Researcher to join our rapidly expanding team, focused on developing an agent-native search platform: the vital web access layer for AI systems. Unlike traditional search methodologies, we are innovating how AI agents, not humans, access, retrieve, and reason over information available on the internet. As AI increasingly becomes the primary interface for web interaction, this pivotal layer is set to transform the function of conventional search engines.

This role involves tackling retrieval and search challenges within entirely new access patterns and scales. Depending on your experience and the responsibilities you take on, this position can be classified at either the Staff or Principal level, granting you ownership over vital aspects of our applied AI research trajectory. You will lead applied research initiatives that directly enhance how AI systems retrieve, ground, and utilize real-world information in production, ensuring that research is closely linked to large-scale deployment.

Your Responsibilities
Engage in projects at the intersection of search, retrieval, and LLM-based systems, shaping how AI agents engage with the web. This includes designing agent-native retrieval systems (distinct from human search UX), developing systems where LLMs actively query, iterate, and reason over results, and collaborating with cross-functional teams to foster innovation.
About Mentis AI
At Mentis AI, we are pioneering the integration of artificial intelligence with human expertise to redefine industries. Our team comprises seasoned professionals from prestigious firms such as Lazard and Partners Group, and we operate across global markets from our key locations in London and San Francisco. We are dedicated to developing AI systems that deeply understand finance, healthcare, and law, collaborating closely with domain experts to enhance AI training.

Your Role
This internship is an exceptional opportunity, allowing you to work alongside our founding team and experienced finance experts. You will engage in meaningful projects that shape the way AI interprets financial markets. Your responsibilities will include:
- Developing Financial Models: Create DCF valuations, comparable company analyses, LBO models, and credit assessments to serve as training data for AI systems.
- Conducting In-Depth Financial Analysis: Analyze earnings reports, dissect financial statements, evaluate credit metrics, and produce high-quality research outputs that guide AI models in understanding finance professionals' reasoning.
- Creating Evaluation Frameworks: Design scoring rubrics and benchmarks to assess the AI's ability to handle real-world finance scenarios, including merger analysis, debt structuring, and equity valuation.
- Establishing Annotation Guidelines: Convert complex finance workflows into structured labeling instructions for AI training.
- Testing AI Outputs: Critically evaluate model-generated financial analyses to identify potential errors in logic and methodology.
- Collaborating with ML Engineers: Work cross-functionally to refine prompts, enhance data quality, and boost model performance in finance-related tasks.

We are a dynamic and agile team, seeking adaptable and enthusiastic individuals eager to contribute to our growth in various capacities.
Location: London
Company: H

About H
H is focused on building agentic AI that automates complex, multi-step tasks typically handled by people. The company's goal is to help individuals achieve more by creating superintelligent systems that work safely and responsibly. H values openness, continuous learning, and collaboration. Every team member's input matters here.

About the Models Team
The Models team develops the core models that power H's agentic AI technology. This group works on training methods to boost model performance and efficiency, especially for agent-driven applications where inference costs matter. Projects span Large Language Models (LLMs) and Vision-Language Models (VLMs), enabling agents to interpret and interact with complex environments. Team members refine these models using advanced training approaches, including reinforcement learning and reward modeling. The focus is on better instruction following, tool use, and dynamic interaction. The team's work bridges research and product, turning new research into practical solutions that move AI forward.

Who We Hire
H seeks exceptional AI researchers and engineers from around the world who care about advancing technology safely and responsibly. The company welcomes those eager to shape the future of superintelligent AI alongside a collaborative and driven team.
Full-time|$180K/yr - $250K/yr|Hybrid|London, England, United Kingdom; New York, New York, United States; San Francisco, California, United States
About Lightning AI
Founded in 2019, Lightning AI is the driving force behind PyTorch Lightning. We create a comprehensive platform for the development, training, and deployment of AI systems, streamlining the journey from innovative research to impactful production. Our merger with Voltage Park, a neocloud and AI factory, enhances our offerings by integrating developer-centric software with efficient, large-scale computing resources. We provide teams with essential tools for experimentation, training, and production inference, ensuring security, observability, and control are integral to our solutions. We cater to individual researchers, startups, and established enterprises, with a global presence including offices in New York City, San Francisco, Seattle, and London. Lightning AI is supported by esteemed investors such as Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Our Values
- Move Fast: We prioritize speed and precision, deconstructing complex challenges into manageable tasks.
- Focus: We tackle one objective at a time with care, working collaboratively to deliver features accurately.
- Balance: We believe in maintaining a healthy work-life balance, promoting sustained performance through rest and recovery.
- Craftsmanship: We strive for excellence and take pride in the details that contribute to our innovation.
- Minimal: We embrace simplicity in our innovations, focusing on what truly matters.

Role Overview
We are looking for an accomplished Research Engineer to optimize training and inference workloads on compute accelerators and clusters, particularly through the Lightning Thunder compiler and the broader PyTorch Lightning ecosystem. This role sits at the intersection of deep learning research, compiler development, and large-scale system optimization. You will be instrumental in advancing technology that enhances model performance and efficiency, establishing essential software that will influence the entire machine learning landscape.

As part of the Engineering Team, you will report to our Tech Lead. This hybrid role is based in our offices in New York City, San Francisco, or London.
About Anthropic
At Anthropic, we are dedicated to pioneering safe, interpretable, and controllable AI systems. Our goal is to ensure that AI technologies are beneficial for users and society at large. We have assembled a rapidly expanding team of passionate researchers, engineers, policy specialists, and business leaders working collaboratively to create advanced AI systems that serve humanity well.

As a leader in AI research, Anthropic is committed to developing ethical, powerful artificial intelligence and to aligning transformative AI systems with human values. We invite you to join our Pretraining team as a Research Engineer, where you will be instrumental in creating the next generation of large language models. This role allows you to operate at the crossroads of cutting-edge research and practical engineering, playing a key part in building safe, steerable, and trustworthy AI systems.

Key Responsibilities:
- Conduct innovative research and develop solutions in areas such as model architecture, algorithms, data processing, and optimization techniques.
- Independently lead small-scale research projects while partnering with colleagues on larger initiatives.
- Design, execute, and analyze scientific experiments to deepen our understanding of large language models.
- Enhance and scale our training infrastructure to boost efficiency and reliability.
- Develop and refine development tools to improve team productivity.
- Contribute across the entire stack, from low-level optimizations to high-level model design.
At Intropic, we are pioneering the realm of financial intelligence where profound market insights converge with cutting-edge AI technology. Established in the vibrant financial hub of Canary Wharf, London, our mission is to convert intricate data into actionable insights with clarity and precision. Our culture is anchored in values of truth-seeking, agility, and accountability, which guide our collaborative efforts and innovation. We thrive in a fast-paced environment, embracing independent thinking while upholding the highest standards of integrity and impactful results. Curiosity is not just welcomed; it is essential. If you are motivated by challenges, sparked by innovation, and eager to elevate your intelligence among a team of exceptional thinkers, then Intropic is the ideal place for your ideas to flourish.
Full-time|On-site|London, United Kingdom; Paris, France
Join InstaDeep as a Research Scientist where you will drive innovation in decision-making AI. Collaborate with leading experts to develop cutting-edge machine-learning models, conduct impactful experiments, and contribute to groundbreaking research in computational biology. Your insights will help expand our scientific knowledge and support our partnerships with global leaders like BioNTech SE and Google DeepMind.
At Google DeepMind, we celebrate the rich diversity of experiences, knowledge, backgrounds, and perspectives that contribute to our mission of creating extraordinary impact. We are dedicated to providing equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital status, sexual orientation, gender identity, pregnancy, or any other legally protected status. If you require accommodation due to a disability or additional need, please let us know.

About Us
Artificial Intelligence stands as one of humanity's most transformative inventions. At Google DeepMind, our team of scientists, engineers, and machine learning experts works collaboratively to push the boundaries of AI technology. We harness our innovations for the greater good and scientific progress, prioritizing safety and ethics in every endeavor.

Snapshot
We are in search of talented engineers and scientists to help propel our research in Artificial Intelligence as part of our mission to develop the next generation of Generative AI technologies. Our team is uniquely positioned to work on speculative, long-term projects that, when successful, yield significant advancements in the capabilities of our frontier models. We are committed to exploring paradigm-shifting concepts and investing the necessary time and resources to realize them. Our aim is to ideate, research, develop, validate, and incorporate these advancements into major Google technologies.

Our primary research focus is Gemini Diffusion, our cutting-edge text diffusion model, which we believe has the potential to unlock a new era in generative AI, delivering unmatched capabilities at low latency. Backed by GDM leadership, our team is devoted to thoroughly investigating and advancing this groundbreaking technology.

We are a multi-disciplinary group of engineers and scientists who collaborate closely, leveraging our diverse skills with a strong emphasis on developing and deploying innovative AI solutions. Our problem-focused ethos drives our interest in any tool or idea that can effectively address real-world challenges.