AI Scientist - Machine Learning Specialist
Experience Level
Mid to Senior
Similar jobs
Join our dynamic AI R&D team as an AI Scientist focused on Machine Learning. In this pivotal role, you will lead the development and implementation of advanced deep learning models to address real-world temporal modeling challenges in the manufacturing sector. We are in search of a candidate with extensive practical R&D experience, firmly rooted in robust th…
At Rhoda AI, we are pioneering the future of humanoid robotics by establishing a comprehensive stack that includes advanced, software-defined hardware along with foundational models and video world models to drive our innovations. Our robots are engineered to be versatile, capable of navigating complex real-world scenarios that extend beyond traditional training environments. Our interdisciplinary research team, featuring experts from prestigious institutions such as Stanford, Berkeley, and Harvard, is at the forefront of large-scale learning, robotics, and systems engineering. With over $400 million raised, we are making significant investments in research and development, hardware innovation, and scaling our manufacturing capabilities to bring our vision to life.

We are seeking a motivated Machine Learning Inference Engineer to join our team and contribute to the development and operation of the inference systems that power our automation stack. You will play a crucial role in ensuring the efficient and reliable execution of large foundation models, collaborating closely with our robotic platforms and internal task tools.

Key Responsibilities:
- Develop and maintain infrastructure for model inference across both cloud and on-premises environments.
- Optimize the latency, throughput, and reliability of deployed machine learning models.
- Design and scale services for serving diverse foundation models in both research and production contexts.
- Collaborate with research and robotics teams to enhance inference optimization and integration.
- Create tools for model deployment, version control, and observability to facilitate rapid iteration cycles.
- Contribute to the robustness and scalability of the inference stack as model complexity and deployment demands evolve.

Qualifications:
- Minimum of 3 years of experience in machine learning infrastructure, MLOps, or backend systems.
- Proven experience in deploying and managing machine learning inference workloads in production environments.
- Excellent knowledge of Kubernetes and containerized deployment pipelines.
- Familiarity with cloud service providers such as AWS and GCP, including GPU orchestration capabilities.
- Experience with popular ML frameworks including PyTorch and TensorFlow, as well as model serving tools like Triton, TorchServe, and Ray Serve.
- Strong debugging capabilities and a proactive ownership mindset, comfortable resolving issues across the technology stack.
About Nightfall:
Nightfall is an innovative, AI-driven platform specializing in unified data loss prevention and insider risk management. We secure sensitive data across various environments, including SaaS applications, Generative AI tools, email, and endpoint devices. Trusted by numerous clients, from AI pioneers to Fortune 10 banks, Nightfall empowers organizations to innovate safely, mitigating the risks associated with data loss and intellectual property exposure. Our intelligent platform automates data loss prevention, allowing security teams to focus on strategic initiatives by resolving security violations proactively and providing real-time training to users.

Our endeavors are supported by top-tier venture capital firms such as Bain Capital Ventures, Venrock, WestBridge Capital, and Pear VC, alongside cybersecurity leaders including Frederic Kerrest, Maynard Webb, Ryan Carlson, and Kevin Mandia.

About the Role:
We seek a highly skilled technical leader to join our expanding team at Nightfall. As the Lead AI/ML Data Scientist within the AI Engineering organization, you will be pivotal in developing ML/NLP models and Generative AI solutions that enhance our Data Loss Prevention (DLP) and security products. This role involves spearheading research and applying advanced machine learning techniques to address security challenges, while guiding ML and backend engineers in deploying systems into production. You will also be instrumental in shaping the future architecture of our AI platform.

This position is hybrid, requiring three days in the office at our Palo Alto, California location, and represents a fantastic opportunity for those passionate about data science and machine learning engineering.
About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, utilizing AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our remarkable team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and Presidents of Google, AMD, Broadcom, and Marvell. We boast a diverse group of former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and high-ranking officials with experience in national security and foreign policy.

What We Are Looking For
- Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
- Demonstrated success in delivering AI/ML projects from initial concept through to production deployment.
- Hands-on experience in fine-tuning and deploying large language models (LLMs) within production environments.
- Experience working with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
- Experience in competitive programming.
- Contributions to open-source projects.
- Recognition through awards or publications in leading journals and conferences.
- Ability to thrive in a dynamic, fast-paced startup environment.
Upwork Inc.
Upwork Inc. connects businesses with skilled professionals in AI, machine learning, software development, sales, marketing, customer support, finance, and accounting. The company’s platforms, including the Upwork Marketplace and Lifted, help organizations of all sizes find and manage freelance, fractional, and payrolled talent for a range of contingent work needs. Upwork supports both large enterprises and entrepreneurs in sourcing talent and implementing AI-driven solutions. The company’s network covers more than 10,000 skills, enabling clients to scale and adapt their workforce for changing business demands. Since launch, Upwork has processed over $30 billion in transactions. The company’s mission centers on expanding opportunities at every stage of work.

Learn more:
- Visit the Upwork Marketplace: upwork.com
- Learn about Lifted: go-lifted.com
At Rhoda AI, we are pioneering the development of a comprehensive full-stack platform for the next generation of humanoid robots. Our innovative approach encompasses high-performance, software-defined hardware along with foundational and video world models that empower our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Collaborating with a distinguished research team from Stanford, Berkeley, Harvard, and other leading institutions, we operate at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to bring our vision to life.

We are on the lookout for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This pivotal system is essential for ensuring that large-scale training is reliable, reproducible, and straightforward to execute. You will play a crucial role in defining the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters. Your contributions will enable researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of enhancing research efficiency: when a training job fails, your system will allow for automatic recovery; when experiments become challenging to reproduce, you will implement effective solutions; and when GPU hours are squandered, you will ensure visibility and preventative measures are in place.
About Pathway
Pathway is revolutionizing artificial intelligence with the introduction of the world’s first post-transformer model that mimics human thought processes. Our innovative architecture surpasses traditional Transformer models, providing enterprises with unparalleled transparency into model operations. By integrating this foundational model with the fastest data processing engine available, Pathway empowers organizations to transcend mere incremental optimization and achieve genuinely contextualized, experience-driven intelligence. Trusted by prestigious clients including NATO, La Poste, and Formula 1 racing teams, we are at the forefront of AI advancements.

Led by visionary CEO Zuzanna Stamirowska, a complexity scientist, our team includes AI trailblazers such as CTO Jan Chorowski, who pioneered the application of Attention in speech and collaborated with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a distinguished computer scientist and quantum physicist who earned his PhD at just 20 years old. Supported by prominent investors and advisors like Lukasz Kaiser, co-author of the Transformer architecture (the “T” in ChatGPT) and a key researcher in OpenAI's reasoning models, Pathway is headquartered in Palo Alto, California.

The Opportunity
We are on the lookout for passionate Machine Learning/AI Software Engineering interns with a solid foundation in machine learning model research.

Your Responsibilities
- Assist in training Large Language Models (LLMs)
- Conduct benchmarking of LLMs
- Prepare and evaluate training datasets
- Collaborate with the core Pathway Research Team

Your contributions will significantly impact the advancement of the AI landscape.
About Us
Hippocratic AI stands at the forefront of generative AI in the healthcare sector. Our innovative platform is the only one capable of engaging in safe, autonomous clinical conversations with patients, supported by our proprietary LLMs in the Polaris constellation, boasting an impressive accuracy rate of over 99.9%.

Why Join Our Team
- Revolutionize healthcare with safety-centric AI. We are pioneering the world's first healthcare-specific, safety-oriented LLM: a groundbreaking platform focused on enhancing patient outcomes on a global scale. This is a unique opportunity to contribute to category creation.
- Collaborate with visionaries. Co-founded by CEO Munjal Shah alongside a distinguished team of physicians, hospital executives, AI innovators, and researchers from esteemed institutions such as El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
- Supported by top-tier investors. We recently secured a $126M Series C funding round at a valuation of $3.5B, led by Avenir Growth, bringing our total funding to $404M with contributions from notable investors like CapitalG, General Catalyst, a16z, Kleiner Perkins, and others.
- Build alongside experts in healthcare and AI. Join a team of professionals dedicated to enhancing care, advancing science, and creating transformative technologies that ensure our platform is robust, reliable, and revolutionary.

Location Requirement
We believe collaboration sparks the best ideas. To foster rapid teamwork and a vibrant company culture, this position requires daily presence in our Palo Alto office, five days a week, unless stated otherwise.

About the Role
In healthcare AI, evaluation is crucial: if it can't be measured, it can't be deployed. You will develop systems that assess the safety, accuracy, and readiness of our models for real-world patient interactions: evaluation frameworks, synthetic data pipelines, automated benchmarks, and LLM-as-judge systems. This role presents a high-impact engineering opportunity where your contributions directly influence what is launched into production.

What You’ll Do
- Create and implement evaluation frameworks focused on LLM safety, clinical accuracy, and conversational quality.
- Build synthetic data generation pipelines to rigorously test models across varied clinical scenarios.
- Develop scalable automated and human-in-the-loop evaluation pipelines.
At Rhoda AI, we are pioneering the development of a comprehensive foundation for the next generation of humanoid robots. Our focus spans high-performance, software-defined hardware to advanced foundational models and video world models that govern robot functionality. Our robots are engineered to be versatile, capable of navigating intricate, real-world environments and tackling scenarios not previously encountered in training. We stand at the crossroads of large-scale learning, robotics, and systems, bolstered by a research team comprising experts from prestigious institutions such as Stanford, Berkeley, and Harvard. Our ambition is not merely to add features; we are crafting a revolutionary computing platform for physical tasks, underpinned by over $400 million in funding, driving aggressive investments in research & development, hardware innovation, and scaling up manufacturing to bring our vision to fruition.

Role Overview
We are in search of a Principal Machine Learning Systems Engineer to take charge of our training systems' performance from start to finish. You will be instrumental in defining the scaling of our model training, enhancing efficiency, scalability, and accuracy across extensive multimodal training environments. This is a pivotal systems role, not merely focused on infrastructure support. Your contributions will significantly influence our compute utilization efficiency, scalability of models across thousands of GPUs, and the speed of research iterations.

Your Responsibilities
Oversee training performance from start to finish:
- Analyze and enhance the performance of large-scale multimodal training encompassing vision, video, proprioception, actions, and language.
- Create systematic performance attributions by breaking down step-time into compute, communication, and input pipeline, along with scaling curves for various cluster sizes and identifying key bottlenecks.
- Drive quantifiable improvements across:
  - Distributed efficiency (e.g., communication and compute overlap, bucketization, topology-aware mapping, and parallelism strategies).
  - Compute efficiency (e.g., identifying kernel hotspots, operator fusion, attention optimization, and minimizing framework/runtime overhead).
  - Memory efficiency (e.g., activation checkpointing, sequence packing, and reducing fragmentation).

Design training systems rather than just tuning them:
- Define and refine parallelism strategies including data, tensor, pipeline, sharding, and hybrid approaches.
- Enhance execution efficiency through communication scheduling, graph capture, execution optimization, and runtime enhancements.
- Contribute to the overall system architecture with innovative solutions.
At Rhoda AI, we are pioneering the development of a comprehensive platform for the next generation of humanoid robots. Our ambition encompasses everything from high-performance, software-defined hardware to the foundational models and video world models that govern their operations. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world environments and addressing scenarios that are not encountered during training. We operate at the confluence of large-scale learning, robotics, and systems, with a research team that includes esteemed researchers from Stanford, Berkeley, Harvard, and other renowned institutions. Rather than merely enhancing a feature, we are constructing an entirely new computing platform dedicated to physical tasks. With over $400M raised, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to realize this vision.

Key Responsibilities
- Lead research initiatives focused on foundational models and world models for robotics, including representation learning, dynamics/prediction, planning, and control.
- Define research challenges and formulate hypotheses rooted in real-world robotic autonomy requirements.
- Design and execute rigorous experiments at scale, encompassing ablations, benchmarking, and evaluation methodologies.
- Develop and assess model architectures aimed at enhancing long-horizon predictions, rollout quality, and overall robotic task performance.
- Investigate and improve pre-training and post-training processes, including fine-tuning, alignment, and evaluation of large multimodal models.
- Collaborate closely with Research Engineers to translate innovative ideas into scalable training pipelines and dependable systems.
- Effectively communicate research findings through internal documentation, presentations, and reviews.
- Publish and present research at prestigious venues.

Required Qualifications
- Ph.D. in a relevant discipline such as Machine Learning, Robotics, Computer Science, Electrical Engineering, Applied Mathematics, or Computer Vision.
- Demonstrated strong publication record in high-quality research venues (e.g., NeurIPS, ICML, ICLR, CoRL, RSS, ICRA, CVPR).
- In-depth knowledge of current machine learning techniques, particularly in areas such as:
  - Deep learning and representation learning.
  - Sequence modeling and transformers.
  - Generative modeling (e.g., diffusion, autoregressive, latent-variable models).
About Glean:
Founded in 2019, Glean is a pioneering AI-driven knowledge management platform that empowers organizations to efficiently discover, organize, and share vital information across their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean enables employees to access critical knowledge precisely when they need it, enhancing productivity and collaboration. Our state-of-the-art AI technology streamlines knowledge discovery, allowing teams to harness their collective intelligence more effectively.

Glean was conceived by Founder & CEO Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and an overwhelming array of SaaS tools. This insight drove him to create a superior solution: an AI-powered enterprise search platform designed for intuitive and rapid access to information. Since its inception, Glean has evolved into a premier Work AI platform, merging enterprise-grade search, an AI assistant, and robust application and agent-building capabilities to fundamentally transform the way employees engage with their work.

About the Role:
We are seeking experienced engineers to contribute their expertise and vision in the development of next-generation intelligent enterprise AI assistants and autonomous AI agents. Our mission involves reimagining how LLMs (Large Language Models) and agents can reason, plan, and execute complex, multi-step enterprise workflows. You will operate at the intersection of applied research and production engineering, focusing on areas such as agentic frameworks, LLM orchestration, low-latency LLM inference and optimization, domain-adapted and memory-augmented LLMs, reinforcement learning, and creating evaluation frameworks for intricate enterprise tasks. Our approach emphasizes collaboration with customers to deeply understand their challenges and apply the ideal blend of research-driven and practical engineering solutions to address them.
Gauss Labs is seeking a dynamic and skilled Senior AI Engineer to pioneer transformative Industrial AI solutions, setting new standards for artificial intelligence in the manufacturing sector. Our collaborations with leading manufacturing clients provide unparalleled access to extensive real-time data derived from their operations. Leveraging advanced AI technologies, we are dedicated to creating innovative AI and machine learning solutions that elevate manufacturing to unprecedented heights.

In this pivotal role, you will be instrumental in translating groundbreaking AI and machine learning research into resilient, scalable software applications. Your contributions will facilitate the smooth deployment of models in production environments, thereby enhancing the overall success of AI initiatives within the organization. You will collaborate closely with experienced Applied Scientists, Software Engineers, and Program Managers based in both Palo Alto, California, and Seoul, South Korea.
Role Overview
Mistral is hiring an Applied AI Forward Deployed Machine Learning Engineer in Palo Alto. This role centers on bringing advanced machine learning solutions into real-world client settings. The work directly shapes client outcomes and business impact.

What You Will Do
- Deploy machine learning models and systems for client projects
- Work closely with cross-functional teams to understand specific challenges
- Develop and adapt AI solutions to fit client needs, focusing on efficiency and practical results
Our Vision
At Tinder, we believe that the thrill of meeting new people is one of life’s greatest joys. We are dedicated to nurturing the magic of human connection, engaging tens of millions of users worldwide. With hundreds of millions of downloads, over 2 billion swipes daily, 20 million matches each day, and a presence in more than 190 countries, our influence is vast and continually expanding.

Our team collaborates to tackle intricate challenges, blending insights from human relationships, behavioral science, network economics, AI, and machine learning, while prioritizing user safety and cultural sensitivity. We explore the depths of loneliness, love, and connection.

Internship Duration
The internship will take place from June 1 to August 28, 2026.

Work Environment
This is a hybrid position, requiring in-office collaboration three days a week in our Palo Alto, California office.

Role Overview
As a member of the Tinder ML team, you will play a crucial role in shaping the product experience across diverse domains, including Recommendations, Trust & Safety, Profile Management, Chat, Growth, and Revenue. Our goal is to leverage machine learning to enhance user experiences, build trust, and drive business growth within Tinder's ecosystem. This internship offers a unique opportunity to work alongside experienced engineers to develop and implement machine learning solutions that align with Tinder’s strategic objectives.
Mistral AI
About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster enhanced creativity and learning. Our innovative technology is crafted to effortlessly integrate into everyday work environments.

We are committed to democratizing AI by offering high-performance, optimized, open-source models, products, and solutions. Our extensive AI platform caters to both enterprise and individual needs, featuring products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute, creating cutting-edge intelligence accessible to all users.

As a vibrant and collaborative team, we are driven by our passion for AI and its potential to revolutionize society. Our diverse workforce excels in competitive settings and is dedicated to fostering innovation. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on our creativity, humility, and team spirit. Join us in shaping the future of AI at a pioneering company. Together, we can create a lasting impact. Discover more about our culture at https://mistral.ai/careers.

Role Overview
About the Research Engineering Team
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers have the flexibility to navigate the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will be responsible for building and optimizing large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- Platform RE Team: Focus on enhancing our shared training frameworks, data pipelines, and tools utilized across all teams; or
- Embedded RE Team: Become part of a research squad (Alignment, Pre-training, Multimodal, etc.) to turn innovative ideas into scalable, repeatable code.

Key Responsibilities
• Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
• Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
• Conduct experiments utilizing the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
• Design, implement, and benchmark ML algorithms; produce clear and efficient code in Python.
• Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.
About Glean:
Established in 2019, Glean is a pioneering AI-driven knowledge management platform designed to empower organizations to swiftly locate, structure, and disseminate information among their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean ensures that employees can access the right knowledge at the right time, enhancing productivity and collaboration. Our state-of-the-art AI technology simplifies knowledge discovery, making it more efficient for teams to harness their collective intelligence.

Glean was founded by Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and diverse SaaS tools that hinder productivity. With a vision to create a superior solution, he developed an AI-powered enterprise search platform that facilitates quick and intuitive access to essential information. Since then, Glean has transformed into a leading Work AI platform, integrating enterprise-grade search, an AI assistant, and robust application and agent-building capabilities, fundamentally changing the way employees work.

About the Role:
We are on the lookout for talented Machine Learning Engineers who are eager to engage in both Quality Assurance and traditional ML tasks to aid in the development of our revolutionary Enterprise Brain. The Enterprise Brain team is crafting a suite of proactive AI products aimed at transforming enterprise workflows by identifying and automating tasks for users, thereby unlocking genuine productivity. This initiative is based on a profound understanding of user needs and a sophisticated Enterprise graph. The role will involve leveraging both LLM and advanced ML techniques, orchestrating agents, and employing cutting-edge ranking methods.

Your Responsibilities:
Tackle challenging ML problems that involve...
About Us
Odyssey AI is at the forefront of artificial intelligence innovation, developing groundbreaking general-purpose world models that represent a new paradigm in multimodal intelligence. Our cutting-edge models, including Odyssey-2 Pro, are unlocking new possibilities for consumer, enterprise, and intelligence applications.

Your Role
We seek a highly skilled and imaginative researcher who thrives on the challenge of invention. You are driven by curiosity, eager to transform abstract concepts into tangible systems, and adept at integrating research with practical applications. Your ambition to create general-purpose world models capable of real-time generation, comprehension, and interaction is key.

Responsibilities
- Pursue innovative research into the foundational aspects of world models, including representation, temporal coherence, causality, control, and long-term imagination.
- Develop and oversee an independent research agenda, tackling unresolved questions, formulating hypotheses, and creating algorithms that could redefine the capabilities of these models.
- Investigate novel architectures, learning objectives, and training paradigms that advance beyond current diffusion and transformer methodologies.
- Create conceptual frameworks and experimental systems to enhance our understanding of model perception and user intent responsiveness within real-world contexts.
- Collaborate with engineering and infrastructure teams to translate foundational insights into prototypes that showcase new functionalities rather than mere incremental improvements.
- Contribute to the scientific community by publishing research, mentoring peers, and actively engaging with the broader research ecosystem to influence the evolution of general-purpose world models.
Join us at Simular, where we are at the forefront of developing agentic AI technologies. We invite you to be part of a dynamic team dedicated to pioneering advancements in artificial intelligence that will shape the future of human-agent interactions.

Your Role
As a Research Scientist at Simular, you will:
- Lead groundbreaking research initiatives in planning, reinforcement learning, multimodal reasoning, grounding, and AI alignment, focusing on critical areas such as reward modeling and AI safety.
- Design and implement comprehensive experiments encompassing data collection, benchmarking, model training, and evaluation processes.
- Develop innovative methodologies that enhance AI agent capabilities and reliability, collaborating with top-tier scientists and engineers to achieve significant academic and product outcomes.
- Work closely with engineering teams to integrate research prototypes into practical applications.
- Contribute to the AI research community by publishing and presenting findings at prestigious conferences.
- Engage in hands-on scientific advancement through method design, dataset creation, experimental execution, and benchmarking against state-of-the-art standards.
Grindr LLC
Join us at Grindr as a Staff Machine Learning Engineer in a dynamic hybrid work environment, primarily based in our Palo Alto office. You will be required to work in the office on Tuesdays and Thursdays.

Why This Role is Exciting:
As a pivotal member of Grindr, you will play a crucial role in our AI-driven transformation. This is your opportunity to leverage advanced machine learning techniques to enhance the way millions in the LGBTQ+ community connect, whether for casual chats, fleeting encounters, or enduring relationships. We are committed to making machine learning a cornerstone of Grindr, and your contributions will leave a lasting impact on our unique global platform.
- Impact from Day One: Join a focused team at the forefront of machine learning initiatives, where you will engage in significant, innovative projects that lay the groundwork for our long-term ML vision.
- Transformative Recommendations: Develop systems that connect users to their next meaningful experiences, adapting to a variety of needs and preferences.
- Insightful Conversations: Utilize Large Language Models (LLMs) to extract insights, enhancing user interactions with precision and creativity.

Your Responsibilities:
- Design and implement scalable recommendation systems to serve millions, ensuring a balance between performance and innovation.
- Employ cutting-edge LLMs to analyze extensive conversational data and improve user connections.
- Prototype, refine, and deploy production-ready ML solutions that address real user challenges.
- Work collaboratively with engineering, data science, and product teams to bring bold ideas to fruition.
- Explore and implement new AI tools and techniques to keep Grindr’s technology at the forefront.

Your Qualifications:
- A minimum of 7 years of experience in building machine learning systems, particularly in developing systems from the ground up. Experience with recommendation systems is advantageous.
- Demonstrated ability to deliver scalable solutions, with proficiency in Python and popular machine learning frameworks.
- A proactive approach to tackling complex challenges with tangible outcomes.
- Familiarity with data and deployment technologies (e.g., Snowflake) is beneficial.
At Protegrity, we are at the forefront of data protection innovation, harnessing the power of AI and quantum-resistant cryptography. Our mission is to transform how sensitive data is safeguarded across cloud-native, hybrid, and on-premises environments. Utilizing cutting-edge cryptographic techniques, including tokenization and format-preserving encryption, we ensure that data remains both valuable and secure.

Join us in a collaborative environment where your contributions will directly impact our industry. By working with some of the brightest minds, you will help redefine data security in a GenAI era, where data is the ultimate currency. If you're passionate about shaping the future of data protection, then Protegrity is the place for you!
Join our dynamic AI R&D team as an AI Scientist focused on Machine Learning. In this pivotal role, you will lead the development and implementation of advanced deep learning models to address real-world temporal modeling challenges in the manufacturing sector. We are in search of a candidate with extensive practical R&D experience, firmly rooted in robust th…
At Rhoda AI, we are pioneering the future of humanoid robotics by establishing a comprehensive stack that includes advanced, software-defined hardware along with foundational models and video world models to drive our innovations. Our robots are engineered to be versatile, capable of navigating complex real-world scenarios that extend beyond traditional training environments. Our interdisciplinary research team, featuring experts from prestigious institutions such as Stanford, Berkeley, and Harvard, is at the forefront of large-scale learning, robotics, and systems engineering. With over $400 million raised, we are making significant investments in research and development, hardware innovation, and scaling our manufacturing capabilities to bring our vision to life.

We are seeking a motivated Machine Learning Inference Engineer to join our team and contribute to the development and operation of the inference systems that power our automation stack. You will play a crucial role in ensuring the efficient and reliable execution of large foundation models, collaborating closely with our robotic platforms and internal task tools.

Key Responsibilities:
- Develop and maintain infrastructure for model inference across both cloud and on-premises environments.
- Optimize the latency, throughput, and reliability of deployed machine learning models.
- Design and scale services for serving diverse foundation models in both research and production contexts.
- Collaborate with research and robotics teams to enhance inference optimization and integration.
- Create tools for model deployment, version control, and observability to facilitate rapid iteration cycles.
- Contribute to the robustness and scalability of the inference stack as model complexity and deployment demands evolve.

Qualifications:
- Minimum of 3 years of experience in machine learning infrastructure, MLOps, or backend systems.
- Proven experience deploying and managing machine learning inference workloads in production environments.
- Excellent knowledge of Kubernetes and containerized deployment pipelines.
- Familiarity with cloud service providers such as AWS and GCP, including GPU orchestration capabilities.
- Experience with popular ML frameworks including PyTorch and TensorFlow, as well as model serving tools like Triton, TorchServe, and Ray Serve.
- Strong debugging capabilities and a proactive ownership mindset, comfortable resolving issues across the technology stack.
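As an illustration of the latency work this role describes, serving teams typically track tail percentiles (p95/p99) rather than averages, because a few slow requests dominate user-visible behavior. A minimal Python sketch, where `predict` is a hypothetical stand-in for a real call to a serving endpoint such as Triton or TorchServe:

```python
# Hedged sketch: measuring serving-latency percentiles for a model
# endpoint. The predict() callable is an illustrative stand-in for a
# real inference request; nothing here reflects any specific stack.
import time

def percentile(sorted_samples, q):
    """Approximate nearest-rank percentile of a pre-sorted list."""
    idx = min(int(q / 100.0 * len(sorted_samples)), len(sorted_samples) - 1)
    return sorted_samples[idx]

def benchmark(predict, requests):
    """Time each request and report p50/p95/p99 latency in milliseconds."""
    samples = []
    for req in requests:
        start = time.perf_counter()
        predict(req)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {q: percentile(samples, q) for q in (50, 95, 99)}
```

In practice a load generator would issue requests concurrently and at a controlled rate; this serial loop only shows the accounting.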
About Nightfall:
Nightfall is an innovative, AI-driven platform specializing in unified data loss prevention and insider risk management. We secure sensitive data across various environments, including SaaS applications, Generative AI tools, email, and endpoint devices. Trusted by numerous clients, from AI pioneers to Fortune 10 banks, Nightfall empowers organizations to innovate safely, mitigating the risks associated with data loss and intellectual property exposure. Our intelligent platform automates data loss prevention, allowing security teams to focus on strategic initiatives by resolving security violations proactively and providing real-time training to users.

Our endeavors are supported by top-tier venture capital firms such as Bain Capital Ventures, Venrock, WestBridge Capital, and Pear VC, alongside cybersecurity leaders including Frederic Kerrest, Maynard Webb, Ryan Carlson, and Kevin Mandia.

About the Role:
We seek a highly skilled technical leader to join our expanding team at Nightfall. As the Lead AI/ML Data Scientist within the AI Engineering organization, you will be pivotal in developing ML/NLP models and Generative AI solutions that enhance our Data Loss Prevention (DLP) and security products. This role involves spearheading research and applying advanced machine learning techniques to address security challenges, while guiding ML and backend engineers in deploying systems into production. You will also be instrumental in shaping the future architecture of our AI platform.

This position is hybrid, requiring three days in the office at our Palo Alto, California location, and represents a fantastic opportunity for those passionate about data science and machine learning engineering.
About Voltai
At Voltai, we are pioneering the future of artificial intelligence by developing world models and agents capable of learning, evaluating, planning, experimenting, and interacting with the physical world. Our initial focus is on understanding and creating advanced hardware, electronic systems, and semiconductors, utilizing AI to design and innovate beyond human cognitive boundaries.

About Our Team
Our remarkable team is backed by esteemed Silicon Valley investors, Stanford University, and industry leaders including CEOs and Presidents of Google, AMD, Broadcom, and Marvell. We boast a diverse group of former Stanford professors, SAIL researchers, Olympiad medalists, CTOs of prominent tech firms, and high-ranking officials with experience in national security and foreign policy.

What We Are Looking For
- Exceptional AI/ML engineering skills, ideally from top-tier programs in Computer Science, Electrical Engineering, Mathematics, or Physics.
- Demonstrated success in delivering AI/ML projects from initial concept through to production deployment.
- Hands-on experience in fine-tuning and deploying large language models (LLMs) within production environments.
- Experience working with multi-modal models that integrate text, image, or audio inputs.

Bonus Points
- Experience in competitive programming.
- Contributions to open-source projects.
- Recognition through awards or publications in leading journals and conferences.
- Ability to thrive in a dynamic, fast-paced startup environment.
Upwork Inc.
Upwork Inc. connects businesses with skilled professionals in AI, machine learning, software development, sales, marketing, customer support, finance, and accounting. The company's platforms, including the Upwork Marketplace and Lifted, help organizations of all sizes find and manage freelance, fractional, and payrolled talent for a range of contingent work needs. Upwork supports both large enterprises and entrepreneurs in sourcing talent and implementing AI-driven solutions. The company's network covers more than 10,000 skills, enabling clients to scale and adapt their workforce for changing business demands. Since launch, Upwork has processed over $30 billion in transactions. The company's mission centers on expanding opportunities at every stage of work.

Learn more: Upwork Marketplace (upwork.com) and Lifted (go-lifted.com).
At Rhoda AI, we are pioneering the development of a comprehensive full-stack platform for the next generation of humanoid robots. Our innovative approach encompasses high-performance, software-defined hardware along with foundational and video world models that empower our robotic systems. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world scenarios, including those not encountered during training. Collaborating with a distinguished research team from Stanford, Berkeley, Harvard, and other leading institutions, we operate at the forefront of large-scale learning, robotics, and systems engineering. With over $400M in funding, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to bring our vision to life.

We are on the lookout for a Staff / Principal Machine Learning Engineer to take charge of our training platform. This pivotal system is essential for ensuring that large-scale training is reliable, reproducible, and straightforward to execute. You will play a crucial role in defining the lifecycle of training jobs, including their launch, tracking, recovery, and debugging across our clusters. Your contributions will enable researchers to innovate rapidly without infrastructure hindrances.

In this role, you will be at the heart of enhancing research efficiency: when a training job fails, your system will allow for automatic recovery; when experiments become challenging to reproduce, you will implement effective solutions; and when GPU hours are squandered, you will ensure visibility and preventative measures are in place.
About Pathway
Pathway is revolutionizing artificial intelligence with the introduction of the world’s first post-transformer model that mimics human thought processes. Our innovative architecture surpasses traditional Transformer models, providing enterprises with unparalleled transparency into model operations. By integrating this foundational model with the fastest data processing engine available, Pathway empowers organizations to transcend mere incremental optimization and achieve genuinely contextualized, experience-driven intelligence. Trusted by prestigious clients including NATO, La Poste, and Formula 1 racing teams, we are at the forefront of AI advancements.

Led by visionary CEO Zuzanna Stamirowska, a complexity scientist, our team includes AI trailblazers such as CTO Jan Chorowski, who pioneered the application of Attention in speech and collaborated with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a distinguished computer scientist and quantum physicist who earned his PhD at just 20 years old. Supported by prominent investors and advisors like Lukasz Kaiser, co-author of the Transformer architecture (the “T” in ChatGPT) and a key researcher in OpenAI's reasoning models, Pathway is headquartered in Palo Alto, California.

The Opportunity
We are on the lookout for passionate Machine Learning/AI Software Engineering interns with a solid foundation in machine learning model research.

Your Responsibilities
- Assist in training Large Language Models (LLMs)
- Conduct benchmarking of LLMs
- Prepare and evaluate training datasets
- Collaborate with the core Pathway Research Team

Your contributions will significantly impact the advancement of the AI landscape.
About Us
Hippocratic AI stands at the forefront of generative AI in the healthcare sector. Our innovative platform is the only one capable of engaging in safe, autonomous clinical conversations with patients, supported by our proprietary LLMs in the Polaris constellation, boasting an accuracy rate of over 99.9%.

Why Join Our Team
- Revolutionize healthcare with safety-centric AI. We are pioneering the world's first healthcare-specific, safety-oriented LLM, a groundbreaking platform focused on enhancing patient outcomes on a global scale. This is a unique opportunity to contribute to category creation.
- Collaborate with visionaries. Co-founded by CEO Munjal Shah alongside a distinguished team of physicians, hospital executives, AI innovators, and researchers from esteemed institutions such as El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
- Supported by top-tier investors. We recently secured a $126M Series C funding round at a valuation of $3.5B, led by Avenir Growth, bringing our total funding to $404M with contributions from notable investors like CapitalG, General Catalyst, a16z, Kleiner Perkins, and others.
- Build alongside experts in healthcare and AI. Join a team of professionals dedicated to enhancing care, advancing science, and creating transformative technologies that ensure our platform is robust, reliable, and revolutionary.

Location Requirement
We believe collaboration sparks the best ideas. To foster rapid teamwork and a vibrant company culture, this position requires daily presence in our Palo Alto office, five days a week, unless stated otherwise.

About the Role
In healthcare AI, evaluation is crucial: if it can't be measured, it can't be deployed. You will develop systems that assess the safety, accuracy, and readiness of our models for real-world patient interactions: evaluation frameworks, synthetic data pipelines, automated benchmarks, and LLM-as-judge systems. This role presents a high-impact engineering opportunity where your contributions directly influence what is launched into production.

What You’ll Do
- Create and implement evaluation frameworks focused on LLM safety, clinical accuracy, and conversational quality.
- Build synthetic data generation pipelines to rigorously test models across varied clinical scenarios.
- Develop scalable automated and human-in-the-loop evaluation pipelines.
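The LLM-as-judge systems mentioned above reduce, in their simplest form, to prompting a judge model with a rubric and aggregating its scores. A minimal Python sketch; the `judge_model` callable, the rubric text, and the 1-5 safety scale are all illustrative assumptions, not Hippocratic AI's actual evaluation stack:

```python
# Hedged sketch of an LLM-as-judge evaluation loop. judge_model is any
# callable mapping a prompt string to a text completion; the rubric and
# pass threshold are hypothetical.
RUBRIC = (
    "Rate the assistant reply for clinical safety on a 1-5 scale. "
    "Respond with only the integer."
)

def judge(judge_model, patient_msg, reply):
    """Score one conversational turn with the judge model; None if unparsable."""
    prompt = f"{RUBRIC}\n\nPatient: {patient_msg}\nAssistant: {reply}\nScore:"
    raw = judge_model(prompt).strip()
    return int(raw) if raw.isdigit() else None

def evaluate(judge_model, transcripts, threshold=4):
    """Pass rate over (patient_msg, reply) pairs; skips unparsable scores."""
    scores = [judge(judge_model, p, r) for p, r in transcripts]
    scored = [s for s in scores if s is not None]
    passed = sum(s >= threshold for s in scored)
    return passed / len(scored) if scored else 0.0
```

A production pipeline would additionally calibrate the judge against human clinician ratings and track disagreement, which is where the human-in-the-loop component comes in.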
At Rhoda AI, we are pioneering the development of a comprehensive foundation for the next generation of humanoid robots. Our focus spans high-performance, software-defined hardware to advanced foundational models and video world models that govern robot functionality. Our robots are engineered to be versatile, capable of navigating intricate, real-world environments and tackling scenarios not previously encountered in training. We stand at the crossroads of large-scale learning, robotics, and systems, bolstered by a research team comprising experts from prestigious institutions such as Stanford, Berkeley, and Harvard. Our ambition is not merely to add features; we are crafting a revolutionary computing platform for physical tasks, underpinned by over $400 million in funding, driving aggressive investments in research & development, hardware innovation, and scaling up manufacturing to bring our vision to fruition.

Role Overview
We are in search of a Principal Machine Learning Systems Engineer to take charge of our training systems' performance from start to finish. You will be instrumental in defining how our model training scales, enhancing efficiency, scalability, and accuracy across extensive multimodal training environments. This is a pivotal systems role, not merely infrastructure support. Your contributions will significantly influence our compute utilization efficiency, the scalability of models across thousands of GPUs, and the speed of research iterations.

Your Responsibilities
Oversee training performance from start to finish:
- Analyze and enhance the performance of large-scale multimodal training encompassing vision, video, proprioception, actions, and language.
- Create systematic performance attributions by breaking step time down into compute, communication, and input pipeline, along with scaling curves for various cluster sizes, identifying key bottlenecks.
- Drive quantifiable improvements across:
  - Distributed efficiency (e.g., communication and compute overlap, bucketization, topology-aware mapping, and parallelism strategies).
  - Compute efficiency (e.g., identifying kernel hotspots, operator fusion, attention optimization, and minimizing framework/runtime overhead).
  - Memory efficiency (e.g., activation checkpointing, sequence packing, and reducing fragmentation).
Design training systems rather than just tuning them:
- Define and refine parallelism strategies including data, tensor, pipeline, sharding, and hybrid approaches.
- Enhance execution efficiency through communication scheduling, graph capture, execution optimization, and runtime enhancements.
- Contribute to the overall system architecture with innovative solutions.
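Performance attribution of the kind described here is, at its core, an accounting identity: wall-clock step time splits into compute, exposed (non-overlapped) communication, and input-pipeline stalls. A toy Python version with illustrative arguments; in a real system these numbers would come from profiler traces (e.g., torch.profiler or NCCL timing), not function parameters:

```python
# Hedged sketch of per-step performance attribution for distributed
# training. Communication hidden under compute (overlap_ms) does not
# count against step time; only the exposed remainder does.
def attribute_step(compute_ms, comm_ms, overlap_ms, data_ms):
    """Return the fraction of step time spent in each bucket."""
    exposed_comm = max(comm_ms - overlap_ms, 0.0)
    step = compute_ms + exposed_comm + data_ms
    return {
        "compute": compute_ms / step,
        "communication": exposed_comm / step,
        "input_pipeline": data_ms / step,
        "step_ms": step,
    }
```

For example, 80 ms of compute, 30 ms of communication of which 20 ms is overlapped, and 10 ms of data loading gives a 100 ms step that is 80% compute-bound; collecting this breakdown across cluster sizes is what produces the scaling curves mentioned above.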
At Rhoda AI, we are pioneering the development of a comprehensive platform for the next generation of humanoid robots. Our ambition encompasses everything from high-performance, software-defined hardware to the foundational models and video world models that govern their operations. Our robots are engineered as versatile generalists, adept at navigating intricate, real-world environments and addressing scenarios that are not encountered during training. We operate at the confluence of large-scale learning, robotics, and systems, with a research team that includes esteemed researchers from Stanford, Berkeley, Harvard, and other renowned institutions. Rather than merely enhancing a feature, we are constructing an entirely new computing platform dedicated to physical tasks. With over $400M raised, we are aggressively investing in research and development, hardware innovation, and scaling up manufacturing to realize this vision.

Key Responsibilities
- Lead research initiatives focused on foundational models and world models for robotics, including representation learning, dynamics/prediction, planning, and control.
- Define research challenges and formulate hypotheses rooted in real-world robotic autonomy requirements.
- Design and execute rigorous experiments at scale, encompassing ablations, benchmarking, and evaluation methodologies.
- Develop and assess model architectures aimed at enhancing long-horizon predictions, rollout quality, and overall robotic task performance.
- Investigate and improve pre-training and post-training processes, including fine-tuning, alignment, and evaluation of large multimodal models.
- Collaborate closely with Research Engineers to translate innovative ideas into scalable training pipelines and dependable systems.
- Communicate research findings effectively through internal documentation, presentations, and reviews.
- Publish and present research at prestigious venues.

Required Qualifications
- Ph.D. in a relevant discipline such as Machine Learning, Robotics, Computer Science, Electrical Engineering, Applied Mathematics, or Computer Vision.
- A strong publication record at high-quality venues (e.g., NeurIPS, ICML, ICLR, CoRL, RSS, ICRA, CVPR).
- In-depth knowledge of current machine learning techniques, particularly:
  - Deep learning and representation learning.
  - Sequence modeling and transformers.
  - Generative modeling (e.g., diffusion, autoregressive, latent-variable models).
About Glean:
Founded in 2019, Glean is a pioneering AI-driven knowledge management platform that empowers organizations to efficiently discover, organize, and share vital information across their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean enables employees to access critical knowledge precisely when they need it, enhancing productivity and collaboration. Our state-of-the-art AI technology streamlines knowledge discovery, allowing teams to harness their collective intelligence more effectively.

Glean was conceived by Founder & CEO Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and an overwhelming array of SaaS tools. This insight drove him to create a superior solution: an AI-powered enterprise search platform designed for intuitive and rapid access to information. Since its inception, Glean has evolved into a premier Work AI platform, merging enterprise-grade search, an AI assistant, and robust application and agent-building capabilities to fundamentally transform the way employees engage with their work.

About the Role:
We are seeking experienced engineers to contribute their expertise and vision to the development of next-generation intelligent enterprise AI assistants and autonomous AI agents. Our mission involves reimagining how LLMs (Large Language Models) and agents can reason, plan, and execute complex, multi-step enterprise workflows. You will operate at the intersection of applied research and production engineering, focusing on areas such as agentic frameworks, LLM orchestration, low-latency LLM inference and optimization, domain-adapted and memory-augmented LLMs, reinforcement learning, and evaluation frameworks for intricate enterprise tasks. Our approach emphasizes collaboration with customers to deeply understand their challenges and apply the ideal blend of research-driven and practical engineering solutions to address them.
Gauss Labs is seeking a dynamic and skilled Senior AI Engineer to pioneer transformative Industrial AI solutions, setting new standards for artificial intelligence in the manufacturing sector. Our collaborations with leading manufacturing clients provide unparalleled access to extensive real-time data derived from their operations. Leveraging advanced AI technologies, we are dedicated to creating innovative AI and machine learning solutions that elevate manufacturing to unprecedented heights.

In this pivotal role, you will be instrumental in translating groundbreaking AI and machine learning research into resilient, scalable software applications. Your contributions will facilitate the smooth deployment of models in production environments, enhancing the overall success of AI initiatives within the organization. You will collaborate closely with experienced Applied Scientists, Software Engineers, and Program Managers based in both Palo Alto, California, and Seoul, South Korea.
Role Overview
Mistral is hiring an Applied AI Forward Deployed Machine Learning Engineer in Palo Alto. This role centers on bringing advanced machine learning solutions into real-world client settings. The work directly shapes client outcomes and business impact.

What You Will Do
- Deploy machine learning models and systems for client projects
- Work closely with cross-functional teams to understand specific challenges
- Develop and adapt AI solutions to fit client needs, focusing on efficiency and practical results
Our Vision
At Tinder, we believe that the thrill of meeting new people is one of life’s greatest joys. We are dedicated to nurturing the magic of human connection, engaging tens of millions of users worldwide. With hundreds of millions of downloads, over 2 billion swipes daily, 20 million matches each day, and a presence in more than 190 countries, our influence is vast and continually expanding.

Our team collaborates to tackle intricate challenges, blending insights from human relationships, behavioral science, network economics, AI, and machine learning, while prioritizing user safety and cultural sensitivity. We explore the depths of loneliness, love, and connection.

Internship Duration
The internship will take place from June 1 to August 28, 2026.

Work Environment
This is a hybrid position, requiring in-office collaboration three days a week in our Palo Alto, California office.

Role Overview
As a member of the Tinder ML team, you will play a crucial role in shaping the product experience across diverse domains, including Recommendations, Trust & Safety, Profile Management, Chat, Growth, and Revenue. Our goal is to leverage machine learning to enhance user experiences, build trust, and drive business growth within Tinder's ecosystem. This internship offers a unique opportunity to work alongside experienced engineers to develop and implement machine learning solutions that align with Tinder’s strategic objectives.
Mistral AI
About Mistral AI
At Mistral AI, we harness the transformative power of artificial intelligence to streamline tasks, save valuable time, and foster enhanced creativity and learning. Our innovative technology is crafted to effortlessly integrate into everyday work environments.

We are committed to democratizing AI by offering high-performance, optimized, open-source models, products, and solutions. Our extensive AI platform caters to both enterprise and individual needs, featuring products like Le Chat, La Plateforme, Mistral Code, and Mistral Compute, creating cutting-edge intelligence accessible to all users.

As a vibrant and collaborative team, we are driven by our passion for AI and its potential to revolutionize society. Our diverse workforce excels in competitive settings and is dedicated to fostering innovation. With teams distributed across France, the USA, the UK, Germany, and Singapore, we pride ourselves on our creativity, humility, and team spirit. Join us in shaping the future of AI at a pioneering company. Discover more about our culture at https://mistral.ai/careers.

Role Overview
About the Research Engineering Team
The Research Engineering team operates across Platform (shared infrastructure & clean coding practices) and Embedded (integrated within research squads). Our engineers have the flexibility to navigate the research↔production spectrum as their interests and needs evolve.

As a Machine Learning Research Engineer, you will be responsible for building and optimizing the large-scale learning systems that underpin our open-weight models. Collaborating closely with Research Scientists, you may join either:
- Platform RE Team: Focus on enhancing our shared training frameworks, data pipelines, and tools utilized across all teams; or
- Embedded RE Team: Become part of a research squad (Alignment, Pre-training, Multimodal, etc.) turning innovative ideas into scalable, repeatable code.

Key Responsibilities
• Support researchers by managing the complex aspects of large-scale ML pipelines and developing robust tools.
• Bridge cutting-edge research with production: integrate checkpoints, optimize evaluations, and create accessible APIs.
• Conduct experiments utilizing the latest deep-learning techniques (sparsification on 70B+ models, distributed training across thousands of GPUs).
• Design, implement, and benchmark ML algorithms; produce clear and efficient code in Python.
• Deliver prototypes that evolve into production-grade components for Le Chat and our enterprise API.
About Glean:
Established in 2019, Glean is a pioneering AI-driven knowledge management platform designed to empower organizations to swiftly locate, structure, and disseminate information among their teams. By seamlessly integrating with tools such as Google Drive, Slack, and Microsoft Teams, Glean ensures that employees can access the right knowledge at the right time, enhancing productivity and collaboration. Our state-of-the-art AI technology simplifies knowledge discovery, making it more efficient for teams to harness their collective intelligence.

Glean was founded by Arvind Jain, who recognized the challenges employees face in navigating fragmented knowledge and diverse SaaS tools that hinder productivity. With a vision to create a superior solution, he developed an AI-powered enterprise search platform that facilitates quick and intuitive access to essential information. Since then, Glean has transformed into a leading Work AI platform, integrating enterprise-grade search, an AI assistant, and robust application and agent-building capabilities, fundamentally changing the way employees work.

About the Role:
We are on the lookout for talented Machine Learning Engineers who are eager to engage in both Quality Assurance and traditional ML tasks to aid in the development of our revolutionary Enterprise Brain. The Enterprise Brain team is crafting a suite of proactive AI products aimed at transforming enterprise workflows by identifying and automating tasks for users, thereby unlocking genuine productivity. This initiative is based on a profound understanding of user needs and a sophisticated Enterprise graph. The role will involve leveraging both LLM and advanced ML techniques, orchestrating agents, and employing cutting-edge ranking methods.

Your Responsibilities:
Tackle challenging ML problems that involve...
About Us
Odyssey AI is at the forefront of artificial intelligence innovation, developing groundbreaking general-purpose world models that represent a new paradigm in multimodal intelligence. Our cutting-edge models, including Odyssey-2 Pro, are unlocking new possibilities for consumer, enterprise, and intelligence applications.

Your Role
We seek a highly skilled and imaginative researcher who thrives on the challenge of invention. You are driven by curiosity, eager to transform abstract concepts into tangible systems, and adept at integrating research with practical applications. Your ambition to create general-purpose world models capable of real-time generation, comprehension, and interaction is key.

Responsibilities
- Pursue innovative research into the foundational aspects of world models, including representation, temporal coherence, causality, control, and long-term imagination.
- Develop and oversee an independent research agenda, tackling unresolved questions, formulating hypotheses, and creating algorithms that could redefine the capabilities of these models.
- Investigate novel architectures, learning objectives, and training paradigms that advance beyond current diffusion and transformer methodologies.
- Create conceptual frameworks and experimental systems to enhance our understanding of model perception and user intent responsiveness within real-world contexts.
- Collaborate with engineering and infrastructure teams to translate foundational insights into prototypes that showcase new functionalities rather than mere incremental improvements.
- Contribute to the scientific community by publishing research, mentoring peers, and actively engaging with the broader research ecosystem to influence the evolution of general-purpose world models.
