Experience Level
Entry Level
About the job
About the Internship
Bosch Group is seeking an Automated Driving Machine Learning Intern in Sunnyvale, California. This role offers hands-on experience with real projects in automated driving and machine learning. Interns will apply academic skills to practical challenges in the field.
42dot is seeking a Senior Machine Learning Platform Engineer to support its work in autonomous driving technology. This position is based in Sunnyvale, United States.

Role overview
This role focuses on developing machine learning platforms that support autonomous vehicle systems. The work involves designing and building scalable infrastructure to handle complex ML workloads, with a strong emphasis on performance and reliability.

What you will do
Lead the creation and enhancement of machine learning solutions for autonomous driving applications.
Design, implement, and maintain ML platforms to ensure they meet high standards for scalability and reliability.

Requirements
Extensive experience in building and maintaining machine learning platforms.
Background in supporting ML solutions for autonomous vehicle technology or similar fields.
Strong skills in designing scalable and high-performance systems.
Join Bosch as an enthusiastic intern and contribute to pioneering advancements in reinforcement learning and simulation for autonomous vehicle planning. This role focuses on innovative research and development of cutting-edge algorithms, conducting experiments, and translating groundbreaking ideas into viable products.

Internship Opportunities: Collaborate with a team of skilled researchers and engineers in one of the following domains:
GPU-Accelerated Simulation for Reinforcement Learning: Design and improve high-performance, scalable simulation environments specifically for reinforcement learning applications in autonomous driving.
ML-Based Planning Models Integration: Create, train, and embed planning models for autonomous driving, utilizing GPU-accelerated simulations to enhance performance in complex driving scenarios.
Hybrid Learning Approaches: Innovate and enhance learning methodologies that integrate imitation and reinforcement learning, emphasizing multi-agent self-play techniques.

Key Responsibilities:
Engage in transformative engineering projects that apply deep learning and reinforcement learning to resolve challenges in autonomous driving planning and simulation.
Collaborate with an international team of experts to implement advanced research results into Bosch's business units, testing and validating concepts in simulated environments and with self-driving vehicles.
Work alongside domain specialists to explore novel learning-based planning and decision-making strategies.
Conduct benchmarking and validation of models using extensive datasets and simulations.
Share research outcomes through comprehensive internal reports and potential external publications.
About the Institute of Foundation Models
We are an innovative research laboratory focused on the creation, comprehension, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and contribute significantly to a knowledge-driven economy.

Joining our team presents a unique opportunity to engage in the core of advanced foundation model training, collaborating with leading researchers, data scientists, and engineers as we address the most pivotal and influential challenges in AI advancement. Your work will involve the creation of groundbreaking AI solutions with the potential to revolutionize entire industries. Employing strategic and innovative problem-solving skills will be crucial in establishing MBZUAI as a premier global center for high-performance computing in deep learning, fostering remarkable discoveries that inspire future AI trailblazers.
Join us at Meshy as a Machine Learning Systems Intern, where your passion for AI, graphics, and innovative product development will thrive in a collaborative environment.

What We're Looking For:
Commit to a full-time internship for a minimum of 12 weeks.
Aim to transition to a full-time role at Meshy post-graduation (candidates graduating between September 2026 and September 2027 are preferred).
Open to candidates pursuing undergraduate, master's, or PhD degrees.
A solid foundation in technical skills, coupled with a drive for innovation and a willingness to tackle challenges.

Your Role
As a key contributor to our team, you will assist in developing the most extensive end-to-end 3D native machine learning systems. This role encompasses the entire ML framework, from pretraining to fine-tuning and inference. We seek individuals with robust hands-on engineering capabilities, a thirst for knowledge, and the ability to excel in a dynamic, ownership-driven setting.

About Us
At Meshy, we envision a world where 3D creation knows no limits. Our mission is to unleash creativity by offering a comprehensive 3D content pipeline, which includes transforming text and images into 3D models, texturing, editing, and animation rigging. We have cultivated a thriving community for creators, providing a platform to share work, draw inspiration, and utilize assets across projects. Recognized as the leader in 3D generative AI (top-ranked in the 2024 A16Z Games survey), our technology is embraced by industry giants like Meta, Square Enix, and DeepMind, impacting sectors like gaming, film, 3D printing, and robotics.

Your Next Challenge
3D is at the forefront of Generative AI, presenting unique challenges in training and inference.
Your journey with Meshy will involve a full stack of AI responsibilities, including debugging and monitoring hardware platforms, creating training frameworks, scaling high-throughput 3D data pipelines, and collaborating on innovative model architectures with our research team.
About the Institute of Foundation Models
We are a pioneering research laboratory focused on the development, understanding, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and make substantial contributions to a knowledge-driven economy.

Join us and collaborate with top-tier researchers, data scientists, and engineers at the forefront of foundation model training. Engage in solving critical challenges that can redefine entire sectors through advanced AI solutions. Your strategic and innovative problem-solving skills will play a vital role in positioning MBZUAI as an international leader in high-performance computing for deep learning, facilitating discoveries that will inspire future AI trailblazers.

The Role
We are seeking a skilled distributed ML infrastructure engineer to enhance and expand our training systems. You will collaborate closely with distinguished researchers and engineers to:
• Develop and scale distributed training frameworks (e.g., DeepSpeed, FSDP, FairScale, Horovod)
• Implement distributed optimizers based on mathematical specifications
• Create robust configuration and launching systems across multi-node, multi-GPU clusters
• Manage experiment tracking, metrics logging, and job monitoring for enhanced visibility
• Enhance the reliability, maintainability, and performance of training systems
While much of your work will support large-scale pre-training, prior pre-training experience is not mandatory; strong infrastructure and systems expertise are our primary focus.

Key Responsibilities
• Distributed Framework Ownership – Extend or adapt training frameworks (e.g., DeepSpeed, FSDP) to accommodate new applications and architectures.
• Optimizer Implementation – Convert mathematical optimizer specifications into distributed implementations.
• Launch Config & Debugging – Develop and troubleshoot multi-node launch scripts with adaptable batch sizes and parallelism strategies.
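The optimizer-implementation responsibility described above, converting a mathematical update rule into code, can be illustrated with a minimal single-process sketch. This is plain Python with illustrative names and no distributed machinery; a real distributed version would additionally shard optimizer state and all-reduce gradients across workers.

```python
# Single-process sketch of translating an optimizer's math into code.
# Update rule (SGD with momentum): v <- mu*v + g ; theta <- theta - lr*v

def sgd_momentum_step(params, grads, velocity, lr=0.1, mu=0.9):
    """Apply one SGD-with-momentum update; returns (new_params, new_velocity)."""
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        v_next = mu * v + g                 # v <- mu*v + g
        new_velocity.append(v_next)
        new_params.append(p - lr * v_next)  # theta <- theta - lr*v
    return new_params, new_velocity

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
params, velocity = [1.0], [0.0]
for _ in range(100):
    grads = [2.0 * p for p in params]
    params, velocity = sgd_momentum_step(params, grads, velocity)
```

Keeping the math-to-code mapping this explicit (one line per term of the update rule) is what makes a later distributed port auditable against the original specification.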
Cerebras Systems is at the forefront of AI technology, having developed the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables Cerebras to provide unparalleled training and inference speeds, allowing machine learning practitioners to seamlessly run large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras proudly serves a diverse clientele, including leading model labs, global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing key workloads with ultra high-speed inference. Our groundbreaking wafer-scale architecture ensures that Cerebras Inference stands as the world's fastest solution for Generative AI inference, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable increase in speed is transforming the user experience of AI applications, enabling real-time iterations and enhancing intelligence through additional agentic computation.

About The Role
Cerebras is expanding its Machine Learning team to spearhead a new initiative that aligns with our existing teams. We are seeking a Principal Investigator to collaborate with our ML leaders in shaping this new effort while building the team and enhancing our capabilities. This new team will work in concert with our current ML divisions: Field ML, which directly engages with customers; Applied ML, which develops new ML capabilities and applications; and Core ML, which adapts ML algorithms to leverage the unique features of Cerebras hardware. The new team may undertake similar or complementary responsibilities.

The new team will focus on areas such as:
Post-training and reinforcement learning: Enhancing model deployment quality through advanced training, tuning, and reinforcement learning techniques, concentrating on specific downstream tasks;
Dataset curation and optimization: Implementing strategies to gather and select high-quality data, facilitating quicker and higher-quality model training and tuning;
LLM Pretraining: Engaging in...
Illumio builds technology to contain ransomware and security breaches, helping organizations defend against cyber threats. The Illumio AI Security Graph underpins a platform that spots and contains threats in hybrid multi-cloud setups, aiming to stop attacks before they spread. Illumio is recognized as a leader in microsegmentation and supports Zero Trust architectures for critical infrastructure. The engineering team focuses on advancing cybersecurity through leadership, autonomy, and a strong sense of ownership. Engineers here develop and maintain a scalable SaaS platform using cloud-native tools, with deployments in both cloud and on-premises environments. Precision, quality, and collaboration shape the team's work, and engineers are encouraged to take initiative at every level.

This Senior Machine Learning Engineer role is based onsite at Illumio's Sunnyvale headquarters. The position centers on designing and scaling systems that power Illumio's AI-driven security platform. Work involves handling large-scale data, distributed systems, and building advanced AI agents.

Key Responsibilities
Design and optimize high-throughput, event-driven systems with Apache Kafka to support real-time data flows.
Develop and maintain large-scale data pipelines using Apache Spark or Flink for high-volume analytics and AI features.
Create advanced AI agents that handle autonomous planning, memory management, and reliable tool use in distributed environments.
Lead architectural design for containerized services on Kubernetes, focusing on availability and scalability across cloud platforms such as AWS, Azure, and GCP.
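The event-driven pattern named in the responsibilities above can be sketched without a Kafka broker: a producer emits events onto a bounded queue and a consumer aggregates them, standing in for a topic plus consumer group. All names and event shapes here are illustrative, not Illumio's or Kafka's API.

```python
# Broker-free stand-in for an event-driven pipeline: producer -> queue -> consumer.
import queue
import threading
from collections import Counter

SENTINEL = None  # end-of-stream marker

def producer(q, events):
    for ev in events:
        q.put(ev)        # blocks when the queue is full: natural backpressure
    q.put(SENTINEL)

def consumer(q, counts):
    while True:
        ev = q.get()
        if ev is SENTINEL:
            break
        counts[ev["type"]] += 1  # aggregate events by type

events = [{"type": "flow"}] * 3 + [{"type": "alert"}] * 2
q = queue.Queue(maxsize=100)     # bounded, like a partition with limited buffer
counts = Counter()
t_prod = threading.Thread(target=producer, args=(q, events))
t_cons = threading.Thread(target=consumer, args=(q, counts))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

The bounded queue is the key design choice: it decouples producer and consumer rates while capping memory, which is the same property a real Kafka deployment provides at cluster scale.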
Full-time|$126K/yr - $423K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technologies. Established in 2017 and currently valued at $15 billion, this Silicon Valley powerhouse is dedicated to developing the essential digital infrastructure that will empower intelligent operations in every vehicle and machine worldwide. Our innovative solutions cater to the automotive, defense, trucking, construction, mining, and agriculture sectors, focusing on three pivotal areas: tools and infrastructure, operating systems, and autonomous capabilities. Our reputation is underscored by the trust placed in us by 18 of the top 20 global automakers and the United States military, among others. Our headquarters is in Sunnyvale, California, with additional offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

We promote a collaborative in-office culture and expect our team members to primarily work from their Applied Intuition office five days a week. However, we also value flexibility, allowing our employees to manage their schedules responsibly, which may include occasional remote work, starting the day with morning meetings from home, or leaving early to meet family obligations.

About the Role and Team
We are seeking enthusiastic Research Scientists to join our dynamic Research Group at Applied Intuition. Our mission is to develop pioneering technologies that drive the evolution of physical AI, particularly in two transformative applications: end-to-end autonomous driving and general-purpose robotics. Our team comprises distinguished experts from leading institutions and companies, celebrated for their remarkable contributions to both academia and industry, including eight Best Paper awards at prestigious conferences such as CVPR and ICRA. Learn more about our research initiatives at appliedintuition.com/research.

With access to industry-leading tools and infrastructure, our researchers can leverage millions of miles of data from extensive fleets and implement their innovative methods across diverse autonomous and robotic systems, including self-driving vehicles and autonomous machinery.
About the Institute of Foundation Models
We are a pioneering research lab focused on the development, understanding, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and make significant contributions to a knowledge-driven economy.

Join our dynamic team and engage in the heart of innovative foundation model training, collaborating with top-tier researchers, data scientists, and engineers. Tackle groundbreaking challenges in AI development and contribute to transformative AI solutions that have the potential to revolutionize industries. Your strategic and innovative problem-solving skills will be vital in establishing MBZUAI as a global center for high-performance computing in deep learning, enabling impactful discoveries that inspire the future of AI innovation.

Role Overview
Develop and Enhance Distributed Pre-Training Frameworks
· Implement DeepSpeed / FSDP / Megatron-LM on multi-node GPU clusters.
· Design robust launch scripts, resilient checkpoints, and job monitoring systems (e.g., for NCCL/Gloo communication and GPU health).
Transform Mathematical Concepts into High-Performance Production Code
· Prototype novel optimizers or attention mechanisms using PyTorch/NumPy/JAX or similar frameworks.
· Convert prototypes into efficient CUDA/Triton kernels with custom gradients and performance tests.
Enhance Training Efficiency and Stability
· Lead efforts in mixed-precision training, integrating bf16, fp8, etc., into regular workflows while assessing accuracy versus speed improvements and analyzing numerical stability.
· Utilize kernel fusion, communication tuning, and memory optimization to achieve state-of-the-art throughput.
Accelerate Research Progress
· Develop logging and metrics systems, along with experiment-tracking tools, to facilitate rapid iteration.
· Design ablation studies and statistical tests that validate or challenge new concepts.
· Guide interns and junior engineers through clear asynchronous design documentation and code reviews.

You will collaborate closely with researchers, deliver production code, and shape the landscape of large language models.
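The logging and experiment-tracking work mentioned for this role often starts from something as simple as append-only JSON-lines metric records, which any dashboard or comparison script can consume later. The sketch below is a minimal illustration in plain Python; the field names are invented for the example and are not any particular tool's schema.

```python
# Minimal experiment-tracking sketch: one JSON record per metric event.
import io
import json
import time

def log_metric(stream, step, **metrics):
    """Append one JSON-lines record with a step, timestamp, and metrics."""
    record = {"step": step, "ts": time.time(), **metrics}
    stream.write(json.dumps(record) + "\n")

# In a real run this would be an open file (or a logging service client);
# an in-memory buffer keeps the sketch self-contained.
buf = io.StringIO()
for step in range(3):
    log_metric(buf, step, loss=1.0 / (step + 1))

# Reading the log back is symmetric: one json.loads per line.
records = [json.loads(line) for line in buf.getvalue().splitlines()]
```

The append-only, one-record-per-line format is deliberately crash-tolerant: a killed job loses at most its final partial line, which matters for long multi-node training runs.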
Full-time|$204K/yr - $343K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technologies. Established in 2017 and currently valued at $15 billion, our Silicon Valley-based company is dedicated to creating the essential digital infrastructure that empowers intelligence across all moving machines globally. Our solutions serve critical sectors including automotive, defense, trucking, construction, mining, and agriculture, focusing on three main domains: tools and infrastructure, operating systems, and autonomy. Trusted by 18 of the top 20 global automakers, as well as the U.S. military and its allies, Applied Intuition is committed to delivering unparalleled physical intelligence solutions. Our headquarters is located in Sunnyvale, California, complemented by offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. For more information, visit applied.co.

We prioritize in-office collaboration and expect our employees to work primarily from our Applied Intuition office five days a week. However, we value flexibility and trust our team members to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving early when necessary to accommodate personal commitments.

About the Role
As an Engineering Manager on our Machine Learning Platform team, you will lead an exceptional group of engineers dedicated to building the infrastructure that enables Physical AI at scale.
Your team will oversee three pivotal areas: Training & Inference Orchestration, where we develop frameworks to efficiently schedule and execute extensive tasks across thousands of GPUs; GPU Cluster Architecture, where we design and expand what will become the industry's largest GPU cluster for Physical AI; and Performance Optimization, where we maximize hardware utilization, throughput, and cost efficiency for large-scale training and inference workloads. You will collaborate at the intersection of systems engineering and machine learning, working directly with stack development and research teams to eliminate bottlenecks and expedite the transition from experimentation to production.
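The orchestration area described above, scheduling jobs across many GPUs, can be reduced to a toy placement problem to show the core trade-off. The sketch below is an illustrative greedy scheduler in plain Python, not Applied Intuition's system; real orchestrators also handle gang scheduling, preemption, and network topology.

```python
# Toy GPU scheduler: greedily place the largest jobs first onto the node
# with the most free GPUs, to reduce fragmentation.

def schedule(jobs, nodes):
    """jobs: {name: gpus_needed}; nodes: {name: free_gpus} -> {job: node|None}."""
    placement = {}
    free = dict(nodes)
    for job, need in sorted(jobs.items(), key=lambda kv: -kv[1]):
        node = max(free, key=free.get)     # node with the most free GPUs
        if free[node] >= need:
            placement[job] = node
            free[node] -= need
        else:
            placement[job] = None          # pending: no node has capacity
    return placement

placement = schedule(
    {"pretrain": 8, "eval": 2, "finetune": 4},
    {"node-a": 8, "node-b": 8},
)
```

Largest-first placement is a classic bin-packing heuristic: scheduling the 8-GPU job before the small ones keeps a whole node free for it instead of fragmenting both nodes.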
Full-time|$125K/yr - $222K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technology. Established in 2017 and currently valued at $15 billion, this Silicon Valley powerhouse is creating the essential digital framework that will infuse intelligence into every moving machine worldwide. Our solutions serve key sectors including automotive, defense, trucking, construction, mining, and agriculture, focusing on three main areas: tools and infrastructure, operating systems, and autonomy. Our trusted solutions are utilized by 18 of the top 20 global automakers and the United States military along with its allies. Our headquarters is located in Sunnyvale, California, with additional offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Find out more at applied.co.

We prioritize in-office collaboration, expecting our employees to work from their Applied Intuition office five days a week, while also embracing flexibility. Employees are trusted to manage their schedules, which may include occasional remote work, starting the day with morning meetings from home, or leaving early for family commitments.

About the role
We are searching for a talented software engineer specializing in perception for autonomous vehicles or mobile robotics. Your role will involve enhancing perception modules within our autonomous vehicle framework, including the development of 4D world representations to facilitate seamless autonomy. You will also lead the design and implementation of computer vision and machine learning strategies that empower self-driving vehicles to navigate effectively.

In this dynamic and customer-centric team environment, you will not only contribute your engineering skills but also gain insights into best practices within the evolving autonomy sector. Our fast-paced culture encourages innovation and collaboration.
At Cerebras Systems, we are at the forefront of AI technology, developing the world's largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables the computational power of dozens of GPUs on a single chip, simplifying programming to the ease of handling one device. This unique design allows us to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly deploy large-scale ML applications without the complexity of managing numerous GPUs or TPUs.

Our clientele includes leading model labs, global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras aimed at leveraging 750 megawatts of scale to revolutionize critical workloads through ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over tenfold. This significant boost in speed is transforming the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through added agentic computation.

About The Role
As an Applied Machine Learning Research Scientist at Cerebras, you will be instrumental in converting modern machine learning methodologies into scalable, high-performance systems. This position focuses on the intersection of modeling and systems, emphasizing the efficient execution of existing algorithms rather than merely publishing new ones. Your efforts will significantly influence the training, optimization, and deployment of large language models (LLMs) on one of the most sophisticated AI platforms in existence.

You will collaborate closely with fellow researchers and senior engineers to enhance workflows for LLM pretraining, fine-tuning, and reinforcement learning-based post-training. Your responsibilities will encompass building training pipelines, debugging complex system behaviors, improving model quality, and refining data and evaluation strategies. Your contributions will have a direct and meaningful impact on advancing our capabilities in AI.
We are seeking a dynamic and experienced Manager for our AI and Machine Learning team at LinkedIn. In this role, you will lead a talented group of engineers and data scientists dedicated to developing cutting-edge solutions that enhance user experience and drive engagement across the platform. Your leadership will be crucial in shaping the direction of our AI initiatives, ensuring they align with our mission to connect the world's professionals.

The ideal candidate will possess a strong background in machine learning algorithms, data analysis, and software development, as well as exceptional communication skills to effectively collaborate with cross-functional teams. If you are passionate about leveraging AI to create impactful solutions, we want to hear from you!
Full-time|On-site|Sunnyvale, CA; San Francisco, CA; Seattle, WA
Join our ETA Team at DoorDash as a Machine Learning Engineer. In this dynamic role, you will harness the power of machine learning to drive innovation and optimize our delivery processes. You will collaborate with cross-functional teams to develop and deploy scalable machine learning models that enhance customer experience and operational efficiency.
Full-time|$155K/yr - $200K/yr|On-site|Sunnyvale, CA
At Bee Genius, we are pioneering the future of work today with innovative AI solutions that transform industries.

Job Overview: We are looking for a talented AI/Machine Learning Engineer to become a vital part of our dynamic team. In this role, you will leverage your expertise to develop and deploy cutting-edge machine learning models and algorithms aimed at addressing complex business challenges.

Key Responsibilities:
Design, build, and refine machine learning models and algorithms.
Train and assess models using extensive datasets.
Optimize models for enhanced performance and accuracy.
Collaborate with data scientists and software engineers to integrate models into operational systems.
Stay abreast of the latest trends in AI and machine learning technologies.
Promote the ethical deployment of AI solutions.
Full-time|$159.1K/yr - $199.3K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technology. Established in 2017 and currently valued at $15 billion, this Silicon Valley-based company is building the essential digital infrastructure to infuse intelligence into every moving machine worldwide. We cater to industries such as automotive, defense, trucking, construction, mining, and agriculture through three primary sectors: tools and infrastructure, operating systems, and autonomy. Our solutions are trusted by 18 of the top 20 global automakers, along with the United States military and its allies, to deliver exceptional physical intelligence. Our headquarters is located in Sunnyvale, California, with additional offices across Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

We are an in-office company, expecting our employees to primarily work from their Applied Intuition office five days a week. We understand the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving earlier when needed to accommodate family commitments.

About the Role
We are in search of a skilled software engineer with extensive experience in optimizing machine learning models and deploying them in production-grade embedded runtime environments. Your expertise will span the entire ML framework stack, including PyTorch, JAX, ONNX, TensorRT, CUDA, XLA, and Triton.

At Applied Intuition, You Will:
Lead ML performance optimization across various technologies for both on-road and off-road ADAS/AD stacks aimed at deployment on a range of embedded computing platforms.
Devise compute usage strategies to enhance efficiency and minimize latency of model inference for compute boards chosen by our customers.
Engage in model pruning and quantization, ensuring successful deployment on memory-constrained platforms.
Collaborate closely with ML engineers and software developers to identify and optimize efficient model architecture solutions.
Establish methodologies to...
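The quantization work named above can be illustrated at its simplest: symmetric post-training int8 quantization maps floats to 8-bit integers with a per-tensor scale, then dequantizes at inference. The sketch below is a plain-Python illustration of the idea only; production toolchains such as TensorRT add calibration, per-channel scales, and quantization-aware fine-tuning.

```python
# Symmetric per-tensor int8 quantization: q = round(w / scale), scale = max|w| / 127.

def quantize_int8(weights):
    """Quantize a list of floats to int8 values plus a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero tensors
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values and the scale."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

The round trip loses at most half a quantization step per weight, which is the accuracy-for-memory trade that makes deployment on memory-constrained embedded boards feasible.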
Join our innovative team at Wayve as a Machine Learning Engineer specializing in Application Software. In this pivotal role, you will leverage your expertise in machine learning algorithms and software development to create cutting-edge applications that drive our technology forward. Collaborate with a diverse group of talented professionals to enhance our products and deliver exceptional solutions that meet our clients' needs.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our groundbreaking wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, combined with the programming simplicity of a unified device. This innovative approach allows us to offer unparalleled training and inference speeds, enabling machine learning practitioners to execute extensive ML applications seamlessly, without the complexities of managing multiple GPUs or TPUs.

Cerebras boasts an impressive clientele, including premier model labs, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aimed at deploying 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities.

Our unique wafer-scale architecture enables Cerebras Inference to provide the fastest Generative AI inference solution globally, surpassing GPU-based hyperscale cloud inference services by more than tenfold. This remarkable enhancement in speed is reshaping the AI application user experience, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
The Inference ML Engineering team at Cerebras Systems is committed to empowering our rapid generative inference solution through intuitive APIs, supported by a distributed runtime that operates on extensive clusters of our proprietary hardware. Our goal is to enable enterprises, developers, and researchers to fully harness the capabilities of our platform, leveraging its exceptional performance, scalability, and flexibility. The team collaborates closely with cross-functional groups, including compiler developers, cluster orchestrators, ML scientists, cloud architects, and product teams, to deliver impactful solutions that redefine the limits of ML performance and usability.

As a Senior Software Engineer on the Inference ML Engineering team, you will be instrumental in designing and implementing APIs, ML features, and tools that facilitate the execution of state-of-the-art generative AI models on our custom hardware. Your role will involve architecting solutions that allow for seamless model translation and execution, ensuring high throughput and minimal latency while maintaining user-friendliness. You will lead technical initiatives and collaborate with other engineering teams to enhance our solutions.
Join our innovative team at Intuitive as a Machine Learning Engineer, where you'll have the chance to work on cutting-edge AI technologies that are shaping the future. In this role, you will design, develop, and implement machine learning models that drive impactful solutions across various sectors.

As a critical member of our team, you will collaborate with data scientists and engineers to enhance our product offerings, ensuring they are not only effective but also scalable. This is an exceptional opportunity for those eager to leverage their skills in a thriving environment.
About the Internship Bosch Group is seeking an Automated Driving Machine Learning Intern in Sunnyvale, California. This role offers hands-on experience with real projects in automated driving and machine learning. Interns will apply academic skills to practical challenges in the field.
42dot is seeking a Senior Machine Learning Platform Engineer to support its work in autonomous driving technology. This position is based in Sunnyvale, United States. Role overview This role focuses on developing machine learning platforms that support autonomous vehicle systems. The work involves designing and building scalable infrastructure to handle complex ML workloads, with a strong emphasis on performance and reliability. What you will do Lead the creation and enhancement of machine learning solutions for autonomous driving applications. Design, implement, and maintain ML platforms to ensure they meet high standards for scalability and reliability. Requirements Extensive experience in building and maintaining machine learning platforms. Background in supporting ML solutions for autonomous vehicle technology or similar fields. Strong skills in designing scalable and high-performance systems.
Join Bosch as an enthusiastic intern and contribute to pioneering advancements in reinforcement learning and simulation for autonomous vehicle planning. This role focuses on innovative research and development of cutting-edge algorithms, conducting experiments, and translating groundbreaking ideas into viable products.

Internship Opportunities: Collaborate with a team of skilled researchers and engineers in one of the following domains:
• GPU-Accelerated Simulation for Reinforcement Learning: Design and improve high-performance, scalable simulation environments specifically for reinforcement learning applications in autonomous driving.
• ML-Based Planning Models Integration: Create, train, and embed planning models for autonomous driving, utilizing GPU-accelerated simulations to enhance performance in complex driving scenarios.
• Hybrid Learning Approaches: Innovate and enhance learning methodologies that integrate imitation and reinforcement learning, emphasizing multi-agent self-play techniques.

Key Responsibilities:
• Engage in transformative engineering projects that apply deep learning and reinforcement learning to resolve challenges in autonomous driving planning and simulation.
• Collaborate with an international team of experts to implement advanced research results into Bosch's business units, testing and validating concepts in simulated environments and with self-driving vehicles.
• Work alongside domain specialists to explore novel learning-based planning and decision-making strategies.
• Conduct benchmarking and validation of models using extensive datasets and simulations.
• Share research outcomes through comprehensive internal reports and potential external publications.
About the Institute of Foundation Models
We are an innovative research laboratory focused on the creation, comprehension, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and contribute significantly to a knowledge-driven economy.

Joining our team presents a unique opportunity to engage in the core of advanced foundation model training, collaborating with leading researchers, data scientists, and engineers as we address the most pivotal and influential challenges in AI advancement. Your work will involve the creation of groundbreaking AI solutions with the potential to revolutionize entire industries. Employing strategic and innovative problem-solving skills will be crucial in establishing MBZUAI as a premier global center for high-performance computing in deep learning, fostering remarkable discoveries that inspire future AI trailblazers.
Join us at Meshy as a Machine Learning Systems Intern, where your passion for AI, graphics, and innovative product development will thrive in a collaborative environment.

What We're Looking For:
• Commit to a full-time internship for a minimum of 12 weeks.
• Aim to transition to a full-time role at Meshy post-graduation (ideal candidates graduating between September 2026 and September 2027 are preferred).
• Open to candidates pursuing undergraduate, master's, or PhD degrees.
• A solid foundation in technical skills, coupled with a drive for innovation and a willingness to tackle challenges.

Your Role
As a key contributor to our team, you will assist in developing the most extensive end-to-end 3D-native machine learning systems. This role encompasses the entire ML framework, from pretraining to fine-tuning and inference. We seek individuals with robust hands-on engineering capabilities, a thirst for knowledge, and the ability to excel in a dynamic, ownership-driven setting.

About Us
At Meshy, we envision a world where 3D creation knows no limits. Our mission is to unleash creativity by offering a comprehensive 3D content pipeline, which includes transforming text and images into 3D models, texturing, editing, and animation rigging. We have cultivated a thriving community for creators, providing a platform to share work, draw inspiration, and utilize assets across projects. Recognized as the leader in 3D generative AI (top-ranked in the 2024 A16Z Games survey), our technology is embraced by industry giants like Meta, Square Enix, and DeepMind, impacting sectors like gaming, film, 3D printing, and robotics.

Your Next Challenge
3D is at the forefront of Generative AI, presenting unique challenges in training and inference.
Your journey with Meshy will involve a full stack of AI responsibilities, including debugging and monitoring hardware platforms, creating training frameworks, scaling high-throughput 3D data pipelines, and collaborating on innovative model architectures with our research team.
About the Institute of Foundation Models
We are a pioneering research laboratory focused on the development, understanding, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and make substantial contributions to a knowledge-driven economy.

Join us and collaborate with top-tier researchers, data scientists, and engineers on the forefront of foundation model training. Engage in solving critical challenges that can redefine entire sectors through advanced AI solutions. Your strategic and innovative problem-solving skills will play a vital role in positioning MBZUAI as an international leader in high-performance computing for deep learning, facilitating discoveries that will inspire future AI trailblazers.

The Role
We are seeking a skilled distributed ML infrastructure engineer to enhance and expand our training systems. You will collaborate closely with distinguished researchers and engineers to:
• Develop and scale distributed training frameworks (e.g., DeepSpeed, FSDP, FairScale, Horovod)
• Implement distributed optimizers based on mathematical specifications
• Create robust configuration and launching systems across multi-node, multi-GPU clusters
• Manage experiment tracking, metrics logging, and job monitoring for enhanced external visibility
• Enhance the reliability, maintainability, and performance of training systems

While much of your work will support large-scale pre-training, prior pre-training experience is not mandatory; strong infrastructure and systems expertise are our primary focus.

Key Responsibilities
• Distributed Framework Ownership – Extend or adapt training frameworks (e.g., DeepSpeed, FSDP) to accommodate new applications and architectures.
• Optimizer Implementation – Convert mathematical optimizer specifications into distributed implementations.
• Launch Config & Debugging – Develop and troubleshoot multi-node launch scripts with adaptable batch sizes and parallelism strategies.
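The "Optimizer Implementation" responsibility above — converting a mathematical specification into code — can be sketched in miniature. This is a generic, single-process illustration of textbook SGD with momentum; the class name and hyperparameters are illustrative, not any framework's or employer's API, and real distributed implementations would additionally shard state and synchronize gradients:

```python
# Minimal sketch: turning an optimizer's math into code.
# Update rule (classic SGD with momentum):
#   v_t = mu * v_{t-1} + g_t
#   w_t = w_{t-1} - lr * v_t
# Single-process and scalar-only; a distributed version would shard
# the velocity state and all-reduce gradients before the update.

class MomentumSGD:
    def __init__(self, params, lr=0.1, mu=0.9):
        self.params = params              # list of floats (stand-in for tensors)
        self.lr = lr
        self.mu = mu
        self.velocity = [0.0] * len(params)

    def step(self, grads):
        for i, g in enumerate(grads):
            self.velocity[i] = self.mu * self.velocity[i] + g
            self.params[i] -= self.lr * self.velocity[i]
        return self.params

# Minimize f(w) = w^2 (gradient 2w), starting from w = 1.0.
opt = MomentumSGD([1.0], lr=0.1, mu=0.9)
for _ in range(100):
    opt.step([2.0 * opt.params[0]])
print(opt.params[0])  # oscillates toward 0 as momentum decays
```

The point of the exercise is the one-to-one mapping between the stated update rule and the two lines inside `step`, which is what makes such implementations auditable against their specification.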
Cerebras Systems is at the forefront of AI technology, having developed the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables Cerebras to provide unparalleled training and inference speeds, allowing machine learning practitioners to seamlessly run large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras proudly serves a diverse clientele, including leading model labs, global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing key workloads with ultra-high-speed inference. Our groundbreaking wafer-scale architecture ensures that Cerebras Inference stands as the world's fastest solution for Generative AI inference, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable increase in speed is transforming the user experience of AI applications, enabling real-time iterations and enhancing intelligence through additional agentic computation.

About The Role
Cerebras is expanding its Machine Learning team to spearhead a new initiative that aligns with our existing teams. We are seeking a Principal Investigator to collaborate with our ML leaders in shaping this new effort while building the team and enhancing our capabilities. This new team will work in concert with our current ML divisions: Field ML, which directly engages with customers; Applied ML, which develops new ML capabilities and applications; and Core ML, which adapts ML algorithms to leverage the unique features of Cerebras hardware. The new team may undertake similar or complementary responsibilities.

The new team will focus on areas such as:
• Post-training and reinforcement learning: Enhancing model deployment quality through advanced training, tuning, and reinforcement learning techniques, concentrating on specific downstream tasks;
• Dataset curation and optimization: Implementing strategies to gather and select high-quality data, facilitating quicker and higher-quality model training and tuning;
• LLM Pretraining: Engaging in...
Illumio builds technology to contain ransomware and security breaches, helping organizations defend against cyber threats. The Illumio AI Security Graph underpins a platform that spots and contains threats in hybrid multi-cloud setups, aiming to stop attacks before they spread. Illumio is recognized as a leader in microsegmentation and supports Zero Trust architectures for critical infrastructure.

The engineering team focuses on advancing cybersecurity through leadership, autonomy, and a strong sense of ownership. Engineers here develop and maintain a scalable SaaS platform using cloud-native tools, with deployments in both cloud and on-premises environments. Precision, quality, and collaboration shape the team's work, and engineers are encouraged to take initiative at every level.

This Senior Machine Learning Engineer role is based onsite at Illumio's Sunnyvale headquarters. The position centers on designing and scaling systems that power Illumio's AI-driven security platform. Work involves handling large-scale data, distributed systems, and building advanced AI agents.

Key Responsibilities
• Design and optimize high-throughput, event-driven systems with Apache Kafka to support real-time data flows.
• Develop and maintain large-scale data pipelines using Apache Spark or Flink for high-volume analytics and AI features.
• Create advanced AI agents that handle autonomous planning, memory management, and reliable tool use in distributed environments.
• Lead architectural design for containerized services on Kubernetes, focusing on availability and scalability across cloud platforms such as AWS, Azure, and GCP.
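The event-driven pattern named in the first responsibility above can be sketched with a minimal producer/consumer loop. This is a pure-Python stand-in under stated assumptions: an in-memory `queue.Queue` plays the role of a Kafka topic and the payloads are made-up event names; real Kafka use goes through a client library's poll loop, not this code:

```python
# Minimal sketch of an event-driven consumer loop. The in-memory queue
# is a stand-in for a partitioned topic; payloads are illustrative.
import queue
import threading

topic = queue.Queue()      # stand-in for a Kafka topic
SENTINEL = object()        # signals end of stream for this sketch
results = []

def consumer():
    while True:
        event = topic.get()
        if event is SENTINEL:
            break
        # "process" the event: here, just record its payload size
        results.append(len(event))
        topic.task_done()

t = threading.Thread(target=consumer)
t.start()
for payload in ["connect", "flow-update", "policy-change"]:
    topic.put(payload)
topic.put(SENTINEL)
t.join()
print(results)  # [7, 11, 13]
```

The shape is the relevant part: producers and consumers are decoupled by the queue, which is what lets an event-driven system absorb bursty real-time data flows.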
Full-time|$126K/yr - $423K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technologies. Established in 2017 and currently valued at $15 billion, this Silicon Valley powerhouse is dedicated to developing the essential digital infrastructure that will empower intelligent operations in every vehicle and machine worldwide. Our innovative solutions cater to the automotive, defense, trucking, construction, mining, and agriculture sectors, focusing on three pivotal areas: tools and infrastructure, operating systems, and autonomous capabilities. Our reputation is underscored by the trust placed in us by 18 of the top 20 global automakers and the United States military, among others. Our headquarters is in Sunnyvale, California, with additional offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

We promote a collaborative in-office culture and expect our team members to primarily work from their Applied Intuition office five days a week. However, we also value flexibility, allowing our employees to manage their schedules responsibly, which may include occasional remote work, starting the day with morning meetings from home, or leaving early to meet family obligations.

About the Role and Team
We are seeking enthusiastic Research Scientists to join our dynamic Research Group at Applied Intuition. Our mission is to develop pioneering technologies that drive the evolution of physical AI, particularly in two transformative applications: end-to-end autonomous driving and general-purpose robotics. Our team comprises distinguished experts from leading institutions and companies, celebrated for their remarkable contributions to both academia and industry, including eight Best Paper awards at prestigious conferences such as CVPR and ICRA.
Learn more about our research initiatives at appliedintuition.com/research. With access to industry-leading tools and infrastructure, our researchers can leverage millions of miles of data from extensive fleets and implement their innovative methods across diverse autonomous and robotic systems, including self-driving vehicles and autonomous machinery.
About the Institute of Foundation Models
We are a pioneering research lab focused on the development, understanding, application, and risk management of foundation models. Our mission is to propel research forward, cultivate the next generation of AI innovators, and make significant contributions to a knowledge-driven economy.

Join our dynamic team and engage in the heart of innovative foundation model training, collaborating with top-tier researchers, data scientists, and engineers. Tackle groundbreaking challenges in AI development and contribute to transformative AI solutions that have the potential to revolutionize industries. Your strategic and innovative problem-solving skills will be vital in establishing MBZUAI as a global center for high-performance computing in deep learning, enabling impactful discoveries that inspire the future of AI innovation.

Role Overview
Develop and Enhance Distributed Pre-Training Frameworks
· Implement DeepSpeed / FSDP / Megatron-LM on multi-node GPU clusters.
· Design robust launch scripts, resilient checkpoints, and job monitoring systems (e.g., NCCL/GLOO/GPU).

Transform Mathematical Concepts into High-Performance Production Code
· Prototype novel optimizers or attention mechanisms using PyTorch/NumPy/JAX or similar frameworks.
· Convert prototypes into efficient CUDA/Triton kernels with custom gradients and performance tests.

Enhance Training Efficiency and Stability
· Lead efforts in mixed-precision training, integrating bf16, fp8, etc., into regular workflows while assessing accuracy versus speed improvements and analyzing numerical stability.
· Utilize kernel fusion, communication tuning, and memory optimization to achieve state-of-the-art throughput.

Accelerate Research Progress
· Develop logging and metrics systems, along with experiment-tracking tools, to facilitate rapid iteration.
· Design ablation studies and statistical tests that validate or challenge new concepts.
· Guide interns and junior engineers through clear asynchronous design documentation and code reviews.

You will collaborate closely with researchers, deliver production code, and shape the landscape of large language models.
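The mixed-precision work described above rests on a simple fact: bf16 keeps a float32's sign and 8-bit exponent but only 7 mantissa bits, trading precision for range and throughput. A minimal pure-Python sketch of that rounding, assuming round-half-up truncation of the low 16 bits and ignoring NaN/Inf edge cases (real training does this in hardware and framework code, not like this):

```python
# Minimal sketch of bf16 rounding: keep a float32's sign, exponent,
# and top 7 mantissa bits. Round-half-up; NaN/Inf handling omitted.
import struct

def to_bf16(x: float) -> float:
    # reinterpret as 32-bit pattern, round, drop the low 16 bits
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    bits = (bits + 0x8000) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

x = 1.2345678
y = to_bf16(x)
print(y, abs(x - y))  # error is bounded by half a bf16 ulp (~2^-8 near 1.0)
```

Powers of two survive the round trip exactly, while values like the one above lose everything below roughly three decimal digits; quantifying that loss against speed gains is exactly the accuracy-versus-throughput analysis the role describes.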
Full-time|$204K/yr - $343K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technologies. Established in 2017 and currently valued at $15 billion, our Silicon Valley-based company is dedicated to creating the essential digital infrastructure that empowers intelligence across all moving machines globally. Our solutions serve critical sectors including automotive, defense, trucking, construction, mining, and agriculture, focusing on three main domains: tools and infrastructure, operating systems, and autonomy. Trusted by 18 of the top 20 global automakers, as well as the U.S. military and its allies, Applied Intuition is committed to delivering unparalleled physical intelligence solutions. Our headquarters is located in Sunnyvale, California, complemented by offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. For more information, visit applied.co.

We prioritize in-office collaboration and expect our employees to work primarily from our Applied Intuition office five days a week. However, we value flexibility and trust our team members to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving early when necessary to accommodate personal commitments.

About the Role
As an Engineering Manager on our Machine Learning Platform team, you will lead an exceptional group of engineers dedicated to building the infrastructure that enables Physical AI at scale.
Your team will oversee three pivotal areas: Training & Inference Orchestration, where we develop frameworks to efficiently schedule and execute extensive tasks across thousands of GPUs; GPU Cluster Architecture, where we design and expand what will become the industry's largest GPU cluster for Physical AI; and Performance Optimization, where we maximize hardware utilization, throughput, and cost efficiency for large-scale training and inference workloads. You will collaborate at the intersection of systems engineering and machine learning, working directly with stack development and research teams to eliminate bottlenecks and expedite the transition from experimentation to production.
Full-time|$125K/yr - $222K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technology. Established in 2017 and currently valued at $15 billion, this Silicon Valley powerhouse is creating the essential digital framework that will infuse intelligence into every moving machine worldwide. Our solutions serve key sectors including automotive, defense, trucking, construction, mining, and agriculture, focusing on three main areas: tools and infrastructure, operating systems, and autonomy. Our trusted solutions are utilized by 18 of the top 20 global automakers and the United States military along with its allies. Our headquarters is located in Sunnyvale, California, with additional offices in Washington, D.C., San Diego, Ft. Walton Beach, Florida, Ann Arbor, Michigan, London, Stuttgart, Munich, Stockholm, Bangalore, Seoul, and Tokyo. Find out more at applied.co.

We prioritize in-office collaboration, expecting our employees to work from their Applied Intuition office five days a week, while also embracing flexibility. Employees are trusted to manage their schedules, which may include occasional remote work, starting the day with morning meetings from home, or leaving early for family commitments.

About the role
We are searching for a talented software engineer specializing in perception for autonomous vehicles or mobile robotics. Your role will involve enhancing perception modules within our autonomous vehicle framework, including the development of 4D world representations to facilitate seamless autonomy. You will also lead the design and implementation of computer vision and machine learning strategies that empower self-driving vehicles to navigate effectively.

In this dynamic and customer-centric team environment, you will not only contribute your engineering skills but also gain insights into best practices within the evolving autonomy sector. Our fast-paced culture encourages innovation and collaboration.
At Cerebras Systems, we are at the forefront of AI technology, developing the world's largest AI chip that is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables the computational power of dozens of GPUs on a single chip, simplifying programming to the ease of handling one device. This unique design allows us to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly deploy large-scale ML applications without the complexity of managing numerous GPUs or TPUs.

Our clientele includes leading model labs, global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras aimed at leveraging 750 megawatts of scale to revolutionize critical workloads through ultra-high-speed inference. Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over tenfold. This significant boost in speed is transforming the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through added agentic computation.

About The Role
As an Applied Machine Learning Research Scientist at Cerebras, you will be instrumental in converting modern machine learning methodologies into scalable, high-performance systems. This position focuses on the intersection of modeling and systems, emphasizing the efficient execution of existing algorithms rather than merely publishing new ones. Your efforts will significantly influence the training, optimization, and deployment of large language models (LLMs) on one of the most sophisticated AI platforms in existence.

You will collaborate closely with fellow researchers and senior engineers to enhance workflows for LLM pretraining, fine-tuning, and reinforcement learning-based post-training.
Your responsibilities will encompass building training pipelines, debugging complex system behaviors, improving model quality, and refining data and evaluation strategies. Your contributions will have a direct and meaningful impact on advancing our capabilities in AI.
We are seeking a dynamic and experienced Manager for our AI and Machine Learning team at LinkedIn. In this role, you will lead a talented group of engineers and data scientists dedicated to developing cutting-edge solutions that enhance user experience and drive engagement across the platform. Your leadership will be crucial in shaping the direction of our AI initiatives, ensuring they align with our mission to connect the world's professionals.The ideal candidate will possess a strong background in machine learning algorithms, data analysis, and software development, as well as exceptional communication skills to effectively collaborate with cross-functional teams. If you are passionate about leveraging AI to create impactful solutions, we want to hear from you!
Full-time|On-site|Sunnyvale, CA; San Francisco, CA; Seattle, WA
Join our ETA Team at DoorDash as a Machine Learning Engineer. In this dynamic role, you will harness the power of machine learning to drive innovation and optimize our delivery processes. You will collaborate with cross-functional teams to develop and deploy scalable machine learning models that enhance customer experience and operational efficiency.
Full-time|$155K/yr - $200K/yr|On-site|Sunnyvale, CA
At Bee Genius, we are pioneering the future of work today with innovative AI solutions that transform industries.

Job Overview: We are looking for a talented AI/Machine Learning Engineer to become a vital part of our dynamic team. In this role, you will leverage your expertise to develop and deploy cutting-edge machine learning models and algorithms aimed at addressing complex business challenges.

Key Responsibilities:
• Design, build, and refine machine learning models and algorithms.
• Train and assess models using extensive datasets.
• Optimize models for enhanced performance and accuracy.
• Collaborate with data scientists and software engineers to integrate models into operational systems.
• Stay abreast of the latest trends in AI and machine learning technologies.
• Promote the ethical deployment of AI solutions.
Full-time|$159.1K/yr - $199.3K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technology. Established in 2017 and currently valued at $15 billion, this Silicon Valley-based company is building the essential digital infrastructure to infuse intelligence into every moving machine worldwide. We cater to industries such as automotive, defense, trucking, construction, mining, and agriculture through three primary sectors: tools and infrastructure, operating systems, and autonomy. Our solutions are trusted by 18 of the top 20 global automakers, along with the United States military and its allies, to deliver exceptional physical intelligence. Our headquarters is located in Sunnyvale, California, with additional offices across Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

We are an in-office company, expecting our employees to primarily work from their Applied Intuition office five days a week. We understand the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving earlier when needed to accommodate family commitments.

About the Role
We are in search of a skilled software engineer with extensive experience in optimizing machine learning models and deploying them in production-grade embedded runtime environments.
Your expertise will span the entire ML framework stack, including PyTorch, JAX, ONNX, TensorRT, CUDA, XLA, and Triton.

At Applied Intuition, You Will:
• Lead ML performance optimization across various technologies for both on-road and off-road ADAS/AD stacks aimed at deployment on a range of embedded computing platforms.
• Devise compute usage strategies to enhance efficiency and minimize latency of model inference for compute boards chosen by our customers.
• Engage in model pruning and quantization, ensuring successful deployment on memory-constrained platforms.
• Collaborate closely with ML engineers and software developers to identify and optimize efficient model architecture solutions.
• Establish methodologies to...
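The pruning-and-quantization work mentioned above can be illustrated with a minimal symmetric int8 post-training quantization sketch. This is a pure-Python illustration under simplified assumptions (per-tensor scale, no calibration data); production flows use toolchain support such as TensorRT or ONNX Runtime rather than hand-rolled code:

```python
# Minimal sketch of symmetric per-tensor int8 quantization: map each
# weight to an integer in [-128, 127] via a single scale factor, then
# dequantize and measure the round-trip error.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.42, -1.27, 0.05, 0.9]           # illustrative weight values
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, err)  # round-trip error is bounded by scale / 2
```

Storing 8-bit integers instead of 32-bit floats is what makes this attractive on memory-constrained embedded platforms; the trade-off is exactly the bounded round-trip error measured here.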
Join our innovative team at Wayve as a Machine Learning Engineer specializing in Application Software. In this pivotal role, you will leverage your expertise in machine learning algorithms and software development to create cutting-edge applications that drive our technology forward. Collaborate with a diverse group of talented professionals to enhance our products and deliver exceptional solutions that meet our clients' needs.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our groundbreaking wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, combined with the programming simplicity of a unified device. This innovative approach allows us to offer unparalleled training and inference speeds, enabling machine learning practitioners to execute extensive ML applications seamlessly, without the complexities of managing multiple GPUs or TPUs.

Cerebras boasts an impressive clientele, including premier model labs, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aimed at deploying 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities. Our unique wafer-scale architecture enables Cerebras Inference to provide the fastest Generative AI inference solution globally, surpassing GPU-based hyperscale cloud inference services by more than tenfold. This remarkable enhancement in speed is reshaping the AI application user experience, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
The Inference ML Engineering team at Cerebras Systems is committed to empowering our rapid generative inference solution through intuitive APIs, supported by a distributed runtime that operates on extensive clusters of our proprietary hardware. Our goal is to enable enterprises, developers, and researchers to fully harness the capabilities of our platform, leveraging its exceptional performance, scalability, and flexibility.
The team collaborates closely with cross-functional groups, including compiler developers, cluster orchestrators, ML scientists, cloud architects, and product teams, to deliver impactful solutions that redefine the limits of ML performance and usability.

As a Senior Software Engineer on the Inference ML Engineering team, you will be instrumental in designing and implementing APIs, ML features, and tools that facilitate the execution of state-of-the-art generative AI models on our custom hardware. Your role will involve architecting solutions that allow for seamless model translation and execution, ensuring high throughput and minimal latency while maintaining user-friendliness. You will lead technical initiatives and collaborate with other engineering teams to enhance our solutions.
Join our innovative team at Intuitive as a Machine Learning Engineer, where you'll have the chance to work on cutting-edge AI technologies that are shaping the future. In this role, you will design, develop, and implement machine learning models that will drive impactful solutions across various sectors.

As a critical member of our team, you will collaborate with data scientists and engineers to enhance our product offerings, ensuring they are not only effective but also scalable. This is an exceptional opportunity for those eager to leverage their skills in a thriving environment.