Experience Level
Mid to Senior
About the job
Job Title: System Administrator
Location: Sunnyvale, CA
Key Responsibilities:
Manage and administer Active Directory and Windows infrastructure.
Oversee Office 365 and Exchange email administration.
Diagnose and resolve issues related to server hardware and operating systems.
Install, configure, and maintain server infrastructure and associated equipment.
Support and maintain Data Center operations.
Essential Skills and Experience:
Profound knowledge of Active Directory and Windows Infrastructure.
Proven experience in building Active Directory Domains and Forests.
Experience upgrading Active Directory Domains (from 2008 to 2012).
Extensive implementation experience with Windows Infrastructure components such as ADFS, WSUS, DNS, and DHCP; experience with CA and NPS is a plus.
Minimum of 5 years of experience with Windows Server Operating Systems.
At least 3 years of experience with VMware vSphere.
Proficient in PowerShell scripting.
Strong understanding of DNS, DHCP, IIS, Group Policy, and WSUS.
Hands-on experience with Dell Server Hardware, preferably Dell Blades, including DRAC and OpenManage.
Kaseya seeks an Executive Assistant in Sunnyvale, CA to support its executive team. This position plays a key role in handling administrative work that helps daily operations run efficiently.
Key responsibilities:
Oversee complex calendars and schedules for executives
Arrange meetings, manage logistics, and prepare relevant materials
Act as a communication link between executives and other teams
Assist with coordinating tasks and projects as needed
What we look for:
Strong organizational skills
Ability to anticipate needs and show initiative
Comfort with shifting priorities
Experience supporting executives or senior leaders is considered a plus
This role is located on-site in Sunnyvale, CA.
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA
About the Team
At Taara, we are dedicated to transforming the world of connectivity. Originating from X, Google's Moonshot Factory, our mission is to bridge the digital divide by delivering high-speed, affordable internet through innovative light-based technologies. Join us in revolutionizing wireless optical communication and photonics chip technologies as we expand our reach globally.
About the Role
We are seeking a Systems Software Test Engineer to spearhead the automation and validation efforts for our comprehensive systems. This role is pivotal: it involves not just testing APIs but also architecting frameworks to ensure our software effectively controls high-precision hardware in real time. You will play a crucial role in connecting software development with hardware reliability, ensuring that each code commit enhances our network's performance in real-world scenarios.
Your Impact:
Automated Framework Design: Develop and implement scalable automation frameworks to rigorously test embedded software and cloud integrations.
Hardware-in-the-Loop Testing: Create and maintain HIL test benches, enabling real-time software interactions with physical hardware units under simulated field conditions.
CI/CD Integration: Lead the incorporation of automated tests into the CI/CD pipeline, establishing quality gates for every software build.
Comprehensive Data Path Validation: Collaborate with Cloud and Embedded teams to validate the data flow from the physical optical link to the backend monitoring dashboard.
Performance Regression Testing: Develop automated regression suites to identify performance regressions impacting network throughput, latency, and stability.
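To make the "quality gates" and regression-testing responsibilities above concrete, here is a minimal sketch of what a CI performance gate for a link could look like. All names, metrics, and thresholds are illustrative assumptions, not Taara's actual tooling:

```python
# Hypothetical CI quality gate: fail the build if any link metric
# regresses more than a tolerance relative to a recorded baseline.
from dataclasses import dataclass


@dataclass
class LinkMetrics:
    throughput_mbps: float  # measured network throughput
    latency_ms: float       # round-trip latency
    uptime_pct: float       # link stability over the test window


def passes_gate(baseline: LinkMetrics, candidate: LinkMetrics,
                max_regression: float = 0.05) -> bool:
    """Return False if any metric regresses more than max_regression
    (5% by default) against the baseline."""
    if candidate.throughput_mbps < baseline.throughput_mbps * (1 - max_regression):
        return False
    if candidate.latency_ms > baseline.latency_ms * (1 + max_regression):
        return False
    if candidate.uptime_pct < baseline.uptime_pct * (1 - max_regression):
        return False
    return True


baseline = LinkMetrics(throughput_mbps=9800.0, latency_ms=1.2, uptime_pct=99.9)
good = LinkMetrics(throughput_mbps=9750.0, latency_ms=1.25, uptime_pct=99.8)
bad = LinkMetrics(throughput_mbps=8000.0, latency_ms=1.2, uptime_pct=99.9)

print(passes_gate(baseline, good))  # True
print(passes_gate(baseline, bad))   # False
```

In a real pipeline a check like this would run against HIL bench results after each build, with the baseline updated on intentional performance changes.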
Cerebras Systems is pioneering the realm of artificial intelligence with the world’s largest AI chip, boasting a size 56 times greater than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.
Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference.
Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over ten times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.
About The Role
We are seeking a dynamic Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a dynamic environment.
You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls necessary for a company of our maturity. You will ensure that our highly technical and fast-paced engineering workforce remains unimpeded while fortifying the environment to meet the standards expected of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.
Cerebras Systems is pioneering the future of artificial intelligence with the development of the world's largest AI chip, which is an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture delivers the AI computational power of dozens of GPUs on a single chip, while maintaining programming simplicity akin to that of a single device. This state-of-the-art approach allows us to deliver unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.
We proudly serve a diverse clientele, including leading model labs, multinational corporations, and innovative AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference.
With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable enhancement in speed is transforming the user experience for AI applications, unlocking real-time iteration and enriching intelligence through enhanced computational capabilities.
About The Role
As a Network Architect on the Cluster Architecture Team, you will collaborate closely with vendors, internal networking teams, and industry experts to create top-tier interconnect architecture for both current and future generations of Cerebras AI clusters. Your responsibilities will include developing proof-of-concept designs for new network features that promote a resilient and reliable network tailored for AI workloads.
This role demands cross-functional collaboration and engagement with a variety of hardware components, including network devices and the Wafer-Scale Engine, as well as software across multiple layers of the stack, from host-side networking to cluster-level coordination. A strong understanding of network monitoring systems and debugging methodologies is essential.
Responsibilities
Design AI/ML and HPC clusters.
Identify and mitigate performance or efficiency bottlenecks, ensuring optimal resource utilization, low latency, and high-throughput communication.
Lead technical projects involving multiple teams and diverse software and hardware components to realize advanced network solutions.
Full-time|$150K/yr - $260K/yr|On-site|Sunnyvale, CA
Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables us to achieve unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.
Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.
Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.
About The Role
We are in search of a talented Compiler Engineer to contribute to the design and implementation of new features within our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our wafer-scale engine (WSE). The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is further compiled by a dedicated LLVM mid-end/backend into executable files.
Responsibilities
Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
Develop and enhance abstraction layers between the CSL language and the underlying hardware.
About the Role
Intuitive Surgical, Inc. is seeking a Mac Systems Administrator to support and maintain the company’s Mac environment. This role focuses on keeping our IT infrastructure reliable and efficient for teams working on medical technology solutions.
Location
Sunnyvale, California
About Ceribell
Ceribell is a pioneering medical technology company dedicated to revolutionizing the diagnosis and management of patients with serious neurological conditions. Our flagship product, the Ceribell System, is an innovative point-of-care electroencephalography (EEG) platform that addresses the unmet needs of patients in acute care settings. This technology is currently utilized in numerous community hospitals, prominent academic institutions, and major integrated delivery networks (IDNs) across the nation. Our team is united by a deep commitment to transforming critical care through our rapid seizure detection technology. Join us in this impactful movement!
Position Overview
The Contracts Manager, reporting directly to the Head of Contracts, will oversee the comprehensive management and compliance of corporate, customer, vendor, and government contracts. This pivotal role serves as a vital link between various departments, including Sales, Marketing, HR, IT, Engineering, Clinical Affairs, and Finance, ensuring all agreements align with internal policies and overarching business objectives. The ideal candidate will manage the complete contract lifecycle, ensuring strict adherence to federal, state, and local regulations. A key responsibility of this position is to maintain SOX compliance and enforce robust internal controls, guaranteeing that all contractual obligations meet the rigorous reporting standards expected of a public company.
Key Responsibilities:
Negotiation & Strategy: Engage in the negotiation of both straightforward and complex customer, commercial, and strategic agreements, including GPO, IDN, Data Privacy/Security, Clinical Study, partnership, and vendor agreements, while protecting the company's interests and balancing business, financial, and legal objectives.
Advisory & Compliance: Keep Legal, Sales, Operations, and Executive teams informed about legal and healthcare compliance matters (e.g., AKS/Stark) and negotiated terms, utilizing a solid understanding of how contract terms affect revenue recognition.
Complex Problem Solving: Address complex issues of broad scope requiring thorough analysis of various factors, collaborating with the Head of Contracts and exercising independent judgment when escalating concerns.
CLM Optimization: Act as a power user for the Ironclad CLM; pinpoint opportunities to automate workflows, enhance template modularity, and streamline the quote-to-contract lifecycle.
Process Improvement: Lead the design, implementation, and communication of process enhancements, enforce existing contracting rules, and provide a clear channel for resolving legal inquiries.
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA
About Our Team:
At Taara, a visionary initiative from X, Google's Moonshot Factory, we are dedicated to connecting billions who currently lack reliable and affordable internet access. Our innovative approach harnesses the power of light to provide faster, more cost-effective, and dependable connectivity solutions. Join us in revolutionizing the future of wireless optical communication and photonics chip technology as we strive to bridge the digital divide for communities around the globe.
Role Overview:
We are in search of a talented Linux System Software Engineer to pioneer the development of the next-generation operating system for wireless broadband networks. The successful candidate will bring extensive experience in building system-level software within a Linux environment, be adept in multi-threaded, state-machine, and event-driven programming, and be familiar with network monitoring protocols (such as gRPC, SNMP, OpenTelemetry). Proficiency in integrating with cloud-based back-end systems, along with strong skills in data structures, algorithms, and programming languages including Golang, Python, and C/C++ for development and unit/regression testing, will be crucial for this role.
Your Impact:
Design and implement robust system-level software applications for IoT/network devices.
Collaborate with cloud/backend engineers to develop visualization and troubleshooting tools.
Stay informed about emerging IoT device trends and technologies.
Work closely with engineers and operators to establish telemetry and monitoring solutions.
Diagnose and resolve challenges encountered during large-scale field deployments.
Qualifications:
Bachelor's degree in Computer Science, Computer Networking, Electrical Engineering, or a related discipline.
5+ years of experience in application development on Linux-based operating systems, particularly within telecommunications, ISP, or networking sectors.
Proficiency in Golang, Python, and C/C++ for both development and testing.
Familiarity with streaming and monitoring protocols and strategies for service provider networks (e.g., gRPC, OpenTelemetry, SNMP).
Experience with multi-threaded programming techniques, state-machine design, and event-driven programming.
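The state-machine and event-driven style this posting asks for can be illustrated with a tiny sketch. The states, events, and controller below are hypothetical examples for a generic wireless link, not Taara's actual software (which the posting suggests would be in Golang or C/C++; Python is used here only for brevity):

```python
# Minimal event-driven state machine for a hypothetical wireless link.
from enum import Enum, auto


class LinkState(Enum):
    DOWN = auto()       # link not established
    ACQUIRING = auto()  # searching for / locking onto the peer
    UP = auto()         # link carrying traffic


# (current state, event) -> next state; unknown events leave state unchanged
TRANSITIONS = {
    (LinkState.DOWN, "start"): LinkState.ACQUIRING,
    (LinkState.ACQUIRING, "locked"): LinkState.UP,
    (LinkState.ACQUIRING, "timeout"): LinkState.DOWN,
    (LinkState.UP, "signal_lost"): LinkState.ACQUIRING,
}


class LinkController:
    def __init__(self) -> None:
        self.state = LinkState.DOWN

    def handle(self, event: str) -> LinkState:
        """Apply one event; unrecognized events are ignored."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state


ctrl = LinkController()
for ev in ["start", "locked", "signal_lost", "locked"]:
    print(ev, "->", ctrl.handle(ev).name)
```

Keeping the transition table explicit, rather than scattering `if` chains, is the usual way to make such controllers testable and easy to extend with telemetry hooks.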
At DigitalFish, we are committed to empowering our clients by delivering cutting-edge technologies that revolutionize digital media creation and consumption for millions of users. We collaborate with top-tier digital media organizations, positioning ourselves at the forefront of their initiatives to develop innovative platforms and immersive experiences. Our clients include industry leaders such as Apple, Google, Meta, Disney, DreamWorks, Activision, Technicolor, ESPN, LEGO, NASA, and many more.
YOUR ROLE
As a vital member of our agile team, you will be instrumental in advancing imaging technology for camera capture and crafting augmented reality experiences that enrich human perception. Your projects will involve camera color processing, enhancing features related to human color perception, and using tools like Unity and Blender for object rendering.
Full-time|$120K/yr - $190K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Founded in 2017 and currently valued at $15 billion, Applied Intuition, Inc. is at the forefront of physical AI, developing the digital infrastructure essential for infusing intelligence into every moving machine worldwide. Our solutions cater to diverse sectors, including automotive, defense, trucking, construction, mining, and agriculture, focusing on tools and infrastructure, operating systems, and autonomy. With 18 of the top 20 global automakers and the U.S. military among our trusted clients, we are dedicated to delivering exceptional physical intelligence solutions. Our headquarters are located in Sunnyvale, California, with additional offices in Washington, D.C.; San Diego; Fort Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.
At Applied Intuition, we thrive on in-office collaboration and expect our employees to work from their office five days a week. We appreciate the need for flexibility and empower our team members to manage their schedules responsibly, which may include occasional remote work, starting the day with remote meetings, or adjusting hours for personal commitments.
About the Role
We are seeking a skilled GTM Systems Administrator to join our Revenue Strategy & Operations team and enhance the technical framework of our go-to-market technology stack, with a primary emphasis on Salesforce. In this pivotal position, you will report to the Director of Revenue Operations & Strategy and collaborate closely with sales, marketing, finance, legal, and customer success teams to craft and implement scalable solutions that drive our growth. Your role will bridge business strategy and technical execution, transforming complex go-to-market requirements into effective system configurations and integrations. This is a high-impact position with a direct influence on customer engagement and revenue generation.
At Applied Intuition, You Will:
Design, develop, and manage both standard and custom Salesforce configurations, including Apex classes, triggers, and Lightning Web Components.
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers exceptional AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables us to provide industry-leading training and inference speeds, allowing machine learning practitioners to run extensive ML applications seamlessly without the complexities of managing multiple GPUs or TPUs.
Our clientele includes renowned model labs, leading global enterprises, and innovative AI-first startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, leveraging 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.
Thanks to our cutting-edge wafer-scale technology, Cerebras Inference offers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than typical GPU-based hyperscale cloud services. This significant speed enhancement is reshaping the user experience in AI applications, enabling real-time iteration and enhancing intelligence through advanced computation.
About The Role
As a Senior Mechanical Engineer at Cerebras, you will spearhead the design of innovative mechanical systems for our next-generation wafer-scale engine. Your key responsibilities will encompass ensuring adherence to specifications, validating manufacturability, and delivering high-quality products in a dynamic environment, addressing some of the most intricate challenges in the rapidly advancing AI landscape. In this role, you will be instrumental in developing the mechanical infrastructure for Cerebras' custom hardware systems.
Responsibilities:
Rapidly iterate on designs and analyses to inform high-level systems decisions and guide the overall product strategy.
Provide extensive support for environmental and performance testing on hardware, validate analyses, and ensure compliance with design criteria.
Take ownership of technical deliverables.
Conduct first-article inspections and functional analyses, identifying and resolving issues as they arise.
Collaborate closely with design, manufacturing, production, diagnostics, and embedded software engineering teams, contractors, and suppliers.
Perform detailed structural analyses and simulations to optimize designs.
Join our dynamic team as a Part-time Visual Merchandiser in Sunnyvale, CA. In this role, you will be responsible for creating visually appealing displays that attract customers and enhance their shopping experience. Your creativity and attention to detail will play a pivotal role in embodying our brand's aesthetic and driving sales.
Full-time|$150K/yr - $270K/yr|On-site|Sunnyvale, CA
Cerebras Systems is revolutionizing the world of artificial intelligence with our groundbreaking wafer-scale chip, which is 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs.
Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads.
With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.
About The Role
Join our dedicated physical design team as a 3D Physical Design Engineer, where you will focus on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development efforts on innovative concepts for 3D integration.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.
Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference.
Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.
About The Role
We are seeking a Senior Performance Analyst to join our Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from first principles with competitive intelligence. The position revolves around two key pillars:
Performance Benchmarking: Develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads, including metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).
Competitive Analysis: Analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
About the Institute of Foundation Models
We are a pioneering research laboratory focused on developing, understanding, utilizing, and managing foundation models. Our mission is to propel research, cultivate the next generation of AI innovators, and create transformative impact within a knowledge-driven economy.
Join our team and seize the opportunity to engage in groundbreaking foundation model training, collaborating with elite researchers, data scientists, and engineers to address the most pressing challenges in AI development. You will contribute to the creation of innovative AI solutions with the potential to revolutionize industries. Your strategic and creative problem-solving abilities will play a crucial role in establishing MBZUAI as a global center for high-performance computing in deep learning, fostering discoveries that will motivate future AI trailblazers.
The Role
As a Machine Learning Engineer at the Institute of Foundation Models, your main duty will be to design and implement cutting-edge machine learning models that tackle real-world issues, pushing the limits of artificial intelligence research. You will work collaboratively with diverse teams to deploy scalable solutions, furthering MBZUAI’s goal of driving significant AI advancements and solidifying the institution’s status as a leader in the international AI research community. Your expertise will be vital in enhancing the performance of large-scale machine learning models and aiding the development of transformative AI tools that can reshape industries globally.
Full-time|$175K/yr - $275K/yr|On-site|Sunnyvale, CA
Cerebras Systems is a pioneer in AI technology, renowned for creating the world’s largest AI chip, which is an astounding 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides unparalleled AI computing capabilities equivalent to dozens of GPUs on a single chip, while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing vast arrays of GPUs or TPUs.
Cerebras' clientele includes leading model laboratories, global enterprises, and cutting-edge AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, enhancing transformative workloads through ultra-high-speed inference utilizing 750 megawatts of scale.
With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This significant speed enhancement is revolutionizing the user experience in AI applications, enabling real-time iterations and boosting intelligence through advanced computational capabilities.
About The Role
As the Lead RTL Design Engineer, you will play a pivotal role on our team responsible for designing and developing the next iterations of the Cerebras Wafer Scale Engine (WSE). This position demands extensive expertise in RTL design and integration, with a strong emphasis on delivering high-performance, power-efficient, and scalable solutions. Additionally, you will oversee collaboration with external ASIC vendors and work closely with design verification, physical design, software, and system teams to translate innovative semiconductor architectures from concept to production, addressing the unique challenges of building WSE systems.
Full-time|On-site|Sunnyvale, California, United States
About Applied Intuition
Applied Intuition, Inc. is at the forefront of advancing physical AI technology. Established in 2017 and currently valued at $15 billion, this Silicon Valley company is dedicated to developing the digital infrastructure necessary to infuse intelligence into every mobile machine globally. Applied Intuition serves various sectors, including automotive, defense, trucking, construction, mining, and agriculture, focusing on three main areas: tools and infrastructure, operating systems, and autonomy. Our solutions are trusted by 18 of the top 20 global automakers, as well as the U.S. military and its allies, to deliver cutting-edge physical intelligence. Headquartered in Sunnyvale, California, we also have offices in Washington, D.C.; San Diego; Fort Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.
We prioritize in-office collaboration, expecting our employees to work from their Applied Intuition office five days a week. However, we appreciate the significance of flexibility and trust our team to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home, or adjusting hours for family commitments.
About the Role
Join Applied Intuition's Self-Driving System (SDS) team for Mining & Construction, where we are building a modular, production-grade autonomy and operator-assist stack for construction and mining vehicles. We are seeking talented software engineers to create the tooling and infrastructure that enables rapid development, testing, and deployment of autonomous systems at scale. If you are passionate about tackling complex systems and making a tangible impact, this is where architecture converges with autonomy.
Your Responsibilities
Design and develop scalable Python-based tooling and infrastructure to bolster development across the autonomy stack.
Create systems that facilitate fleet management and vehicle communication.
Contribute to the architecture of scalable systems that accommodate various vehicle platforms and operating environments.
Collaborate closely with engineers across different domains.
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA
About the Team
At Taara, we are dedicated to transforming the world of connectivity. Originating from X, Google's Moonshot Factory, our mission is to bridge the digital divide by delivering high-speed, affordable internet through innovative light-based technologies. Join us in revolutionizing wireless optical communication and photonics chip technologies as we expand our reach globally.

About the Role
We are seeking a Systems Software Test Engineer to spearhead the automation and validation efforts for our comprehensive systems. This role is pivotal, as it involves not just testing APIs, but also architecting frameworks to ensure our software effectively controls high-precision hardware in real time. You will play a crucial role in connecting software development with hardware reliability, ensuring that each code commit enhances our network's performance in real-world scenarios.

Your Impact:
Automated Framework Design: Develop and implement scalable automation frameworks to rigorously test embedded software and cloud integrations.
Hardware-in-the-Loop Testing: Create and uphold HIL test benches, enabling real-time software interactions with physical hardware units under simulated field conditions.
CI/CD Integration: Lead the incorporation of automated tests into the CI/CD pipeline, establishing quality gates for every software build.
Comprehensive Data Path Validation: Collaborate with Cloud and Embedded teams to validate the data flow from the physical optical link to the backend monitoring dashboard.
Performance Regression Testing: Develop automated regression suites to identify performance regressions impacting network throughput, latency, and stability.
Cerebras Systems is pioneering the realm of artificial intelligence with the world’s largest AI chip, boasting a size 56 times greater than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs. Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference. Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over ten times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.

About The Role
We are seeking a dynamic Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a dynamic environment. You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls necessary for a company of our maturity. You will ensure that our highly technical and fast-paced engineering workforce remains unimpeded while simultaneously fortifying the environment to meet the standards expected of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.
Cerebras Systems is pioneering the future of artificial intelligence with the development of the world's largest AI chip, which is an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture enables AI computational power equivalent to dozens of GPUs on a single chip, while maintaining programming simplicity akin to that of a single device. This state-of-the-art approach allows us to deliver unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs. We proudly serve a diverse clientele, including leading model labs, multinational corporations, and innovative AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference. With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable enhancement in speed is transforming the user experience for AI applications, unlocking real-time iteration and enriching intelligence through enhanced computational capabilities.

About The Role
As a Network Architect on the Cluster Architecture Team, you will collaborate closely with vendors, internal networking teams, and industry experts to create top-tier interconnect architecture for both current and future generations of Cerebras AI clusters. Your responsibilities will include developing proof-of-concept designs for new network features that promote a resilient and reliable network tailored for AI workloads. This role demands cross-functional collaboration and engagement with a variety of hardware components, including network devices and the Wafer-Scale Engine, as well as software across multiple layers of the stack, from host-side networking to cluster-level coordination. A strong understanding of network monitoring systems and debugging methodologies is essential.

Responsibilities
Design AI/ML and HPC clusters.
Identify and mitigate performance or efficiency bottlenecks, ensuring optimal resource utilization, low latency, and high-throughput communication.
Lead technical projects involving multiple teams and diverse software and hardware components to realize advanced network solutions.
Full-time|$150K/yr - $260K/yr|On-site|Sunnyvale, CA
Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables us to achieve unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs. Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference. Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.

About The Role
We are in search of a talented Compiler Engineer to contribute to the design and implementation of new features within our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our wafer-scale engine (WSE). The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is then compiled by a dedicated LLVM mid-end/backend into executable files.

Responsibilities
Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
Develop and enhance abstraction layers between the CSL language and the underlying hardware.
About the Role
Intuitive Surgical, Inc. is seeking a Mac Systems Administrator to support and maintain the company’s Mac environment. This role focuses on keeping our IT infrastructure reliable and efficient for teams working on medical technology solutions.
Location: Sunnyvale, California
About Ceribell
Ceribell is a pioneering medical technology company dedicated to revolutionizing the diagnosis and management of patients with serious neurological conditions. Our flagship product, the Ceribell System, introduces an innovative, point-of-care electroencephalography (EEG) platform that effectively addresses the unmet needs of patients in acute care settings. This technology is currently utilized in numerous community hospitals, prominent academic institutions, and major integrated delivery networks (IDNs) across the nation. Our team is united by a deep commitment to transforming critical care through our rapid seizure detection technology. Join us in this impactful movement!

Position Overview
The Contracts Manager, reporting directly to the Head of Contracts, will oversee the comprehensive management and compliance of corporate, customer, vendor, and government contracts. This pivotal role serves as a vital link between various departments, including Sales, Marketing, HR, IT, Engineering, Clinical Affairs, and Finance, ensuring all agreements align with internal policies and overarching business objectives. The ideal candidate will manage the complete contract lifecycle, ensuring strict adherence to federal, state, and local regulations. A key responsibility of this position is to maintain SOX compliance and enforce robust internal controls, guaranteeing that all contractual obligations meet the rigorous reporting standards expected of a public company.

Key Responsibilities:
Negotiation & Strategy: Engage in the negotiation of both straightforward and complex customer, commercial, and strategic agreements, including GPO, IDN, Data Privacy/Security, Clinical Study, partnership, and vendor agreements, while protecting the company's interests and balancing business, financial, and legal objectives.
Advisory & Compliance: Keep Legal, Sales, Operations, and Executive teams informed about legal and healthcare compliance matters (e.g., AKS/Stark) and negotiated terms, utilizing a solid understanding of how contract terms affect Revenue Recognition.
Complex Problem Solving: Address complex issues of broad scope requiring thorough analysis of various factors, collaborating with the Head of Contracts and exercising independent judgment when escalating concerns.
CLM Optimization: Act as a power user for the Ironclad CLM; pinpoint opportunities to automate workflows, enhance template modularity, and streamline the quote-to-contract lifecycle.
Process Improvement: Lead the design, implementation, and communication of process enhancements, enforce existing contracting rules, and provide a clear channel for resolving legal inquiries.
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA
About Our Team:
At Taara, a visionary initiative from X, Google's Moonshot Factory, we are dedicated to connecting billions who currently lack reliable and affordable internet access. Our innovative approach harnesses the power of light to provide faster, more cost-effective, and dependable connectivity solutions. Join us in revolutionizing the future of wireless optical communication and photonics chip technology as we strive to bridge the digital divide for communities around the globe.

Role Overview:
We are in search of a talented Linux System Software Engineer to pioneer the development of the next-generation operating system for wireless broadband networks. The successful candidate will bring extensive experience in building system-level software within a Linux environment, be adept in multi-threaded, state-machine, and event-driven programming, and be familiar with network monitoring protocols (such as gRPC, SNMP, OpenTelemetry). Proficiency in integrating with cloud-based back-end systems, along with strong skills in data structures, algorithms, and programming languages including Golang, Python, and C/C++ for development and unit/regression testing, will be crucial for this role.

Your Impact:
Design and implement robust system-level software applications for IoT/network devices.
Collaborate with cloud/backend engineers to develop visualization and troubleshooting tools.
Stay informed about emerging IoT device trends and technologies.
Work closely with engineers and operators to establish telemetry and monitoring solutions.
Diagnose and resolve challenges encountered during large-scale field deployments.

Qualifications:
Bachelor's degree in Computer Science, Computer Networking, Electrical Engineering, or a related discipline.
5+ years of experience in application development on Linux-based operating systems, particularly within telecommunications, ISP, or networking sectors.
Proficiency in Golang, Python, and C/C++ for both development and testing.
Familiarity with streaming and monitoring protocols and strategies for service provider networks (e.g., gRPC, OpenTelemetry, SNMP).
Experience with multi-threaded programming techniques, state-machine design, and event-driven programming.
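For readers unfamiliar with the state-machine and event-driven style named in the qualifications above, it can be sketched minimally in Python; the link states and events below are hypothetical illustrations of the pattern, not Taara's actual design:

```python
from enum import Enum, auto

class LinkState(Enum):
    DOWN = auto()
    ALIGNING = auto()
    UP = auto()

# Transition table for a hypothetical optical link:
# (current state, event) -> next state.
TRANSITIONS = {
    (LinkState.DOWN, "power_on"): LinkState.ALIGNING,
    (LinkState.ALIGNING, "lock_acquired"): LinkState.UP,
    (LinkState.UP, "signal_lost"): LinkState.ALIGNING,
    (LinkState.ALIGNING, "power_off"): LinkState.DOWN,
    (LinkState.UP, "power_off"): LinkState.DOWN,
}

def dispatch(state, event):
    """Apply an event to the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = LinkState.DOWN
for event in ["power_on", "lock_acquired", "signal_lost"]:
    state = dispatch(state, event)
print(state)  # LinkState.ALIGNING
```

Keeping transitions in a lookup table rather than nested conditionals is what makes this style testable and easy to extend, which is why it recurs in device software like the role described here.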
At DigitalFish, we are committed to empowering our clients by delivering cutting-edge technologies that revolutionize digital media creation and consumption for millions of users. We collaborate with top-tier digital media organizations, positioning ourselves at the forefront of their initiatives to develop innovative platforms and immersive experiences. Our esteemed clients include industry giants such as Apple, Google, Meta, Disney, DreamWorks, Activision, Technicolor, ESPN, LEGO, NASA, and many more.

Your Role
As a vital member of our agile team, you will be instrumental in advancing imaging technology for camera capture and crafting augmented reality experiences that enrich human perception. Your projects will involve camera color processing, enhancing features related to human color perception, and utilizing tools like Unity and Blender for object rendering.
Full-time|$120K/yr - $190K/yr|On-site|Sunnyvale, California, United States
About Applied Intuition
Founded in 2017 and currently valued at $15 billion, Applied Intuition, Inc. is at the forefront of physical AI, developing the digital infrastructure essential for infusing intelligence into every moving machine worldwide. Our solutions cater to diverse sectors, including automotive, defense, trucking, construction, mining, and agriculture, focusing on tools and infrastructure, operating systems, and autonomy. With 18 of the top 20 global automakers and the U.S. military among our trusted clients, we are dedicated to delivering exceptional physical intelligence solutions. Our headquarters are located in Sunnyvale, California, with additional offices in Washington, D.C.; San Diego; Fort Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

At Applied Intuition, we thrive on in-office collaboration and expect our employees to work from their office five days a week. We appreciate the need for flexibility and empower our team members to manage their schedules responsibly, which may include occasional remote work, starting the day with remote meetings, or adjusting hours for personal commitments.

About the Role
We are seeking a skilled GTM Systems Administrator to join our Revenue Strategy & Operations team and enhance the technical framework of our go-to-market technology stack, with a primary emphasis on Salesforce. In this pivotal position, you will report to the Director of Revenue Operations & Strategy and collaborate closely with sales, marketing, finance, legal, and customer success teams to craft and implement scalable solutions that drive our growth. Your role will bridge business strategy and technical execution, transforming complex go-to-market requirements into effective system configurations and integrations. This is a high-impact position with a direct influence on customer engagement and revenue generation.

At Applied Intuition, You Will:
Design, develop, and manage both standard and custom Salesforce configurations, including Apex classes, triggers, and Lightning Web Components.
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers exceptional AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables us to provide industry-leading training and inference speeds, allowing machine learning practitioners to run extensive ML applications seamlessly without the complexities of managing multiple GPUs or TPUs. Our clientele includes renowned model labs, leading global enterprises, and innovative AI-first startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, leveraging 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference. Thanks to our cutting-edge wafer-scale technology, Cerebras Inference offers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than typical GPU-based hyperscale cloud services. This significant speed enhancement is reshaping the user experience in AI applications, enabling real-time iteration and enhancing intelligence through advanced computation.

About The Role
As a Senior Mechanical Engineer at Cerebras, you will spearhead the design of innovative mechanical systems for our next-generation wafer-scale engine. Your key responsibilities will encompass ensuring adherence to specifications, validating manufacturability, and delivering high-quality products in a dynamic environment, addressing some of the most intricate challenges in the rapidly advancing AI landscape. In this role, you will be instrumental in developing the mechanical infrastructure for Cerebras' custom hardware systems.

Responsibilities
Rapidly iterate on designs and analyses to inform high-level systems decisions and guide the overall product strategy.
Provide extensive support for environmental and performance testing on hardware, validate analyses, and ensure compliance with design criteria.
Take ownership of technical deliverables.
Conduct first-article inspections and functional analyses, identifying and resolving issues as they arise.
Collaborate closely with design, manufacturing, production, diagnostics, and embedded software engineering teams, contractors, and suppliers.
Perform detailed structural analyses and simulations to optimize designs.
Join our dynamic team as a Part-time Visual Merchandiser in Sunnyvale, CA. In this role, you will be responsible for creating visually appealing displays that attract customers and enhance their shopping experience. Your creativity and attention to detail will play a pivotal role in embodying our brand's aesthetic and driving sales.
Full-time|$150K/yr - $270K/yr|On-site|Sunnyvale, CA
Cerebras Systems is revolutionizing the world of artificial intelligence with our groundbreaking wafer-scale chip, which is 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs. Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads. With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

About The Role
Join our dedicated physical design team as a 3D Physical Design Engineer, where you will focus on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise, along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development efforts on innovative concepts for 3D integration.
Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs. Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference. Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds that are over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
We are seeking a Senior Performance Analyst to join our dynamic Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from foundational principles with competitive intelligence. The position revolves around two key pillars:

Performance Benchmarking
You will develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads. This includes metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis
You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
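Two of the streaming metrics named above (time to first token and tokens per second) can be computed from wall-clock timestamps around a token stream. The sketch below is illustrative only; the stream is simulated, not a real inference endpoint:

```python
import time

def measure_stream(token_iter):
    """Measure time-to-first-token (TTFT) and tokens/sec over a token stream."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_iter:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # latency until the first token arrives
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return ttft, tps

def fake_stream(n=50, delay=0.001):
    """Simulated token stream: n tokens, roughly `delay` seconds apart."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

ttft, tps = measure_stream(fake_stream())
print(f"TTFT: {ttft * 1000:.1f} ms, throughput: {tps:.0f} tok/s")
```

Latency under concurrency extends the same idea by running many such streams in parallel and reporting percentile TTFT/throughput rather than a single measurement.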
About the Institute of Foundation Models
We are a pioneering research laboratory focused on developing, understanding, utilizing, and managing foundation models. Our mission is to propel research, cultivate the next generation of AI innovators, and create transformative impacts within a knowledge-driven economy. Join our dynamic team and seize the opportunity to engage in groundbreaking foundation model training, collaborating with elite researchers, data scientists, and engineers to address the most pressing challenges in AI development. You will contribute to the creation of innovative AI solutions with the potential to revolutionize industries. Your strategic and creative problem-solving abilities will play a crucial role in establishing MBZUAI as a global center for high-performance computing in deep learning, fostering discoveries that will motivate future AI trailblazers.

The Role
As a Machine Learning Engineer at the Institute of Foundation Models, your main duty will be to design and implement cutting-edge machine learning models that tackle real-world issues, pushing the limits of artificial intelligence research. You will work collaboratively with diverse teams to deploy scalable solutions, furthering MBZUAI’s goal of driving significant AI advancements and solidifying the institution’s status as a leader in the international AI research community. Your expertise will be vital in enhancing the performance of large-scale machine learning models and aiding in the development of transformative AI tools that can reshape industries globally.