Entry Level Compiler Engineer LLVM at Cerebras Systems, Sunnyvale, CA jobs in Sunnyvale – Browse 1,109 openings on RoboApply Jobs


Open roles matching “Entry Level Compiler Engineer LLVM at Cerebras Systems, Sunnyvale, CA” in and around Sunnyvale. 1,109 active listings on RoboApply Jobs.


Showing 1–20 of 1,109 jobs
Cerebras Systems
Full-time | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times the size of conventional GPUs. Our unique wafer-scale architecture not only provides the power equivalent to dozens of GPUs on a single chip but does so with the simplicity of programming a single device. This cutting-edge approach allows us to achi…

Feb 17, 2026
LLVM Compiler Engineer

Cerebras Systems

Full-time | On-site | Sunnyvale, CA; Toronto, Ontario, Canada

Cerebras Systems is revolutionizing the AI landscape with the world’s largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture empowers AI compute capabilities equivalent to dozens of GPUs on a single chip while maintaining the programming ease of a singular device. This groundbreaking approach enables Cerebras to provide unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing multiple GPUs or TPUs.

Cerebras proudly supports leading model labs, global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently forged a multi-year partnership with Cerebras, committing to leverage 750 megawatts of power to enhance essential workloads with ultra-fast inference capabilities.

Thanks to our state-of-the-art wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, exceeding the speed of GPU-based hyperscale cloud inference services by more than tenfold. This substantial speed enhancement is transforming user experiences with AI applications, facilitating real-time iterations and augmenting intelligence through advanced agentic computation.

Location Options: Sunnyvale, Toronto, or Vancouver

About the Role
We are on the lookout for a Compiler Engineer to innovate and implement new functionality in our low-level compiler toolchain, which encompasses the compiler mid-end, backend, assembler, and linker, specifically targeting the individual cores within this unique architecture. Your primary responsibilities will include working within the compiler's infrastructure to enhance performance and efficiency across various applications.

Feb 17, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip—56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables us to achieve unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.

About the Role
We are in search of a talented Compiler Engineer to contribute to the design and implementation of new features within our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our wafer-scale engine (WSE). The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is then compiled by a dedicated LLVM mid-end/backend into executable files.

Responsibilities
Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
Develop and enhance abstraction layers between the CSL language and the underlying hardware.

Feb 17, 2026
Cerebras Systems
Full-time | $150K/yr - $260K/yr | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, engineering the world’s largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables unprecedented AI computational power, equivalent to dozens of GPUs operating as a single unit, thereby simplifying programming for machine learning tasks. This revolutionary approach not only provides unmatched training and inference speeds but also allows users to execute large-scale machine learning applications without the complexity of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, deploying 750 megawatts of processing capacity that revolutionizes critical workloads through ultra-fast inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over 10 times. This significant increase in speed is reshaping the user experience of AI applications, facilitating real-time iterations and enhancing intelligence through advanced computational capabilities.

Feb 19, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA

Cerebras Systems is pioneering the realm of artificial intelligence with the world’s largest AI chip, boasting a size 56 times greater than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over ten times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.

About the Role
We are seeking a dynamic Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a fast-moving environment.

You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls necessary for a company of our maturity. You will ensure that our highly technical and fast-paced engineering workforce remains unimpeded while fortifying the environment to meet the standards expected of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.

Apr 9, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA

Cerebras Systems is pioneering the future of artificial intelligence with the development of the world's largest AI chip, which is an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture delivers AI computational power equivalent to dozens of GPUs on a single chip, while maintaining programming simplicity akin to that of a single device. This state-of-the-art approach allows us to deliver unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, multinational corporations, and innovative AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable enhancement in speed is transforming the user experience for AI applications, unlocking real-time iteration and enriching intelligence through enhanced computational capabilities.

About the Role
As a Network Architect on the Cluster Architecture Team, you will collaborate closely with vendors, internal networking teams, and industry experts to create top-tier interconnect architecture for both current and future generations of Cerebras AI clusters. Your responsibilities will include developing proof-of-concept designs for new network features that promote a resilient and reliable network tailored for AI workloads. This role demands cross-functional collaboration and engagement with a variety of hardware components, including network devices and the Wafer-Scale Engine, as well as software across multiple layers of the stack, from host-side networking to cluster-level coordination. A strong understanding of network monitoring systems and debugging methodologies is essential.

Responsibilities
Design AI/ML and HPC clusters.
Identify and mitigate performance or efficiency bottlenecks, ensuring optimal resource utilization, low latency, and high-throughput communication.
Lead technical projects involving multiple teams and diverse software and hardware components to realize advanced network solutions.

Feb 17, 2026
Cerebras Systems
Full-time | $190K/yr - $230K/yr | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers exceptional AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables us to provide industry-leading training and inference speeds, allowing machine learning practitioners to run extensive ML applications seamlessly without the complexities of managing multiple GPUs or TPUs.

Our clientele includes renowned model labs, leading global enterprises, and innovative AI-first startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, leveraging 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Thanks to our cutting-edge wafer-scale technology, Cerebras Inference offers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than typical GPU-based hyperscale cloud services. This significant speed enhancement is reshaping the user experience in AI applications, enabling real-time iteration and enhancing intelligence through advanced computation.

About the Role
As a Senior Mechanical Engineer at Cerebras, you will spearhead the design of innovative mechanical systems for our next-generation wafer-scale engine. Your key responsibilities will encompass ensuring adherence to specifications, validating manufacturability, and delivering high-quality products in a dynamic environment, addressing some of the most intricate challenges in the rapidly advancing AI landscape. In this role, you will be instrumental in developing the mechanical infrastructure for Cerebras' custom hardware systems.

Rapidly iterate on designs and analyses to inform high-level systems decisions and guide the overall product strategy.
Provide extensive support for environmental and performance testing on hardware, validate analyses, and ensure compliance with design criteria.
Take ownership of technical deliverables.
Conduct first-article inspections and functional analyses, identifying and resolving issues as they arise.
Collaborate closely with design, manufacturing, production, diagnostics, and embedded software engineering teams, contractors, and suppliers.
Perform detailed structural analyses and simulations to optimize designs.

Feb 17, 2026
Cerebras Systems
Full-time | $150K/yr - $270K/yr | On-site | Sunnyvale, CA

Cerebras Systems is revolutionizing the world of artificial intelligence with our groundbreaking wafer-scale chip, which is 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads.

With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

About the Role
Join our dedicated physical design team as a 3D Physical Design Engineer, where you will focus on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development efforts on innovative concepts for 3D integration.

Feb 17, 2026
Cerebras Systems
Full-time | $175K/yr - $275K/yr | On-site | Sunnyvale, CA

Cerebras Systems is a pioneer in AI technology, renowned for creating the world’s largest AI chip, which is an astounding 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides unparalleled AI computing capabilities equivalent to dozens of GPUs on a single chip, while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing vast arrays of GPUs or TPUs.

Cerebras' impressive clientele includes leading model laboratories, global enterprises, and cutting-edge AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, enhancing transformative workloads through ultra-high-speed inference utilizing 750 megawatts of scale.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This significant speed enhancement is revolutionizing the user experience in AI applications, enabling real-time iterations and boosting intelligence through advanced computational capabilities.

About the Role
As the Lead RTL Design Engineer, you will play a pivotal role in our exceptional team responsible for designing and developing the next iterations of the Cerebras Wafer Scale Engine (WSE). This position demands extensive expertise in RTL design and integration, with a strong emphasis on delivering high-performance, power-efficient, and scalable solutions. Additionally, you will oversee collaboration with external ASIC vendors and work closely with design verification, physical design, software, and system teams to translate innovative semiconductor architectures from concept to production, addressing the unique challenges associated with building WSE systems.

Feb 17, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA or Toronto, Canada

Cerebras Systems is revolutionizing the AI industry by developing the world’s largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, while simplifying programming to the ease of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, empowering machine learning professionals to seamlessly operate large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, global corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to leverage 750 megawatts of scale to transform critical workloads through ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This unprecedented speed enhances the user experience of AI applications, enabling real-time iterations and increased intelligence through advanced computational capabilities.

About the Role
The AI Infrastructure Operations Engineer (SiteOps) is an entry-level position focusing on the deployment, initialization, monitoring, and first-response troubleshooting of Cerebras AI infrastructure within data center settings. This role plays a critical part in supporting Cerebras systems, cluster server hardware, networking hardware, and monitoring tools.

Your responsibilities will include ensuring the reliable operation and scalability of Cerebras AI clusters by executing established hardware initialization and validation protocols, monitoring telemetry data, performing initial troubleshooting, and escalating issues according to predefined workflows.

Feb 17, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA or Toronto, Canada

Cerebras Systems revolutionizes the AI landscape with the creation of the world’s largest AI chip, a remarkable 56 times larger than conventional GPUs. Our innovative wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming efforts for users. This unique approach enables Cerebras to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing hundreds of GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and pioneering AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, significantly enhancing key workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the performance of GPU-based hyperscale cloud inference services by over ten times. This significant speed enhancement transforms the user experience of AI applications, facilitating real-time iterations and augmented intelligence through additional agentic computation.

About the Role
We are on the lookout for a highly skilled and experienced AI Infrastructure Operations Engineer to oversee and manage our state-of-the-art machine learning compute clusters. In this role, you will have the unique opportunity to work with the world’s largest computer chip, the Wafer-Scale Engine (WSE), and the systems that leverage its extraordinary power. You will play a pivotal role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our expanding AI initiatives. This position requires an in-depth understanding of Linux-based systems, expertise in containerization technologies, and experience in monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive problem-solver with a strong background in large-scale compute infrastructure who is reliable and committed to customer success.

Feb 17, 2026
Cerebras Systems
Full-time | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip that is 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference.

Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds that are over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About the Role
We are seeking a Senior Performance Analyst to join our dynamic Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from foundational principles with competitive intelligence. The position revolves around two key pillars:

Performance Benchmarking
You will develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads. This includes metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis
You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.

Apr 13, 2026
Taara
Full-time | $160K/yr - $210K/yr | On-site | Sunnyvale, CA

About the Team
At Taara, we are dedicated to transforming the world of connectivity. Originating from X, Google's Moonshot Factory, our mission is to bridge the digital divide by delivering high-speed, affordable internet through innovative light-based technologies. Join us in revolutionizing wireless optical communication and photonics chip technologies as we expand our reach globally.

About the Role
We are seeking a Systems Software Test Engineer to spearhead the automation and validation efforts for our comprehensive systems. This role is pivotal, as it involves not just testing APIs, but also architecting frameworks to ensure our software effectively controls high-precision hardware in real time. You will play a crucial role in connecting software development with hardware reliability, ensuring that each code commit enhances our network's performance in real-world scenarios.

Your Impact:
Automated Framework Design: Develop and implement scalable automation frameworks to rigorously test embedded software and cloud integrations.
Hardware-in-the-Loop Testing: Create and uphold HIL test benches, enabling real-time software interactions with physical hardware units under simulated field conditions.
CI/CD Integration: Lead the incorporation of automated tests into the CI/CD pipeline, establishing quality gates for every software build.
Comprehensive Data Path Validation: Collaborate with Cloud and Embedded teams to validate the data flow from the physical optical link to the backend monitoring dashboard.
Performance Regression Testing: Develop automated regression suites to identify performance regressions impacting network throughput, latency, and stability.

Feb 5, 2026
Cerebras Systems
Full-time | $200K/yr - $240K/yr | On-site | Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip that is 56 times larger than conventional GPUs. Our unique wafer-scale architecture combines the computational power of numerous GPUs into a single chip, offering unparalleled programming simplicity. This allows us to deliver exceptional training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We are proud to serve a diverse clientele, including leading model laboratories, global corporations, and pioneering AI-native startups. Notably, OpenAI has recently partnered with Cerebras to leverage our technology, deploying 750 megawatts of scale to revolutionize key workloads through ultra-high-speed inference.

With our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, boasting speeds over 10 times quicker than GPU-based hyperscale cloud inference services. This remarkable enhancement transforms the user experience of AI applications, facilitating real-time iteration and amplifying intelligence through enhanced computational capabilities.

Job Summary
The Sourcing Manager for Critical Components will lead the development and execution of global sourcing strategies aimed at securing high-quality and cost-effective critical components and materials. This position is pivotal in ensuring supply chain continuity, minimizing risks, and fostering innovation through market analysis, supplier relationship management, and advanced negotiation strategies. The manager will collaborate with cross-functional teams to synchronize procurement efforts with organizational objectives, enhance procurement processes, and strengthen supplier partnerships.

Key Responsibilities:
Strategic Sourcing: Formulate and deploy comprehensive sourcing strategies for critical components that align with long-term business goals and maintain a competitive edge.
Supplier Management: Cultivate and maintain robust relationships with key suppliers, conduct routine performance evaluations, and oversee contracts to ensure compliance with terms and mitigation of risks.
Cost Optimization: Identify and implement cost-saving initiatives while ensuring quality standards are met.

May 4, 2026
TaaraConnect
Full-time | $160K/yr - $210K/yr | On-site | Sunnyvale, CA

About Our Team:
At Taara, a visionary initiative from X, Google's Moonshot Factory, we are dedicated to connecting billions who currently lack reliable and affordable internet access. Our innovative approach harnesses the power of light to provide faster, more cost-effective, and dependable connectivity solutions. Join us in revolutionizing the future of wireless optical communication and photonics chip technology as we strive to bridge the digital divide for communities around the globe.

Role Overview:
We are in search of a talented Linux System Software Engineer to pioneer the development of the next-generation operating system for wireless broadband networks. The successful candidate will bring extensive experience in building system-level software within a Linux environment, expertise in multi-threaded, state-machine, and event-driven programming, and familiarity with network monitoring protocols (such as gRPC, SNMP, OpenTelemetry). Proficiency in integrating with cloud-based back-end systems, along with strong skills in data structures, algorithms, and programming languages including Golang, Python, and C/C++ for development and unit/regression testing, will be crucial for this role.

Your Impact:
Design and implement robust system-level software applications for IoT/network devices.
Collaborate with cloud/backend engineers to develop visualization and troubleshooting tools.
Stay informed about emerging IoT device trends and technologies.
Work closely with engineers and operators to establish telemetry and monitoring solutions.
Diagnose and resolve challenges encountered during large-scale field deployments.

Qualifications:
Bachelor's degree in Computer Science, Computer Networking, Electrical Engineering, or a related discipline.
5+ years of experience in application development on Linux-based operating systems, particularly within telecommunications, ISP, or networking sectors.
Proficiency in Golang, Python, and C/C++ for both development and testing.
Familiarity with streaming and monitoring protocols and strategies for service provider networks (e.g., gRPC, OpenTelemetry, SNMP).
Experience with multi-threaded programming techniques, state-machine design, and event-driven programming.

Dec 18, 2025
DigitalFish
Full-time | On-site | Sunnyvale, CA

At DigitalFish, we are committed to empowering our clients by delivering cutting-edge technologies that revolutionize digital media creation and consumption for millions of users.

We collaborate with top-tier digital media organizations, positioning ourselves at the forefront of their initiatives to develop innovative platforms and immersive experiences. Our esteemed clients include industry giants such as Apple, Google, Meta, Disney, DreamWorks, Activision, Technicolor, ESPN, LEGO, NASA, and many more.

Your Role
As a vital member of our agile team, you will be instrumental in advancing imaging technology for camera capture and crafting augmented reality experiences that enrich human perception. Your projects will involve camera color processing, enhancing features related to human color perception, and utilizing tools like Unity and Blender for object rendering.

Oct 10, 2025
Comtech LLC
Contract | On-site | Sunnyvale, CA

Job Title: System Administrator
Location: Sunnyvale, CA

Key Responsibilities:
Manage and administer Active Directory and Windows infrastructure.
Oversee Office 365 and Exchange email administration.
Diagnose and resolve issues related to server hardware and operating systems.
Install, configure, and maintain server infrastructure and associated equipment.
Support and maintain Data Center operations.

Essential Skills and Experience:
Profound knowledge of Active Directory and Windows infrastructure.
Proven experience building Active Directory Domains and Forests.
Experience upgrading Active Directory Domains (from 2008 to 2012).
Extensive implementation experience with Windows infrastructure components such as ADFS, WSUS, CA, NPS, DNS, and DHCP; CA and NPS experience is a plus.
Minimum of 5 years of experience with Windows Server operating systems.
At least 3 years of experience with VMware vSphere.
Proficient in PowerShell scripting.
Strong understanding of DNS, DHCP, IIS, Group Policy, and WSUS.
Hands-on experience with Dell server hardware, preferably Dell Blades, including DRAC and OpenManage.

Sep 7, 2017
Apply
Cerebras Systems logo
Full-time|$140K/yr - $240K/yr|On-site|Sunnyvale, CA

At Cerebras Systems, we are pioneering the future of artificial intelligence with the development of the world's largest AI chip, which is an astonishing 56 times larger than traditional GPUs. Our innovative wafer-scale architecture combines the computational power of numerous GPUs into a single chip, simplifying programming and enhancing efficiency. This unique approach enables us to achieve unparalleled training and inference speeds, empowering machine learning practitioners to run extensive ML applications seamlessly, without the complexity of juggling multiple GPUs or TPUs.

Our clientele includes leading model labs, global corporations, and groundbreaking AI-focused startups. Notably, OpenAI has recently partnered with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-fast inference capabilities.

Thanks to our cutting-edge wafer-scale technology, Cerebras Inference delivers the fastest Generative AI inference solution available, exceeding GPU-based hyperscale cloud services by over ten times. This significant leap in speed is revolutionizing user interactions with AI applications, facilitating real-time adjustments and enhancing intelligence through advanced computational capabilities.

About The Role

As the security lead for Cerebras's AI cluster product, you will be at the forefront of securing our large-scale AI clusters, which consist of hundreds of wafer-scale accelerator systems, thousands of high-performance servers, and numerous networking ports and switches. The role also involves managing network-attached storage within a vast data center.

Your primary responsibility will be to implement security measures grounded in established best practices and first principles, protecting Cerebras's extensive AI clusters. These clusters comprise intricate hardware components, networking systems, and a fully integrated cluster management software stack, ranging from bare-metal deployments to sophisticated management systems that enable multi-tenant training and inference services across these expansive clusters. You will focus on guaranteeing end-to-end security and privacy for cluster applications, developing security engineering solutions that incorporate robust network access controls, user access management, and an exceptional multi-tenancy framework.

Feb 17, 2026
Apply
Cerebras Systems logo
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA

Cerebras Systems has revolutionized the AI landscape by developing the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers AI computational power equivalent to dozens of GPUs on a single chip, while maintaining the programming simplicity of a single device. This unique approach enables us to deliver exceptional training and inference speeds, empowering machine learning practitioners to efficiently execute large-scale ML applications without the complexity of managing numerous GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, major global enterprises, and pioneering AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aimed at deploying 750 megawatts of scale and transforming critical workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, boasting speeds over ten times faster than GPU-based hyperscale cloud inference services. This monumental leap in speed significantly enhances the user experience of AI applications, facilitating real-time iteration and amplifying intelligence through advanced computation capabilities.

Role Overview

As a Software Automation Engineer at Cerebras Systems, you will be instrumental in designing and implementing innovative software solutions that enhance operational efficiency and streamline business processes. Your responsibilities will include developing automation frameworks, tools, and applications aimed at reducing manual tasks, boosting system reliability, and fostering scalable growth across the organization. In this position, you will collaborate with cross-functional teams, including engineers, analysts, and business stakeholders, to identify workflow challenges and explore opportunities for automation.
Your contributions may involve building advanced process automation systems, developing real-time monitoring and alerting functionalities, integrating disparate systems, and crafting data-driven solutions to optimize overall performance. Your efforts will be pivotal in eliminating bottlenecks, lowering operational costs, and enabling teams to focus on innovative projects.

Mar 5, 2026
Apply
ifm-us logo
Full-time|On-site|Sunnyvale, CA

About the Institute of Foundation Models

We are a pioneering research laboratory focused on developing, understanding, utilizing, and managing foundation models. Our mission is to propel research, cultivate the next generation of AI innovators, and create transformative impacts within a knowledge-driven economy.

Join our dynamic team and seize the opportunity to engage in groundbreaking foundation model training, collaborating with elite researchers, data scientists, and engineers to address the most pressing challenges in AI development. You will contribute to the creation of innovative AI solutions with the potential to revolutionize industries. Your strategic and creative problem-solving abilities will play a crucial role in establishing MBZUAI as a global center for high-performance computing in deep learning, fostering discoveries that will motivate future AI trailblazers.

The Role

As a Machine Learning Engineer at the Institute of Foundation Models, your main duty will be to design and implement cutting-edge machine learning models that tackle real-world issues, pushing the limits of artificial intelligence research. You will work collaboratively with diverse teams to deploy scalable solutions, furthering MBZUAI's goal of driving significant AI advancements and solidifying the institution's status as a leader in the international AI research community. Your expertise will be vital in enhancing the performance of large-scale machine learning models and aiding in the development of transformative AI tools that can reshape industries globally.

Mar 17, 2025
