Network Architect At Cerebras Systems Sunnyvale Ca jobs in Sunnyvale – Browse 968 openings on RoboApply Jobs

Network Architect At Cerebras Systems Sunnyvale Ca jobs in Sunnyvale

Open roles matching “Network Architect at Cerebras Systems, Sunnyvale, CA”, filtered to the Sunnyvale area. 968 active listings on RoboApply Jobs.

1 - 20 of 968 Jobs
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering the future of artificial intelligence with the world's largest AI chip, an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture delivers AI computational power equivalent to dozens of GPUs on a single chip, while maintaining programming simplicity akin to tha…

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering the realm of artificial intelligence with the world’s largest AI chip, 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over ten times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.

About The Role

We are seeking a Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a fast-moving environment.

You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls necessary for a company of our maturity. You will ensure that our highly technical and fast-paced engineering workforce remains unimpeded while fortifying the environment to meet the standards expected of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.

Apr 9, 2026
Cerebras Systems
Full-time|$150K/yr - $260K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, engineering the world’s largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables unprecedented AI computational power, equivalent to dozens of GPUs operating as a single unit, thereby simplifying programming for machine learning tasks. This revolutionary approach not only provides unmatched training and inference speeds but also allows users to execute large-scale machine learning applications without the complexity of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, deploying 750 megawatts of processing capacity that revolutionizes critical workloads through ultra-fast inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over 10 times. This significant increase in speed is reshaping the user experience of AI applications, facilitating real-time iterations and enhancing intelligence through advanced computational capabilities.

Feb 19, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip—56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables us to achieve unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.

About The Role

We are in search of a talented Compiler Engineer to contribute to the design and implementation of new features within our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our wafer-scale engine (WSE).

The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is further compiled by a dedicated LLVM mid-end/backend into executable files.

Responsibilities:
- Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
- Develop and enhance abstraction layers between the CSL language and the underlying hardware.

Feb 17, 2026
Cerebras Systems
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers exceptional AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables us to provide industry-leading training and inference speeds, allowing machine learning practitioners to run extensive ML applications seamlessly without the complexities of managing multiple GPUs or TPUs.

Our clientele includes renowned model labs, leading global enterprises, and innovative AI-first startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, leveraging 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Thanks to our cutting-edge wafer-scale technology, Cerebras Inference offers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than typical GPU-based hyperscale cloud services. This significant speed enhancement is reshaping the user experience in AI applications, enabling real-time iteration and enhancing intelligence through advanced computation.

About The Role

As a Senior Mechanical Engineer at Cerebras, you will spearhead the design of innovative mechanical systems for our next-generation wafer-scale engine. Your key responsibilities will encompass ensuring adherence to specifications, validating manufacturability, and delivering high-quality products in a dynamic environment, addressing some of the most intricate challenges in the rapidly advancing AI landscape. In this role, you will be instrumental in developing the mechanical infrastructure for Cerebras' custom hardware systems.

Responsibilities include:
- Rapidly iterate on designs and analyses to inform high-level systems decisions and guide the overall product strategy.
- Provide extensive support for environmental and performance testing on hardware, validate analyses, and ensure compliance with design criteria.
- Take ownership of technical deliverables.
- Conduct first-article inspections and functional analyses, identifying and resolving issues as they arise.
- Collaborate closely with design, manufacturing, production, diagnostics, and embedded software engineering teams, contractors, and suppliers.
- Perform detailed structural analyses and simulations to optimize designs.

Feb 17, 2026
Cerebras Systems
Full-time|$150K/yr - $270K/yr|On-site|Sunnyvale, CA

Cerebras Systems is revolutionizing the world of artificial intelligence with our groundbreaking wafer-scale architecture, built around a chip 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads.

With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

About The Role

Join our dedicated physical design team as a 3D Physical Design Engineer, where you will focus on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise, along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development efforts on innovative concepts for 3D integration.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times the size of conventional GPUs. Our unique wafer-scale architecture not only provides the power equivalent to dozens of GPUs on a single chip but does so with the simplicity of programming a single device. This cutting-edge approach allows us to achieve unparalleled training and inference speeds, enabling machine learning professionals to seamlessly execute large-scale ML applications without the complexity of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, renowned global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently announced a multi-year partnership with us to leverage our technology in deploying 750 megawatts of scale, revolutionizing key workloads with ultra-fast inference capabilities.

Our groundbreaking wafer-scale architecture empowers Cerebras Inference to deliver the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than traditional GPU-based hyperscale cloud inference services. This significant leap in speed redefines the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through advanced computation.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference.

Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds that are over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role

We are seeking a Senior Performance Analyst to join our dynamic Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from foundational principles with competitive intelligence. The position revolves around two key pillars:

Performance Benchmarking: You will develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads. This includes metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis: You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.
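For context, the two headline metrics named above are straightforward to derive from timestamps collected while streaming a model's output. The sketch below is a generic illustration, not Cerebras' internal tooling; `StreamSample` and its fields are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class StreamSample:
    """Hypothetical record of one streamed inference request."""
    request_start: float      # wall-clock time the request was sent (seconds)
    token_times: list[float]  # wall-clock arrival time of each generated token

def time_to_first_token(s: StreamSample) -> float:
    """TTFT: latency from request send to first token arrival."""
    return s.token_times[0] - s.request_start

def tokens_per_second(s: StreamSample) -> float:
    """Steady-state generation rate, measured from the first token onward."""
    if len(s.token_times) < 2:
        return 0.0
    span = s.token_times[-1] - s.token_times[0]
    return (len(s.token_times) - 1) / span

# Example: first token after 250 ms, then one token every 10 ms.
sample = StreamSample(request_start=0.0,
                      token_times=[0.25, 0.26, 0.27, 0.28, 0.29])
print(time_to_first_token(sample))  # 0.25
print(tokens_per_second(sample))    # ≈ 100 tokens/s
```

The two numbers capture different things: TTFT is dominated by queueing and prompt processing, while tokens per second reflects decode throughput, which is why serious benchmarks report both rather than a single "speed" figure.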

Apr 13, 2026
Cerebras Systems
Full-time|$175K/yr - $275K/yr|On-site|Sunnyvale, CA

Cerebras Systems is a pioneer in AI technology, renowned for creating the world’s largest AI chip, an astounding 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides unparalleled AI computing capabilities equivalent to dozens of GPUs on a single chip, while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing vast arrays of GPUs or TPUs.

Cerebras' impressive clientele includes leading model laboratories, global enterprises, and cutting-edge AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, enhancing transformative workloads through ultra-high-speed inference utilizing 750 megawatts of scale.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This significant speed enhancement is revolutionizing the user experience in AI applications, enabling real-time iterations and boosting intelligence through advanced computational capabilities.

About The Role

As the Lead RTL Design Engineer, you will play a pivotal role in our exceptional team responsible for designing and developing the next iterations of the Cerebras Wafer Scale Engine (WSE). This position demands extensive expertise in RTL design and integration, with a strong emphasis on delivering high-performance, power-efficient, and scalable solutions. Additionally, you will oversee collaboration with external ASIC vendors and work closely with design verification, physical design, software, and system teams to translate innovative semiconductor architectures from concept to production, addressing the unique challenges associated with building WSE systems.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is revolutionizing the AI industry by developing the world’s largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, while simplifying programming to the ease of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, empowering machine learning professionals to seamlessly operate large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, global corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to leverage 750 megawatts of scale to transform critical workloads through ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This unprecedented speed enhances the user experience of AI applications, enabling real-time iterations and increased intelligence through advanced computation capabilities.

About The Role

The AI Infrastructure Operations Engineer (SiteOps) is an entry-level position focusing on the deployment, initialization, monitoring, and first-response troubleshooting of Cerebras AI infrastructure within data center settings. This role plays a critical part in supporting Cerebras systems, cluster server hardware, networking hardware, and monitoring tools.

Your responsibilities will include ensuring the reliable operation and scalability of Cerebras AI clusters by executing established hardware initialization and validation protocols, monitoring telemetry data, performing initial troubleshooting, and escalating issues according to predefined workflows.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems revolutionizes the AI landscape with the creation of the world’s largest AI chip, a remarkable 56 times larger than conventional GPUs. Our innovative wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming efforts for users. This unique approach enables Cerebras to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing hundreds of GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and pioneering AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, significantly enhancing key workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the performance of GPU-based hyperscale cloud inference services by over ten times. This significant speed enhancement transforms the user experience of AI applications, facilitating real-time iterations and augmented intelligence through additional agentic computation.

About The Role

We are looking for a highly skilled and experienced AI Infrastructure Operations Engineer to oversee and manage our state-of-the-art machine learning compute clusters. In this role, you will have the unique opportunity to work with the world’s largest computer chip, the Wafer-Scale Engine (WSE), and the systems that leverage its extraordinary power.

You will play a pivotal role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our expanding AI initiatives. This position requires an in-depth understanding of Linux-based systems, expertise in containerization technologies, and experience in monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive problem-solver with a strong background in large-scale compute infrastructure who is reliable and committed to customer success.

Feb 17, 2026
Cerebras Systems
Full-time|$200K/yr - $240K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than conventional GPUs. Our unique wafer-scale architecture combines the computational power of numerous GPUs into a single chip, offering unparalleled programming simplicity. This allows us to deliver exceptional training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We are proud to serve a diverse clientele, including leading model laboratories, global corporations, and pioneering AI-native startups. Notably, OpenAI has recently partnered with Cerebras to leverage our technology, deploying 750 megawatts of scale to revolutionize key workloads through ultra-high-speed inference.

With our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, boasting speeds over 10 times quicker than GPU-based hyperscale cloud inference services. This remarkable enhancement transforms the user experience of AI applications, facilitating real-time iteration and amplifying intelligence through enhanced computational capabilities.

Job Summary

The Sourcing Manager for Critical Components will lead the development and execution of global sourcing strategies aimed at securing high-quality and cost-effective critical components and materials. This position is pivotal in ensuring supply chain continuity, minimizing risks, and fostering innovation through market analysis, supplier relationship management, and advanced negotiation strategies. The manager will collaborate with cross-functional teams to synchronize procurement efforts with organizational objectives, enhance procurement processes, and strengthen supplier partnerships.

Key Responsibilities:
- Strategic Sourcing: Formulate and deploy comprehensive sourcing strategies for critical components that align with long-term business goals and maintain a competitive edge.
- Supplier Management: Cultivate and maintain robust relationships with key suppliers, conduct routine performance evaluations, and oversee contracts to ensure compliance with terms and mitigation of risks.
- Cost Optimization: Identify and implement cost-saving initiatives while ensuring quality standards are met.

May 4, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Join Cerebras Systems as a Senior WAN Network Engineer, where you will play a crucial role in designing and optimizing our wide area network infrastructure. You will work alongside a talented team to ensure high availability and performance of network services.

Mar 30, 2026
Taara
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA

About the Team

At Taara, we are dedicated to transforming the world of connectivity. Originating from X, Google's Moonshot Factory, our mission is to bridge the digital divide by delivering high-speed, affordable internet through innovative light-based technologies. Join us in revolutionizing wireless optical communication and photonics chip technologies as we expand our reach globally.

About the Role

We are seeking a Systems Software Test Engineer to spearhead the automation and validation efforts for our comprehensive systems. This role is pivotal, as it involves not just testing APIs, but also architecting frameworks to ensure our software effectively controls high-precision hardware in real-time. You will play a crucial role in connecting software development with hardware reliability, ensuring that each code commit enhances our network's performance in real-world scenarios.

Your Impact:
- Automated Framework Design: Develop and implement scalable automation frameworks to rigorously test embedded software and cloud integrations.
- Hardware-in-the-Loop Testing: Create and uphold HIL test benches, enabling real-time software interactions with physical hardware units under simulated field conditions.
- CI/CD Integration: Lead the incorporation of automated tests into the CI/CD pipeline, establishing quality gates for every software build.
- Comprehensive Data Path Validation: Collaborate with Cloud and Embedded teams to validate the data flow from the physical optical link to the backend monitoring dashboard.
- Performance Regression Testing: Develop automated regression suites to identify performance regressions impacting network throughput, latency, and stability.

Feb 5, 2026
Comtech LLC
Contract|On-site|Sunnyvale

Job Title: System Administrator
Location: Sunnyvale, CA

Key Responsibilities:
- Manage and administer Active Directory and Windows infrastructure.
- Oversee Office 365 and Exchange email administration.
- Diagnose and resolve issues related to server hardware and operating systems.
- Install, configure, and maintain server infrastructure and associated equipment.
- Support and maintain Data Center operations.

Essential Skills and Experience:
- Profound knowledge of Active Directory and Windows infrastructure.
- Proven experience in building Active Directory Domains and Forests.
- Experience upgrading Active Directory Domains (from 2008 to 2012).
- Extensive implementation experience with Windows infrastructure components such as ADFS, WSUS, CA, NPS, DNS, and DHCP; CA and NPS experience is considered a plus.
- Minimum of 5 years of experience with Windows Server operating systems.
- At least 3 years of experience with VMware vSphere.
- Proficient in PowerShell scripting.
- Strong understanding of DNS, DHCP, IIS, Group Policy, and WSUS.
- Hands-on experience with Dell server hardware, preferably Dell Blades, including DRAC and OpenManage.

Sep 7, 2017
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Join our innovative team at Cerebras Systems as a Manufacturing Linux Network Engineer. In this role, you will be responsible for developing, maintaining, and optimizing our Linux network systems to support cutting-edge manufacturing processes. You will collaborate with cross-functional teams to ensure seamless operations and efficient network performance.

May 1, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is at the forefront of AI technology, creating the largest AI chip in the world, 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides AI compute power equivalent to dozens of GPUs on a single chip, while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve industry-leading training and inference speeds, empowering machine learning practitioners to run extensive ML applications without the complexities of managing multiple GPUs or TPUs.

Our clientele includes leading model labs, global corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year collaboration with Cerebras to utilize 750 megawatts of scale, revolutionizing important workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud inference services by over tenfold. This remarkable speed enhancement is reshaping the user experience of AI applications, facilitating real-time iterations and amplifying intelligence through enhanced agentic computation.

About The Role

As a Compute / Server Platform Architect within the Cluster Architecture Team, you will be responsible for the server-side platform architecture that empowers Cerebras CS3-based AI clusters (for both training and inference), ensuring predictable performance, scalability, and reliability. Our accelerators are network-attached, making the x86 server fleet an integral component of the end-to-end system. This system supports critical runtime functions such as orchestration, prompt caching, and IO/control services, necessitating co-design with software to optimize token-level latency, throughput, and cost efficiency.
You will translate workload behaviors into requirements for CPU, memory, IO, PCIe, and host networking, lead platform evaluations with vendors, and provide technical direction through qualification and production adoption in close collaboration with other leaders and technical project managers.

Feb 18, 2026
Applied Intuition, Inc.
Full-time|$197.4K/yr - $292.4K/yr|On-site|Sunnyvale, California, United States

About Applied Intuition

Applied Intuition, Inc. is at the forefront of the physical AI revolution. Established in 2017 and currently valued at $15 billion, this dynamic Silicon Valley firm is dedicated to developing the digital frameworks essential to infuse intelligence into every moving machine globally. Serving major sectors such as automotive, defense, trucking, construction, mining, and agriculture, Applied Intuition focuses on three primary areas: advanced tools and infrastructure, robust operating systems, and cutting-edge autonomy solutions. Trusted by 18 of the top 20 global automakers, as well as the United States military and its partners, our innovative solutions are designed to deliver unparalleled physical intelligence. Headquartered in Sunnyvale, California, we also have offices strategically located in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Discover more at applied.co.

We pride ourselves on being an in-office company, expecting our employees to work primarily from their Applied Intuition office five days a week. We also value flexibility, allowing for responsible management of schedules, which may include occasional remote work, starting the day with morning meetings from home, or leaving early when family commitments arise.

About the Role

As a Security Architect at Applied Intuition, you will spearhead the design and implementation of cybersecurity architectures tailored for next-generation automotive systems. You will ensure adherence to ISO/SAE 21434 cybersecurity engineering standards and UN Regulations 155/156 requirements. Collaborating closely with embedded and application security engineers, you will establish comprehensive security controls encompassing silicon hardware, embedded systems, POSIX systems, networks, and cloud infrastructure for automotive platforms. This role demands extensive technical knowledge in automotive cybersecurity frameworks, hands-on experience with secure development lifecycle (SDL) processes, and the capability to translate regulatory requirements into actionable security architectures.

Your Responsibilities:
- Develop cybersecurity architectures that comply with ISO/SAE 21434 engineering requirements and UN R155 Cybersecurity Management System (CSMS) standards.

Jan 20, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is pioneering the field of artificial intelligence with the development of the world's largest AI chip, which is 56 times larger than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power equivalent to dozens of GPUs on a single chip, simplifying programming to a single device. This breakthrough allows Cerebras to achieve unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly run extensive ML applications without the complexity of managing numerous GPUs or TPUs.

Our clientele includes leading model labs, global corporations, and cutting-edge AI-native startups. Cerebras recently formed a transformative multi-year partnership with OpenAI, focusing on deploying 750 megawatts of scale to enhance critical workloads through ultra-fast inference.

Thanks to our unique wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over ten times. This dramatic increase in speed is revolutionizing the user experience of AI applications, facilitating real-time iterations and enhancing intelligence through additional agentic computation.

As an Infrastructure Hardware Technical Program Manager (Server and Network Systems) within the Cluster Architecture Team, you will oversee the comprehensive delivery of server and network platform programs across Cerebras CS-3-based AI clusters. Your responsibilities will range from requirements gathering and vendor selection to lab bring-up, qualification, and production rollout. You will act as the execution lead for multi-team programs involving OEM/ODM partners, component vendors, internal software/runtime teams, architects, validation/QA, and deployment/operations.

This position requires a strong technical background; you should grasp server, network, and system-level trade-offs to effectively conduct technical reviews, keep programs aligned with real-world constraints, and maintain clear decision documentation. Collaborating closely with Compute, Server, and Network Platform Architects, you will ensure detailed technical direction and approval. Additionally, you will work to establish mutual understanding with our rack/elevations and physical data center design partners to ensure server and network modifications are implemented smoothly in real deployments (without directly managing physical data center design).

Feb 25, 2026
Kaseya
Full-time|On-site|Sunnyvale, CA

Kaseya seeks an Executive Assistant in Sunnyvale, CA to support its executive team. This position plays a key role in handling administrative work that helps daily operations run efficiently.

Key responsibilities:
- Oversee complex calendars and schedules for executives
- Arrange meetings, manage logistics, and prepare relevant materials
- Act as a communication link between executives and other teams
- Assist with coordinating tasks and projects as needed

What we look for:
- Strong organizational skills
- Ability to anticipate needs and show initiative
- Comfort with shifting priorities
- Experience supporting executives or senior leaders is considered a plus

This role is located on site in Sunnyvale, CA.

Apr 24, 2026
