Senior Mechanical Engineer at Cerebras Systems, Sunnyvale, CA jobs in Sunnyvale – Browse 1,131 openings on RoboApply Jobs

Senior Mechanical Engineer at Cerebras Systems, Sunnyvale, CA jobs in Sunnyvale

Open roles matching “Senior Mechanical Engineer at Cerebras Systems, Sunnyvale, CA,” filtered to the Sunnyvale area. 1,131 active listings on RoboApply Jobs.

1,131 jobs found

1 - 20 of 1,131 Jobs
Cerebras Systems
Full-time|$190K/yr - $230K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, which is 56 times larger than traditional GPUs. Our pioneering wafer-scale architecture delivers exceptional AI computational power equivalent to dozens of GPUs on a single chip, offering users unparalleled simplicity and efficiency. This unique approach enables us t…

Feb 17, 2026
Cerebras Systems
Full-time|$150K/yr - $260K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, engineering the world’s largest AI chip, which is 56 times larger than conventional GPUs. Our innovative wafer-scale architecture enables unprecedented AI computational power, equivalent to dozens of GPUs operating as a single unit, thereby simplifying programming for machine learning tasks. This revolutionary approach not only provides unmatched training and inference speeds but also allows users to execute large-scale machine learning applications without the complexity of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model laboratories, global corporations, and pioneering AI startups. Recently, OpenAI announced a multi-year collaboration with Cerebras, deploying 750 megawatts of processing capacity that revolutionizes critical workloads through ultra-fast inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, outperforming GPU-based hyperscale cloud services by over 10 times. This significant increase in speed is reshaping the user experience of AI applications, facilitating real-time iterations and enhancing intelligence through advanced computational capabilities.

Feb 19, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering the realm of artificial intelligence with the world’s largest AI chip, boasting a size 56 times greater than traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while maintaining the programming simplicity of a single device. This unique approach enables us to achieve unparalleled training and inference speeds, allowing machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and avant-garde AI-native startups. Recently, OpenAI formed a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing essential workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over ten times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and augmenting intelligence via enhanced agentic computation.

About The Role
We are seeking a Head of IT to establish and manage the internal technology infrastructure of a rapidly scaling organization operating at the forefront of AI hardware and software. This is not a conventional IT leadership position; it is a build-and-scale opportunity for someone who thrives in a fast-moving environment.

You will oversee the systems that Cerebras employees, contractors, and executives depend on daily, including laptops, identity management, SaaS, networking, collaboration tools, endpoint security, internal support, and the essential IT controls expected of a company at our maturity. You will ensure that our highly technical, fast-paced engineering workforce remains unimpeded while fortifying the environment to meet the standards of a company at our stage, including SOX-grade ITGCs and SOC 2 compliance.

Apr 9, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI technology, creating the world's largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip while simplifying programming to the ease of a single device. This groundbreaking approach enables us to achieve unparalleled training and inference speeds, empowering machine learning professionals to seamlessly execute large-scale ML applications without the complexities of managing numerous GPUs or TPUs.

Cerebras serves an impressive clientele that includes top model laboratories, multinational corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, deploying 750 megawatts of scale to revolutionize critical workloads with ultra-high-speed inference.

Our wafer-scale architecture also powers the fastest Generative AI inference solution globally, exceeding GPU-based hyperscale cloud inference services by over 10 times. This significant enhancement in speed is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through additional computational capabilities.

About The Role
We are searching for a talented Compiler Engineer to contribute to the design and implementation of new features within our CSL (Cerebras Software Language) compiler. CSL is a Zig-like programming language used both internally and externally to program our Wafer-Scale Engine (WSE). The language offers high-level abstractions to simplify programming the WSE while providing low-level access to hardware internals for optimal hardware utilization. The compiler leverages MLIR infrastructure to translate CSL into LLVM IR, which is further compiled by a dedicated LLVM mid-end/backend into executable files.

Responsibilities
- Design and implement front-end language features, semantic analysis, intermediate representations, and lowering pipelines from CSL to MLIR dialect(s) and LLVM IR.
- Develop and enhance abstraction layers between the CSL language and the underlying hardware.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is pioneering the future of artificial intelligence with the development of the world's largest AI chip, which is an astonishing 56 times bigger than conventional GPUs. Our innovative wafer-scale architecture enables AI computational power equivalent to dozens of GPUs on a single chip, while maintaining programming simplicity akin to that of a single device. This state-of-the-art approach allows us to deliver unparalleled training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, multinational corporations, and innovative AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to harness 750 megawatts of scale, revolutionizing critical workloads with ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud inference services. This remarkable enhancement in speed is transforming the user experience for AI applications, unlocking real-time iteration and enriching intelligence through enhanced computational capabilities.

About The Role
As a Network Architect on the Cluster Architecture Team, you will collaborate closely with vendors, internal networking teams, and industry experts to create top-tier interconnect architecture for both current and future generations of Cerebras AI clusters. Your responsibilities will include developing proof-of-concept designs for new network features that promote a resilient and reliable network tailored for AI workloads. This role demands cross-functional collaboration and engagement with a variety of hardware components, including network devices and the Wafer-Scale Engine, as well as software across multiple layers of the stack, from host-side networking to cluster-level coordination. A strong understanding of network monitoring systems and debugging methodologies is essential.

Responsibilities
- Design AI/ML and HPC clusters.
- Identify and mitigate performance or efficiency bottlenecks, ensuring optimal resource utilization, low latency, and high-throughput communication.
- Lead technical projects involving multiple teams and diverse software and hardware components to realize advanced network solutions.

Feb 17, 2026
Cerebras Systems
Full-time|$150K/yr - $270K/yr|On-site|Sunnyvale, CA

Cerebras Systems is revolutionizing the world of artificial intelligence with our groundbreaking wafer-scale chip, which is 56 times larger than traditional GPUs. Our innovative design provides unparalleled AI compute power, allowing users to run extensive machine learning applications effortlessly, without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, major global enterprises, and pioneering AI startups. Recently, OpenAI announced a multi-year partnership with Cerebras to deploy 750 megawatts of transformative computing power, enabling ultra-fast inference for critical workloads.

With our advanced wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud services. This leap in performance is redefining the user experience for AI applications, facilitating real-time iteration and enhancing intelligence through additional agentic computation.

About The Role
Join our dedicated physical design team as a 3D Physical Design Engineer, where you will focus on the design and analysis of 3D integrated products. This role requires a blend of traditional ASIC/SoC physical design expertise along with skills in packaging, power management, clock distribution, and thermal analysis. Collaborating closely with the architecture and RTL teams, you will contribute to research and development efforts on innovative concepts for 3D integration.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world’s largest AI chip, which is 56 times the size of conventional GPUs. Our unique wafer-scale architecture not only provides the power equivalent to dozens of GPUs on a single chip but does so with the simplicity of programming a single device. This cutting-edge approach allows us to achieve unparalleled training and inference speeds, enabling machine learning professionals to seamlessly execute large-scale ML applications without the complexity of managing multiple GPUs or TPUs.

We proudly serve a diverse clientele, including leading model labs, renowned global enterprises, and pioneering AI-native startups. Notably, OpenAI has recently announced a multi-year partnership with us to leverage our technology in deploying 750 megawatts of scale, revolutionizing key workloads with ultra-fast inference capabilities.

Our groundbreaking wafer-scale architecture empowers Cerebras Inference to deliver the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than traditional GPU-based hyperscale cloud inference services. This significant leap in speed redefines the user experience of AI applications, facilitating real-time iteration and enhancing intelligence through advanced computation.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than traditional GPUs. Our unique wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming while providing unparalleled training and inference speeds. This revolutionary approach enables users to run extensive machine learning applications effortlessly, eliminating the complexity of managing multiple GPUs or TPUs.

Cerebras serves a diverse clientele, including leading model labs, major global enterprises, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to deploy 750 megawatts of scale that will redefine key workloads with ultra-high-speed inference.

Our groundbreaking wafer-scale architecture ensures that Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over ten times faster than GPU-based hyperscale cloud services. This significant enhancement in performance is transforming the user experience of AI applications, facilitating real-time iteration and boosting intelligence through enhanced computational capabilities.

About The Role
We are seeking a Senior Performance Analyst to join our dynamic Product team. As a specialist in state-of-the-art inference performance, you will be the go-to expert on how Cerebras measures up against alternative inference providers in terms of pricing and performance. This role combines performance benchmarking from first principles with competitive intelligence. The position revolves around two key pillars:

Performance Benchmarking
You will develop, execute, and sustain reproducible benchmarks that assess Cerebras inference performance for actual customer workloads. This includes metrics such as tokens per second, time to first token, latency under concurrency, and total cost of ownership (TCO).

Competitive Analysis
You will analyze market trends and competitor offerings to position Cerebras effectively within the inference landscape.

Apr 13, 2026
Cerebras Systems
Full-time|$175K/yr - $275K/yr|On-site|Sunnyvale, CA

Cerebras Systems is a pioneer in AI technology, renowned for creating the world’s largest AI chip, which is an astounding 56 times larger than traditional GPUs. Our innovative wafer-scale architecture provides unparalleled AI computing capabilities equivalent to dozens of GPUs on a single chip, while ensuring the programming simplicity of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, allowing machine learning practitioners to seamlessly execute large-scale ML applications without the complexity of managing vast arrays of GPUs or TPUs.

Cerebras' impressive clientele includes leading model laboratories, global enterprises, and cutting-edge AI-focused startups. Recently, OpenAI announced a multi-year partnership with Cerebras, enhancing transformative workloads through ultra-high-speed inference utilizing 750 megawatts of scale.

With our groundbreaking wafer-scale architecture, Cerebras Inference delivers the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This significant speed enhancement is revolutionizing the user experience in AI applications, enabling real-time iterations and boosting intelligence through advanced computational capabilities.

About The Role
As the Lead RTL Design Engineer, you will play a pivotal role in our exceptional team responsible for designing and developing the next iterations of the Cerebras Wafer Scale Engine (WSE). This position demands extensive expertise in RTL design and integration, with a strong emphasis on delivering high-performance, power-efficient, and scalable solutions. Additionally, you will oversee collaboration with external ASIC vendors and work closely with design verification, physical design, software, and system teams to translate innovative semiconductor architectures from concept to production, addressing the unique challenges associated with building WSE systems.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems is revolutionizing the AI industry by developing the world’s largest AI chip, 56 times the size of traditional GPUs. Our innovative wafer-scale architecture delivers the computational power of dozens of GPUs on a single chip, while simplifying programming to the ease of a single device. This unique approach enables Cerebras to achieve unmatched training and inference speeds, empowering machine learning professionals to seamlessly operate large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

Our esteemed clientele includes leading model labs, global corporations, and pioneering AI-native startups. Recently, OpenAI announced a multi-year partnership with Cerebras, aiming to leverage 750 megawatts of scale to transform critical workloads through ultra-high-speed inference.

With our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, achieving speeds over 10 times faster than GPU-based hyperscale cloud inference services. This unprecedented speed enhances the user experience of AI applications, enabling real-time iterations and increased intelligence through advanced computation capabilities.

About The Role
The AI Infrastructure Operations Engineer (SiteOps) is an entry-level position focusing on the deployment, initialization, monitoring, and first-response troubleshooting of Cerebras AI infrastructure within data center settings. This role plays a critical part in supporting Cerebras systems, cluster server hardware, networking hardware, and monitoring tools. Your responsibilities will include ensuring the reliable operation and scalability of Cerebras AI clusters by executing established hardware initialization and validation protocols, monitoring telemetry data, performing initial troubleshooting, and escalating issues according to predefined workflows.

Feb 17, 2026
Cerebras Systems
Full-time|On-site|Sunnyvale, CA or Toronto, Canada

Cerebras Systems revolutionizes the AI landscape with the creation of the world’s largest AI chip, a remarkable 56 times larger than conventional GPUs. Our innovative wafer-scale architecture delivers the computational power of numerous GPUs on a single chip, simplifying programming efforts for users. This unique approach enables Cerebras to achieve unparalleled training and inference speeds, empowering machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing hundreds of GPUs or TPUs.

Our clientele includes leading model laboratories, global enterprises, and pioneering AI-native startups. Notably, OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, significantly enhancing key workloads with ultra-high-speed inference.

Thanks to our groundbreaking wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, exceeding the performance of GPU-based hyperscale cloud inference services by over ten times. This significant speed enhancement transforms the user experience of AI applications, facilitating real-time iterations and augmented intelligence through additional agentic computation.

About The Role
We are on the lookout for a highly skilled and experienced AI Infrastructure Operations Engineer to oversee and manage our state-of-the-art machine learning compute clusters. In this role, you will have the unique opportunity to work with the world’s largest computer chip, the Wafer-Scale Engine (WSE), and the systems that leverage its extraordinary power. You will play a pivotal role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our expanding AI initiatives. This position requires an in-depth understanding of Linux-based systems, expertise in containerization technologies, and experience in monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive problem-solver with a strong background in large-scale compute infrastructure who is reliable and committed to customer success.

Feb 17, 2026
Taara
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA

About the Team
At Taara, we are dedicated to transforming the world of connectivity. Originating from X, Google's Moonshot Factory, our mission is to bridge the digital divide by delivering high-speed, affordable internet through innovative light-based technologies. Join us in revolutionizing wireless optical communication and photonics chip technologies as we expand our reach globally.

About the Role
We are seeking a Systems Software Test Engineer to spearhead the automation and validation efforts for our comprehensive systems. This role is pivotal, as it involves not just testing APIs, but also architecting frameworks to ensure our software effectively controls high-precision hardware in real time. You will play a crucial role in connecting software development with hardware reliability, ensuring that each code commit enhances our network's performance in real-world scenarios.

Your Impact:
- Automated Framework Design: Develop and implement scalable automation frameworks to rigorously test embedded software and cloud integrations.
- Hardware-in-the-Loop Testing: Create and maintain HIL test benches, enabling real-time software interactions with physical hardware units under simulated field conditions.
- CI/CD Integration: Lead the incorporation of automated tests into the CI/CD pipeline, establishing quality gates for every software build.
- Comprehensive Data Path Validation: Collaborate with Cloud and Embedded teams to validate the data flow from the physical optical link to the backend monitoring dashboard.
- Performance Regression Testing: Develop automated regression suites to identify performance regressions impacting network throughput, latency, and stability.

Feb 5, 2026
Cerebras Systems
Full-time|$200K/yr - $240K/yr|On-site|Sunnyvale, CA

Cerebras Systems is at the forefront of AI innovation, creating the world's largest AI chip, 56 times larger than conventional GPUs. Our unique wafer-scale architecture combines the computational power of numerous GPUs into a single chip, offering unparalleled programming simplicity. This allows us to deliver exceptional training and inference speeds, enabling machine learning practitioners to seamlessly execute large-scale ML applications without the complexities of managing multiple GPUs or TPUs.

We are proud to serve a diverse clientele, including leading model laboratories, global corporations, and pioneering AI-native startups. Notably, OpenAI has recently partnered with Cerebras to leverage our technology, deploying 750 megawatts of scale to revolutionize key workloads through ultra-high-speed inference.

With our cutting-edge wafer-scale architecture, Cerebras Inference provides the fastest Generative AI inference solution globally, boasting speeds over 10 times quicker than GPU-based hyperscale cloud inference services. This remarkable enhancement transforms the user experience of AI applications, facilitating real-time iteration and amplifying intelligence through enhanced computational capabilities.

Job Summary
The Sourcing Manager for Critical Components will lead the development and execution of global sourcing strategies aimed at securing high-quality and cost-effective critical components and materials. This position is pivotal in ensuring supply chain continuity, minimizing risks, and fostering innovation through market analysis, supplier relationship management, and advanced negotiation strategies. The manager will collaborate with cross-functional teams to synchronize procurement efforts with organizational objectives, enhance procurement processes, and strengthen supplier partnerships.

Key Responsibilities:
- Strategic Sourcing: Formulate and deploy comprehensive sourcing strategies for critical components that align with long-term business goals and maintain a competitive edge.
- Supplier Management: Cultivate and maintain robust relationships with key suppliers, conduct routine performance evaluations, and oversee contracts to ensure compliance with terms and mitigation of risks.
- Cost Optimization: Identify and implement cost-saving initiatives while ensuring quality standards are met.

May 4, 2026
Ceribell
Full-time|On-site|Sunnyvale

Role overview
The Staff Mechanical Engineer at Ceribell plays a key part in shaping and advancing the mechanical systems behind the company’s technology. This position involves hands-on design and development work, as well as ongoing improvement of existing systems. The role requires both creative problem-solving and a strong technical foundation, working alongside a team of experienced professionals in Sunnyvale.

What you will do
- Design mechanical components and systems that are integral to Ceribell’s products
- Develop and refine mechanical solutions to align with project objectives
- Work closely with engineers and other team members to address technical challenges
- Apply mechanical expertise to support innovation across projects

Location
This role is based in Sunnyvale.

Apr 22, 2026
Intuitive Surgical, Inc.
Full-time|On-site|Sunnyvale

Join Intuitive Surgical, a pioneer in robotic-assisted surgery, as a Mechanical Engineer. In this role, you'll design and develop innovative mechanical systems that are integral to our cutting-edge surgical technology. Collaborate with a team of talented engineers and contribute to the advancement of minimally invasive surgery, enhancing patients' lives worldwide.

May 1, 2026
TaaraConnect
Full-time|$160K/yr - $210K/yr|On-site|Sunnyvale, CA

About Our Team:
At Taara, a visionary initiative from X, Google's Moonshot Factory, we are dedicated to connecting the billions who currently lack reliable and affordable internet access. Our innovative approach harnesses the power of light to provide faster, more cost-effective, and dependable connectivity solutions. Join us in revolutionizing the future of wireless optical communication and photonics chip technology as we strive to bridge the digital divide for communities around the globe.

Role Overview:
We are searching for a talented Linux System Software Engineer to pioneer the development of the next-generation operating system for wireless broadband networks. The successful candidate will bring extensive experience building system-level software in a Linux environment, be adept in multi-threaded, state-machine, and event-driven programming, and be familiar with network monitoring protocols (such as gRPC, SNMP, and OpenTelemetry). Proficiency in integrating with cloud-based back-end systems, along with strong skills in data structures, algorithms, and programming languages including Golang, Python, and C/C++ for development and unit/regression testing, will be crucial for this role.

Your Impact:
- Design and implement robust system-level software applications for IoT/network devices.
- Collaborate with cloud/backend engineers to develop visualization and troubleshooting tools.
- Stay informed about emerging IoT device trends and technologies.
- Work closely with engineers and operators to establish telemetry and monitoring solutions.
- Diagnose and resolve challenges encountered during large-scale field deployments.

Qualifications:
- Bachelor's degree in Computer Science, Computer Networking, Electrical Engineering, or a related discipline.
- 5+ years of experience in application development on Linux-based operating systems, particularly within telecommunications, ISP, or networking sectors.
- Proficiency in Golang, Python, and C/C++ for both development and testing.
- Familiarity with streaming and monitoring protocols and strategies for service provider networks (e.g., gRPC, OpenTelemetry, SNMP).
- Experience with multi-threaded programming techniques, state-machine design, and event-driven programming.

Dec 18, 2025
Intuitive
Full-time|On-site|Sunnyvale

Intuitive is hiring a Manufacturing Mechanical Engineer in Sunnyvale. This position focuses on developing and improving manufacturing processes for medical devices. The work supports both product quality and production efficiency.

Role overview
The Manufacturing Mechanical Engineer will contribute to process development and optimization within the production team. Collaboration with colleagues from other disciplines is a key part of this role.

Key responsibilities
- Support the development and refinement of manufacturing processes for medical devices
- Work to maintain and improve product quality and production efficiency
- Collaborate with cross-functional teams throughout the production cycle

Location
This role is based in Sunnyvale.

Apr 28, 2026
DigitalFish
Full-time|On-site|Sunnyvale, CA

At DigitalFish, we are committed to empowering our clients by delivering cutting-edge technologies that revolutionize digital media creation and consumption for millions of users. We collaborate with top-tier digital media organizations, positioning ourselves at the forefront of their initiatives to develop innovative platforms and immersive experiences. Our esteemed clients include industry giants such as Apple, Google, Meta, Disney, DreamWorks, Activision, Technicolor, ESPN, LEGO, NASA, and many more.

Your Role
As a vital member of our agile team, you will be instrumental in advancing imaging technology for camera capture and crafting augmented reality experiences that enrich human perception. Your projects will involve camera color processing, enhancing features related to human color perception, and utilizing tools like Unity and Blender for object rendering.

Oct 10, 2025
Comtech LLC
Contract|On-site|Sunnyvale

Job Title: System Administrator
Location: Sunnyvale, CA

Key Responsibilities:
- Manage and administer Active Directory and Windows infrastructure.
- Oversee Office 365 and Exchange email administration.
- Diagnose and resolve issues related to server hardware and operating systems.
- Install, configure, and maintain server infrastructure and associated equipment.
- Support and maintain Data Center operations.

Essential Skills and Experience:
- Profound knowledge of Active Directory and Windows infrastructure.
- Proven experience building Active Directory Domains and Forests.
- Experience upgrading Active Directory Domains (from 2008 to 2012).
- Extensive implementation experience with Windows infrastructure components such as ADFS, WSUS, CA, NPS, DNS, and DHCP; CA and NPS experience is considered a plus.
- Minimum of 5 years of experience with Windows Server operating systems.
- At least 3 years of experience with VMware vSphere.
- Proficient in PowerShell scripting.
- Strong understanding of DNS, DHCP, IIS, Group Policy, and WSUS.
- Hands-on experience with Dell server hardware, preferably Dell Blades, including DRAC and OpenManage.

Sep 7, 2017
Intuitive Surgical, Inc.

Senior Mechanical Engineer

Full-time|On-site|Sunnyvale

Join our innovative team at Intuitive Surgical, Inc. as a Senior Mechanical Engineer, where your expertise will drive the development of cutting-edge robotic surgical systems. You will be instrumental in designing, analyzing, and testing mechanical components that enhance patient outcomes and surgical precision.

Mar 24, 2026
