About the job
About Voltai
Voltai is at the forefront of developing advanced world models and intelligent agents designed to learn, evaluate, plan, experiment, and interact with the physical environment. Our initial focus is on understanding and innovating in hardware, specifically electronic systems and semiconductors, where artificial intelligence can surpass human cognitive capabilities.
About Our Team
Our team is backed by prominent investors from Silicon Valley, Stanford University, and high-profile leaders from companies like Google, AMD, Broadcom, and Marvell. The team includes former Stanford faculty, researchers from SAIL, medalists from international Olympiads, and executives from leading tech firms, including Synopsys and GlobalFoundries, alongside notable figures from the U.S. government.
Key Responsibilities
Develop and optimize MPI+CUDA PDE solvers for electrostatics, charge transport, and electromagnetic field problems on complex 3D IC geometries, running on multi-node GPU clusters.
Enhance and extend AMG preconditioners, Krylov solvers, and mesh pipelines, ensuring performance and correctness at scale.
Construct and train neural operators (FNO, DeepONet, GNO, and variants) as high-fidelity surrogates for PDE-based field solvers.
Design simulation pipelines that yield training data for neural operator models, addressing sampling strategies, mesh management, and physical consistency checks.
Conduct thorough validations, including analytical solutions, published benchmarks, and cross-validation between field solvers and learned surrogates.
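To illustrate the kind of solver validation described above, the sketch below checks a minimal conjugate-gradient (Krylov) solve of a 1D Poisson problem against its analytical solution. This is an illustrative NumPy stand-in, not Voltai's production MPI+CUDA stack, and all names here (`conjugate_gradient`, grid sizes, tolerances) are hypothetical choices for the example:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Minimal CG for a symmetric positive-definite matrix (dense, for illustration)."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson benchmark: -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0,
# whose analytical solution is u(x) = sin(pi x).
n = 99                       # interior grid points
h = 1.0 / (n + 1)
x_grid = np.linspace(h, 1.0 - h, n)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # second-order finite differences
b = np.pi**2 * np.sin(np.pi * x_grid)

u_num = conjugate_gradient(A, b)
u_exact = np.sin(np.pi * x_grid)
max_err = np.max(np.abs(u_num - u_exact))    # expect O(h^2) discretization error
```

The same pattern (solve, then compare against an analytical solution or a published benchmark) scales up to the FEM/FVM/BEM solvers and learned surrogates mentioned above.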
Qualifications
PhD in Computational Physics, Applied Mathematics, Computational Engineering, or a closely related discipline.
Extensive knowledge of numerical PDE methods: FEM, FVM, or BEM, including weak formulations, quadrature, convergence, and error analysis.
Proficiency in C++ and CUDA, including writing and optimizing kernels, managing the memory hierarchy, and multi-GPU programming.
Experience with multi-node HPC: MPI, domain decomposition, collective communication, and scaling strategies.
Deep understanding of sparse linear algebra, including Krylov methods, algebraic multigrid, and preconditioning techniques.
Hands-on experience with neural operators (FNO, DeepONet, or similar), encompassing training, architecture design, and evaluation on PDE datasets.
Solid grasp of how AI methods are applied in scientific research.
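For context on the neural-operator qualifications above: the core building block of an FNO is a spectral convolution, which transforms the input to Fourier space, keeps a truncated set of low-frequency modes, multiplies them by learned complex weights, and transforms back. A minimal NumPy sketch of that one layer (illustrative only; names and sizes are hypothetical, and a real FNO adds lifting/projection layers, a pointwise linear path, and nonlinearities):

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """One FNO-style spectral convolution on a 1D real signal.

    x: real array of shape (n,); weights: complex array of shape (modes,).
    """
    xf = np.fft.rfft(x)              # to Fourier space
    out = np.zeros_like(xf)          # truncate: keep only the lowest `modes` modes
    out[:modes] = xf[:modes] * weights
    return np.fft.irfft(out, n=x.size)   # back to physical space (real-valued)

rng = np.random.default_rng(0)
n, modes = 64, 8
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)   # "learned" weights
x = np.sin(2 * np.pi * np.arange(n) / n)                   # sample input signal
y = spectral_conv_1d(x, w, modes)
```

Because the layer acts on Fourier coefficients rather than grid values, the learned weights are resolution-independent, which is what lets neural operators act as surrogates across discretizations.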
