About the job
Prime Intellect develops an open superintelligence framework, supporting advanced agentic models and the infrastructure needed to create, train, and deploy them. The company’s mission is to unify global computational resources under a single control plane, integrating a full reinforcement learning (RL) post-training stack. The platform includes secure sandboxes, verifiable evaluations, environments, and an asynchronous RL trainer. Researchers, startups, and enterprises use Prime Intellect to run end-to-end RL at scale, adapting models for practical deployment.
Prime Intellect has raised $20 million in funding, including a recent $15 million round. Investors include Founders Fund, Menlo Ventures, and individuals such as Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, and Emad Mostaque.
Role overview
The Head of Compute leads all aspects of GPU resource management at Prime Intellect from the San Francisco office. This function covers sourcing, economics, contracting, and the strategic direction for compute resources, which are critical to model training, serving, and sales. Compute is both the company’s core product and the main constraint in the open AI ecosystem. The role exists to keep Prime Intellect and the broader open ecosystem competitive in a landscape where every major lab contends for the same GPUs.
What you will do
- Direct sourcing and procurement of GPU resources for model training and serving
- Manage compute economics and contracts, balancing long-term commitments with spot market activity
- Shape Prime Intellect’s strategic position in the global compute market
- Identify and prioritize key geographic compute hubs and hardware generations for broad access
- Collaborate with research and engineering teams to design the compute layer for the open model ecosystem
- Build and maintain commercial relationships with neocloud providers and industry partners
- Secure early access to new accelerator hardware and develop the operational framework for sustained compute advantage
- Decide what to train, where, and under which cost structures
What success looks like
- Modeling unit economics for multi-year GPU commitments as the market evolves
- Turning research needs into actionable compute strategies
- Negotiating significant contracts for reserved resources
- Working with neocloud leaders and internal teams to advance open post-training
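To give a flavor of the first bullet above, here is a minimal sketch of the reserved-versus-spot tradeoff behind GPU unit economics. All rates and the utilization figures are hypothetical placeholders, not Prime Intellect numbers; the point is only that reserved capacity is paid for around the clock, so its effective cost per used GPU-hour depends on utilization.

```python
# Illustrative sketch with hypothetical numbers: at what utilization does a
# multi-year reserved GPU commitment beat buying equivalent hours on spot?

RESERVED_RATE = 1.60   # $/GPU-hour under a long-term commitment (hypothetical)
SPOT_RATE = 2.40       # $/GPU-hour average spot price (hypothetical)

def reserved_cost_per_used_hour(utilization: float) -> float:
    """Reserved capacity is billed 24/7; idle hours inflate the effective rate."""
    return RESERVED_RATE / utilization

def breakeven_utilization() -> float:
    """Utilization above which the reserved commitment is cheaper than spot."""
    return RESERVED_RATE / SPOT_RATE

if __name__ == "__main__":
    for u in (0.50, 0.67, 0.90):
        print(f"utilization {u:.0%}: reserved costs "
              f"${reserved_cost_per_used_hour(u):.2f}/used GPU-hour "
              f"vs spot ${SPOT_RATE:.2f}")
    print(f"break-even utilization: {breakeven_utilization():.0%}")
```

With these placeholder rates, a commitment only pays off above roughly two-thirds utilization, which is why balancing long-term commitments against spot activity (as in the responsibilities above) is a modeling problem rather than a fixed rule.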
Location
This position is based in San Francisco.
