About Granica
Granica is a pioneering AI research and infrastructure firm dedicated to developing dependable and steerable representations for enterprise data.
We cultivate trust through our innovative product, Crunch, which serves as a policy-driven health layer ensuring that extensive tabular datasets remain efficient, reliable, and reversible. Building upon this foundation, we are focused on creating Large Tabular Models—advanced systems designed to learn cross-column and relational structures, delivering trustworthy answers and automation with built-in provenance and governance.
The Role
As a member of the Applied AI Research Team, you will play a critical role in our mission. You will transform theoretical insights from fundamental research into practical algorithms, optimized pipelines, and production-ready systems capable of processing petabytes of structured enterprise data.
This role demands a high level of ownership from engineers who can think like researchers and build like systems engineers. You will turn theory into measurable performance gains and help lay the groundwork for structured AI.
What You’ll Do
Transform research into practical systems
Convert foundational concepts from Granica Research and Prof. Andrea Montanari’s group into scalable algorithms and experimental prototypes.
Develop evaluation harnesses, metrics, and datasets that surface genuine signal from research ideas.
Establish and refine metrics that will gauge progress in structured AI.
Innovate and enhance algorithms for structured AI
Create efficient learning methods tailored for relational, tabular, graph, and enterprise data.
Prototype representation learning architectures and compression-aware models for large-scale structured information.
Construct high-performance learning pipelines
Implement fast training and inference loops using PyTorch, JAX, or custom kernels.
Optimize memory, compute, and data-movement paths for cost-effectiveness, latency, and throughput.
Integrate symbolic, relational, and neural components

