About the job
Join our innovative team at Nace.ai as we push the boundaries of artificial intelligence through cutting-edge research in large language models (LLMs) and vision-language models (VLMs). We are seeking a talented AI Research Engineer with a strong focus on adaptive learning methodologies, including meta-learning and hypernetworks. In this role, you will design and implement advanced architectures for dynamic model adaptation, enhance model reasoning capabilities, and share insights effectively with both research and engineering teams.
Essential Qualifications:
Demonstrated experience with LLMs or VLMs in both research and production environments.
Strong foundational knowledge in Natural Language Processing, Machine Learning, or related fields, particularly in language model development.
A proven history of tackling complex challenges in language understanding and generation, employing rigorous quantitative methods.
Exceptional communication skills for conveying research findings to varied technical audiences.
Proficiency in Python and familiarity with deep learning frameworks such as PyTorch, JAX, or TensorFlow, alongside experience in distributed training and model optimization.
Desirable Qualifications:
PhD in Computer Science, Computational Linguistics, or a closely related discipline with an emphasis on language models and adaptive learning frameworks.
Substantial research and engineering background with LLMs/VLMs, particularly in meta-learning or parameter-efficient adaptation, supported by grants, fellowships, patents, or contributions to open-source initiatives.
First-author publications at recognized peer-reviewed conferences (ACL, EMNLP, NeurIPS, ICML, ICLR) or in journals focusing on language models, meta-learning, hypernetworks, or adaptive AI.
Preferred Technical Expertise:
In-depth research knowledge in LLM reasoning, hypernetworks, multi-task learning, meta-learning, and the design of innovative LLM adaptation techniques, including online continual learning.
