Role Overview
As an Applied Scientist, you'll be pushing the frontier of what large language models can do. You will train frontier-scale, task-specific models for our customers that power new capabilities and unlock workflows that were not possible a few months ago. Your work will sit at the intersection of cutting-edge research and real-world impact.
In this role, you'll fine-tune LLMs for blazing speed, efficiency, and performance across high-value tasks such as search and code generation. You'll explore and develop novel algorithmic ideas and training techniques that push model quality and training stability forward.
Ideal Profile
- Past LLM Fine-tuning Experience. You have fine-tuned LLMs for specific tasks (e.g. code generation) using libraries like TRL, verl, or slime; the first sketch after this list gives a flavor of that work.
- Algorithmic Fundamentals. You have strong intuition for the internals of LLMs and for different fine-tuning techniques. You understand how tokenizers work, the difference between SFT and RL, and how the various GRPO variants differ from one another. You also understand what metrics like KL divergence and entropy mean and how to debug training behavior with them; the second sketch after this list shows the idea.
- Data Science Fundamentals. You know how to design clean train/validation/test splits and construct reliable evaluation sets and metrics that reflect real user tasks. You can diagnose whether performance changes are due to data, training dynamics, or evaluation noise, and you're systematic about experiment design and measurement.
- Strong product sense around LLMs. You evaluate whether a model is actually getting better at what customers care about. You think in edge cases and failure modes.
- Good communicator. You can explain technical RL concepts in simple terms to technical audiences without an ML background.
- Strong ownership. You don't wait for permission: you figure out what the customer needs and make it happen.
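To make the first bullet concrete, here is a minimal supervised fine-tuning sketch using TRL's SFTTrainer. The model and dataset names are placeholders chosen for illustration, not what we run in production:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset and base model, chosen purely for illustration.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",               # any causal LM checkpoint works
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output"),
)
trainer.train()
```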
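And for the second bullet, a sketch of two metrics you would watch during RL fine-tuning: per-token KL divergence against a frozen reference model, and policy entropy. This is plain PyTorch; the tensor names are hypothetical:

```python
import torch
import torch.nn.functional as F

def kl_and_entropy(policy_logits: torch.Tensor, ref_logits: torch.Tensor):
    """Mean per-token KL(policy || reference) and policy entropy.

    Both inputs are (batch, seq_len, vocab) logits.
    """
    policy_logp = F.log_softmax(policy_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    probs = policy_logp.exp()
    # A KL spike means the policy is drifting far from the reference,
    # often a sign of reward hacking or an overly aggressive learning rate.
    kl = (probs * (policy_logp - ref_logp)).sum(dim=-1)
    # Entropy collapsing toward zero means generations are becoming
    # near-deterministic, which can stall exploration in RL.
    entropy = -(probs * policy_logp).sum(dim=-1)
    return kl.mean(), entropy.mean()
```

In practice you'd log both every step and investigate sudden jumps before they derail a run.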