Posted on 2026/02/14
AI Researcher, Core ML (Turbo)
Together AI
San Francisco, CA, United States
About the Role
The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training / RL systems.
We build and operate the systems behind Together's API, including high-performance inference and RL/post-training engines that can run at production scale.
Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives).
This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack.
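To make that interaction concrete, here is a minimal sketch of an asynchronous RL pipeline. It is illustrative only, not Together's actual stack, and every class and function name in it is a hypothetical stand-in: rollout workers generate completions through an inference engine while the trainer consumes them through a bounded queue, and the batch sizes and queue depth are exactly the kind of knobs described above.

```python
import queue
import random
import threading
from dataclasses import dataclass

# Illustrative knobs of the kind the role description mentions.
ROLLOUT_BATCH_SIZE = 4    # completions requested per engine call
MAX_QUEUE_DEPTH = 8       # bounds how far rollouts can lag the trainer
TRAIN_BATCH_SIZE = 8      # rollouts consumed per policy update

@dataclass
class Rollout:
    prompt: str
    completion: str
    reward: float

class StubEngine:
    """Stand-in for an inference engine (e.g., an SGLang/vLLM-style server)."""
    def generate(self, prompts):
        return [p + " ... completion" for p in prompts]

def reward_fn(prompt, completion):
    return random.random()  # stand-in for a reward model

def rollout_worker(engine, out_q, stop):
    """Collect rollouts asynchronously; blocks when the queue is full."""
    while not stop.is_set():
        prompts = [f"prompt-{random.randrange(100)}" for _ in range(ROLLOUT_BATCH_SIZE)]
        for p, c in zip(prompts, engine.generate(prompts)):
            out_q.put(Rollout(p, c, reward_fn(p, c)))

def trainer(in_q, stop, steps=3):
    """Consume rollouts and run (stubbed) policy updates."""
    for step in range(steps):
        batch = [in_q.get() for _ in range(TRAIN_BATCH_SIZE)]
        mean_r = sum(r.reward for r in batch) / len(batch)
        print(f"step {step}: updated on {len(batch)} rollouts, mean reward {mean_r:.3f}")
    stop.set()

if __name__ == "__main__":
    q, stop = queue.Queue(maxsize=MAX_QUEUE_DEPTH), threading.Event()
    threading.Thread(target=rollout_worker, args=(StubEngine(), q, stop), daemon=True).start()
    trainer(q, stop)
```

In a real system, the queue depth bounds how stale rollouts can be relative to the current policy, which is itself an algorithm-level choice with systems-level consequences.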
Much of the job is modifying production inference systems (for example, SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS), grounded in a strong understanding of post-training and inference theory rather than purely theoretical algorithm design.
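ATLAS itself is Together's system; as a generic illustration of the draft-and-verify idea behind speculative decoding (not ATLAS's actual algorithm), the sketch below uses a cheap draft model to propose k tokens and a target model to verify them. This greedy-verification variant reproduces plain greedy decoding from the target model exactly; lossless sampling variants use rejection sampling instead.

```python
from typing import Callable, List

def speculative_step(
    draft_next: Callable[[List[int]], int],   # cheap draft model: greedy next token
    target_next: Callable[[List[int]], int],  # expensive target model: greedy next token
    context: List[int],
    k: int = 4,
) -> List[int]:
    """One draft-and-verify step of greedy speculative decoding.

    The draft model proposes k tokens cheaply; the target model checks them
    (in a real engine, all k checks run as a single batched forward pass).
    Matched tokens are accepted; the first mismatch is replaced with the
    target's own token, so output matches plain greedy target decoding.
    """
    # Draft phase: propose k tokens autoregressively with the cheap model.
    ctx = list(context)
    proposal = []
    for _ in range(k):
        tok = draft_next(ctx)
        proposal.append(tok)
        ctx.append(tok)

    # Verify phase: accept the longest prefix the target model agrees with.
    ctx = list(context)
    accepted: List[int] = []
    for tok in proposal:
        target_tok = target_next(ctx)
        if target_tok == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            accepted.append(target_tok)  # correction token; stop here
            break
    return accepted

# Toy models: the draft agrees with the target only on even-valued tokens.
target = lambda ctx: (len(ctx) * 7) % 50
draft = lambda ctx: target(ctx) if target(ctx) % 2 == 0 else 0
print(speculative_step(draft, target, context=[1, 2, 3]))  # -> [21]
```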
You'll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines.
People on this team are often spiky: some are more RL-first, some are more systems-first.
Depth in one of these areas, plus an appetite to collaborate across the others (and grow toward more full-stack ownership over time), is ideal.
Requirements
We don't expect anyone to check every box below.
People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack.
The closer you are to full-stack (inference + post-training/RL + systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.
You might be a good fit if you:
• Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
• Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
• RL-first profile: RL / post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models.
• Model architecture design for Transformers or other large neural nets.
• Distributed systems / high-performance computing for ML.
• Are comfortable working from algorithms to engines:
• Strong coding ability in Python.
• Experience profiling and optimizing performance across GPU, networking, and memory layers.
• Able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack (a toy sketch of this kind of change follows this list).
• Have a solid research foundation in your area(s) of depth:
• Track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems).
• Can read new RL / post-training papers, understand their implications for the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).
• Operate well as a full-stack problem solver:
• You naturally ask: "Where in the stack is this really bottlenecked?"
• You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins.
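As a toy sketch of the "new sampling method" bullet above, here is nucleus (top-p) sampling written as a standalone, dependency-free logits transform; serving engines typically integrate something of roughly this shape as a per-request logits processor. This is a minimal illustration, not any engine's actual implementation.

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=1.0):
    """Nucleus (top-p) sampling over raw logits, in plain Python.

    Softmax the logits, sort descending, keep the smallest prefix whose
    probability mass reaches p, then sample proportionally within it.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]   # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]

    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break                               # smallest sufficient nucleus

    r = random.random() * mass                  # sample within the nucleus
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

print(top_p_sample([2.0, 1.0, 0.5, -1.0], p=0.8))
```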
Minimum qualifications
• 3+ years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
• Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
• Demonstrated experience owning complex technical projects end-to-end.
If you're excited about the role and strong in some of these areas, we encourage you to apply even if you don't meet every single requirement.
Responsibilities
• Advance inference efficiency end-to-end
• Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
• Implement and maintain changes in high-performance inference engines (e.g., SGLang or vLLM-style systems and Together's inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
• Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.
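One minimal, hypothetical shape of the GPU-layer part of that profiling work, using torch.profiler (the model and sizes below are stand-ins for a real serving path):

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# A small stand-in model; in practice this would be the engine's forward
# path exercised under a real production trace.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
)
x = torch.randn(64, 1024)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    model, x = model.cuda(), x.cuda()
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True, profile_memory=True) as prof:
    with record_function("forward_batch"):
        for _ in range(10):
            model(x)

# Sort by device time to see where the latency actually goes.
sort_key = "cuda_time_total" if torch.cuda.is_available() else "cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=10))
```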
• Unify inference with RL / post-training
• Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90%+ of the cost is inference, jointly optimizing algorithms and systems.
• Make RL and post-training workloads more efficient with inference-aware training loops, for example async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
• Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
• Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
• Run ablations and scale-up experiments to understand tradeoffs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.
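Since GRPO-style objectives recur throughout this role, here is a minimal sketch of the group-relative advantage computation at their core, as a generic rendering rather than any particular production implementation: each prompt gets a group of sampled completions, and each completion's reward is normalized by the group's mean and standard deviation, so no learned value function is required. The advantage then weights a clipped, PPO-style policy-gradient term over the tokens of each completion.

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-6):
    """Group-relative advantages as used by GRPO-style objectives.

    All rewards in `group_rewards` come from completions sampled for the
    same prompt; each advantage is the reward standardized against the
    group's own statistics.
    """
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]

# One prompt, four sampled completions scored by a reward model (values illustrative).
print(grpo_advantages([0.1, 0.7, 0.4, 0.9]))
```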
• Own critical systems at production scale
• Profile, debug, and optimize inference and post-training services under real production workloads.
• Drive roadmap items that require real engine modification: changing kernels, memory layouts, scheduling logic, and APIs as needed.
• Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.
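A hypothetical starting point for such a benchmark (illustrative, not an actual internal framework): fix a request set, warm up, then report latency percentiles and throughput so before/after comparisons of an engine change are apples-to-apples.

```python
import statistics
import time

def benchmark(handler, requests, warmup=5):
    """Measure per-request latency percentiles and overall throughput.

    `handler` stands in for an inference endpoint; a real harness would
    also sweep batch sizes, concurrency, and sequence lengths.
    """
    for r in requests[:warmup]:
        handler(r)                      # warm caches, JIT, allocator

    latencies = []
    start = time.perf_counter()
    for r in requests:
        t0 = time.perf_counter()
        handler(r)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    qs = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    return {
        "p50_ms": qs[49] * 1e3,
        "p95_ms": qs[94] * 1e3,
        "p99_ms": qs[98] * 1e3,
        "throughput_rps": len(requests) / elapsed,
    }

# Toy run against a stand-in handler.
print(benchmark(lambda r: sum(range(10_000)), requests=list(range(200))))
```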
• Provide technical leadership (Staff level)
• Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
• Mentor other engineers and researchers on full-stack ML systems work and performance engineering.
About Together AI
Together AI is a research-driven artificial intelligence company.
We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models.
We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits.
The US base salary range for this full-time position is: $200,000 - $280,000 + equity + benefits.
Our salary ranges are determined by location, level, and role.
Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
