DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning
Yicheng Chen, Zerun Ma, Xinchen Xie, Yining Li, Kai Chen
2026-02-12
Summary
This paper focuses on automatically designing the data 'recipe' used to prepare training data for adapting large language models (LLMs) to a target task, instead of relying on human experts to design this process manually.
What's the problem?
Currently, getting LLMs to perform well requires large amounts of carefully selected and processed training data. The 'recipe' for preparing this data – deciding what data to use, how to clean it, and how to format it – is usually created by experts through extensive trial and error. This is slow, expensive, and requires specialized knowledge, and as LLM pipelines grow more complex, manual recipe design is becoming a major bottleneck.
What's the solution?
The researchers developed a model called DataChef-32B that uses reinforcement learning to design these data recipes automatically. Given a target task and a pool of candidate data sources, it learns to output a complete recipe for combining and processing those sources so that the resulting training data maximizes the LLM's performance on that task. During training, a proxy reward predicts how well a candidate recipe will perform downstream *before* any LLM is actually trained on it, which makes the search far cheaper. Across six held-out tasks, its recipes matched those curated by human experts, and in some cases beat them; for example, one recipe adapted Qwen3-1.7B-Base to the math domain, reaching 66.7 on AIME'25 and surpassing Qwen3-1.7B. A rough sketch of this loop is shown below.
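The following is a minimal Python sketch of the proxy-reward-guided search described above, not the paper's actual implementation: all names (generate_recipe, proxy_reward, rl_step, the source pool) and the toy scoring rule are illustrative assumptions, and a real system would use an LLM policy with a learned proxy reward and a policy-gradient update.

```python
# Minimal sketch of proxy-reward-guided recipe search (illustrative only).
import random

# Hypothetical pool of available data sources for a math-adaptation task.
DATA_SOURCES = ["math-web-corpus", "stack-exchange-math", "synthetic-solutions"]

def generate_recipe(policy, task_description, sources):
    """Sample a candidate recipe (source mixture + processing choices) from the policy.
    In DataChef the policy is an LLM emitting a full recipe; here we sample a toy one."""
    mixture = {s: round(random.random(), 2) for s in sources}
    return {
        "task": task_description,
        "mixture": mixture,
        "quality_filter_threshold": random.choice([0.5, 0.7, 0.9]),
    }

def proxy_reward(recipe):
    """Predict downstream benchmark score for a recipe WITHOUT running full training.
    Stand-in for a learned proxy reward model; the scoring rule here is arbitrary."""
    return sum(recipe["mixture"].values()) * recipe["quality_filter_threshold"]

def rl_step(policy, task_description, num_candidates=8):
    """One online RL iteration: sample recipes, score them with the proxy, keep the best.
    A real implementation would apply a policy-gradient update (e.g. PPO-style) here."""
    candidates = [generate_recipe(policy, task_description, DATA_SOURCES)
                  for _ in range(num_candidates)]
    scored = [(r, proxy_reward(r)) for r in candidates]
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    policy = None  # placeholder for the recipe-generating LLM
    best_recipe, predicted_score = rl_step(policy, "adapt a base LLM to competition math")
    print(predicted_score, best_recipe)
```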
Why it matters?
This work is important because it moves us closer to automating the entire LLM training pipeline. If good data recipes can be generated automatically, LLMs can be adapted to new tasks more easily, and AI systems could potentially keep improving themselves without constant human intervention. It is a step towards self-evolving AI systems.
Abstract
In the current landscape of Large Language Models (LLMs), the curation of large-scale, high-quality training data is a primary driver of model performance. A key lever is the data recipe, which comprises a data processing pipeline to transform raw sources into training corpora. Despite the growing use of LLMs to automate individual data processing steps, such as data synthesis and filtering, the overall design of data recipes remains largely manual and labor-intensive, requiring substantial human expertise and iteration. To bridge this gap, we formulate end-to-end data recipe generation for LLM adaptation. Given a target benchmark and a pool of available data sources, a model is required to output a complete data recipe that adapts a base LLM to the target task. We present DataChef-32B, which performs online reinforcement learning using a proxy reward that predicts downstream performance for candidate recipes. Across six held-out tasks, DataChef-32B produces practical recipes that reach comparable downstream performance to those curated by human experts. Notably, the recipe from DataChef-32B adapts Qwen3-1.7B-Base to the math domain, achieving 66.7 on AIME'25 and surpassing Qwen3-1.7B. This work sheds new light on automating LLM training and developing self-evolving AI systems.
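To make the formulation in the abstract concrete, here is a purely illustrative example of what a generated data recipe might contain for the math-adaptation case; the field names, source names, and values are assumptions for illustration, not the paper's actual recipe schema.

```python
# Hypothetical data recipe structure (illustrative, not the paper's schema).
example_recipe = {
    "target_benchmark": "AIME'25",            # target task from the abstract
    "base_model": "Qwen3-1.7B-Base",          # base LLM to adapt
    "sources": [                               # source names are made up
        {"name": "math-web-corpus", "sampling_ratio": 0.4},
        {"name": "synthetic-solutions", "sampling_ratio": 0.6},
    ],
    "processing": [                            # example pipeline steps
        {"step": "deduplicate", "method": "minhash"},
        {"step": "quality_filter", "min_score": 0.7},
        {"step": "format", "template": "chat"},
    ],
    "training": {"stage": "sft", "epochs": 2, "max_seq_len": 4096},
}
```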