TÜLU 3: Pushing Frontiers in Open Language Model Post-Training
Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi
2024-11-25

Summary
This paper introduces TÜLU 3, a family of fully open post-trained language models released together with the data, code, and recipes behind them, showing how advanced post-training techniques can make language models more capable across a wide range of tasks.
What's the problem?
Language models typically need additional training after pretraining to refine their behavior and unlock new skills. However, the strongest post-training recipes are largely proprietary, which limits researchers' ability to improve open models. In particular, the training data and methods behind post-training are rarely disclosed, making it hard to replicate or build upon existing work.
What's the solution?
TÜLU 3 provides a comprehensive solution by offering fully open models along with their training data, code, and detailed recipes for post-training. The training pipeline combines supervised fine-tuning (SFT), Direct Preference Optimization (DPO), and a novel method called Reinforcement Learning with Verifiable Rewards (RLVR), which rewards the model only when its output can be automatically checked for correctness (for example, a math answer that matches the known solution). Together, these stages teach the models new skills while preserving their overall performance. TÜLU 3 also includes a multi-task evaluation framework, with both development and unseen benchmarks, to assess how well the models perform across different tasks.
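To make the RLVR idea concrete, here is a minimal, hypothetical sketch of a verifiable reward function: instead of scoring outputs with a learned reward model, the reward is 1 only when the model's answer can be checked programmatically against a known ground truth, and 0 otherwise. The function names and the answer-extraction heuristic below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the "verifiable reward" idea behind RLVR:
# the reward is binary and comes from an automatic check against a
# known ground-truth answer, not from a learned reward model.
import re

def extract_final_answer(completion: str) -> str:
    """Pull the last number out of a completion as its final answer (toy heuristic)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else ""

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 if the extracted answer matches the ground truth, else 0.0."""
    return 1.0 if extract_final_answer(completion) == ground_truth.strip() else 0.0

# Usage: a reward like this would be plugged into a standard RL loop
# (e.g., PPO) in place of a learned reward model's score.
print(verifiable_reward("The sum of 17 and 25 is 42.", "42"))  # 1.0
print(verifiable_reward("I think the answer is 40.", "42"))    # 0.0
```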
Why it matters?
This research is important because it democratizes access to advanced language model training techniques, allowing more people and organizations to enhance AI capabilities without needing expensive resources. By providing clear guidelines and open resources, TÜLU 3 encourages innovation in the field of AI and helps improve the performance of language models across various applications.
Abstract
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce TÜLU 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. TÜLU 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With TÜLU 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the TÜLU 3 model weights and demo, we release the complete recipe -- including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the TÜLU 3 approach to more domains.
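For reference, the DPO stage mentioned above optimizes the standard preference objective, which pushes the policy's log-probability ratio for a preferred response above that of a rejected one, relative to a frozen reference model. The sketch below is a generic PyTorch rendering of that textbook loss under assumed per-sequence log-probabilities; it is not the paper's exact training code, and the tensor values and beta setting are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: -log sigmoid(beta * (policy log-ratio - reference log-ratio))."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with made-up sequence log-probabilities for a batch of two preference pairs.
policy_chosen = torch.tensor([-12.0, -9.5])
policy_rejected = torch.tensor([-14.0, -10.0])
ref_chosen = torch.tensor([-12.5, -9.8])
ref_rejected = torch.tensor([-13.5, -9.9])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```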