RegMix: Data Mixture as Regression for Language Model Pre-training
Qian Liu, Xiaosen Zheng, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, Min Lin
2024-07-02
Summary
This paper introduces RegMix, a method for improving how large language models (LLMs) are trained by automatically optimizing the mixture of data used during pre-training. The goal is to find the combination of data domains that most enhances model performance.
What's the problem?
When training LLMs, the data mixture, meaning the proportion of each data domain (such as web text, Wikipedia, or code) used during pre-training, can greatly affect how well the model performs. However, determining which mixture works best is unclear and expensive to explore exhaustively. Many existing methods rely on human judgment, which can be inefficient and inconsistent, leading to suboptimal training outcomes.
What's the solution?
To solve this problem, the authors frame data-mixture selection as a regression task. RegMix trains many small models, each on a different data mixture, and fits a regression model that predicts performance from the mixture proportions. With the fitted model, they cheaply simulate a large space of candidate mixtures, rank them by predicted performance, and use the top-ranked mixture to train a much larger model. To validate the approach, they trained 512 models with 1M parameters on 1B tokens each to fit the regression model, then used the predicted optimal mixture to train a 1B-parameter model on 25B tokens. This model performed best among 64 candidate 1B-parameter models trained with other mixtures, and the method matched or surpassed DoReMi while using only 10% of its compute budget.
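The pipeline above (sample mixtures, fit a regressor on small-run results, then rank simulated candidates) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the domain count, the "true" per-domain loss coefficients, and the run sizes are all invented, and a plain least-squares fit stands in for the LightGBM regressor RegMix actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_domains = 3        # e.g. web, wiki, code (hypothetical domains)
n_small_runs = 200   # stand-in for the small proxy training runs

# Step 1: sample diverse data mixtures (points on the probability simplex).
mixtures = rng.dirichlet(np.ones(n_domains), size=n_small_runs)

# Step 2: pretend each small run yields a validation loss that depends on
# the mixture. The ground-truth coefficients here are invented; in RegMix
# these losses come from actually training small models.
true_coef = np.array([1.0, 0.5, 2.0])
losses = mixtures @ true_coef + rng.normal(0, 0.01, n_small_runs)

# Step 3: fit a regression model predicting loss from mixture weights
# (least squares here; the paper uses a LightGBM regressor).
fitted_coef, *_ = np.linalg.lstsq(mixtures, losses, rcond=None)

# Step 4: simulate many candidate mixtures cheaply, rank them by predicted
# loss, and take the top-ranked mixture for the large-scale training run.
candidates = rng.dirichlet(np.ones(n_domains), size=10_000)
predicted = candidates @ fitted_coef
best_mixture = candidates[predicted.argmin()]

print("fitted coefficients:", fitted_coef.round(2))
print("best mixture:", best_mixture.round(3))
```

Because the simulation step only evaluates the regressor, ranking thousands of candidate mixtures costs almost nothing compared to training even one extra small model, which is where the compute savings come from.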
Why it matters?
This research is important because it provides a more efficient way to determine the best data mixtures for training LLMs. By automating this process, RegMix can lead to better-performing models without needing excessive computational resources or relying solely on human selection. This advancement could significantly improve various applications of AI, making them more effective and accessible.
Abstract
The data mixture for large language model pre-training significantly impacts performance, yet how to determine an effective mixture remains unclear. We propose RegMix to automatically identify a high-performing data mixture by formulating it as a regression task. RegMix involves training a set of small models with diverse data mixtures and fitting a regression model to predict their performance given their respective mixtures. With the fitted regression model, we simulate the top-ranked mixture and use it to train a large-scale model with orders of magnitude more compute. To empirically validate RegMix, we train 512 models with 1M parameters for 1B tokens of different mixtures to fit the regression model and find the optimal mixture. Using this mixture we train a 1B parameter model for 25B tokens (i.e. 1000x larger and 25x longer) which we find performs best among 64 candidate 1B parameter models with other mixtures. Further, our method demonstrates superior performance compared to human selection and achieves results that match or surpass DoReMi, while utilizing only 10% of the compute budget. Our experiments also show that (1) Data mixtures significantly impact performance with single-task performance variations of up to 14.6%; (2) Web corpora rather than data perceived as high-quality like Wikipedia have the strongest positive correlation with downstream performance; (3) Domains interact in complex ways often contradicting common sense, thus automatic approaches like RegMix are needed; (4) Data mixture effects transcend scaling laws, and our approach captures the complexity by considering all domains together. Our code is available at https://github.com/sail-sg/regmix.