DataComp-LM: In search of the next generation of training sets for language models
Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas
2024-06-18

Summary
This paper introduces DataComp for Language Models (DCLM), a benchmark and testbed for controlled dataset experiments aimed at improving language models by focusing on the quality and composition of the data they are trained on. It provides a large standardized pool of text data, along with training recipes and evaluations, so researchers can build and compare better training sets.
What's the problem?
Datasets used to train language models are often enormous, but size alone does not guarantee the diversity or quality needed for models to handle language well. Models trained on poorly curated data can underperform on real-world tasks. In addition, the curation choices behind strong training datasets are rarely documented, making it hard for researchers to know which decisions actually improve their models.
What's the solution?
To address these issues, the authors created DCLM, which includes a standardized corpus of 240 trillion tokens (pieces of text) extracted from Common Crawl, along with effective pretraining recipes and a suite of 53 downstream evaluations. Researchers can experiment with different curation strategies, such as removing duplicates, filtering out low-quality documents, and mixing data sources, at model scales from 412M to 7B parameters. The study found that model-based filtering, using a trained classifier to score and select documents, is key to creating high-quality training sets. The resulting baseline dataset, DCLM-Baseline, supports training a 7B-parameter language model to 64% 5-shot accuracy on MMLU, a substantial improvement over previous open-data models.
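To make model-based filtering concrete, here is a minimal sketch of scoring documents with a pretrained quality classifier and keeping only those above a threshold. The paper uses a fastText classifier for this step, but the model file, label name, and threshold below are illustrative placeholders, not the authors' actual configuration.

```python
# Illustrative sketch of model-based quality filtering (not the exact DCLM pipeline).
# Assumes a binary fastText classifier saved as "quality_classifier.bin" whose
# positive label is "__label__hq"; the path, label, and threshold are hypothetical.
import fasttext

model = fasttext.load_model("quality_classifier.bin")

def quality_score(text: str) -> float:
    """Return the classifier's probability that a document is high quality."""
    # fastText predicts on a single line, so collapse newlines first.
    labels, probs = model.predict(text.replace("\n", " "), k=2)
    return dict(zip(labels, probs)).get("__label__hq", 0.0)

def filter_documents(docs, threshold=0.9):
    """Keep only documents whose quality score clears the threshold."""
    return [d for d in docs if quality_score(d) >= threshold]

if __name__ == "__main__":
    corpus = [
        "A clear explanation of how transformers process token sequences.",
        "click here buy now best deals!!! free free free",
    ]
    print(filter_documents(corpus))
```

In practice, such filters are often applied by ranking documents by score and keeping only a top fraction of the pool, rather than using a fixed probability cutoff as in this sketch.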
Why it matters?
This research is important because it demonstrates how much dataset design affects the quality of language models. By providing tools, baselines, and guidelines for creating high-quality datasets, DCLM aims to improve how well language models understand and generate human language, which in turn could lead to more accurate AI applications in areas like translation, chatbots, and content generation.
Abstract
We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline, enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
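As a rough sanity check on the compute comparison, the sketch below reproduces the 6.6x figure using the common C ≈ 6ND FLOPs approximation. The ~15T-token training budget for Llama 3 8B is a publicly reported figure rather than something stated in this abstract, so the result should be read as an estimate.

```python
# Back-of-the-envelope check of the "6.6x less compute" claim using the common
# C ≈ 6 * N (parameters) * D (tokens) approximation for pretraining FLOPs.
# The ~15T-token figure for Llama 3 8B is publicly reported, not from this paper.
def train_flops(params: float, tokens: float) -> float:
    """Approximate pretraining compute as 6 * parameters * tokens."""
    return 6 * params * tokens

dclm_flops = train_flops(7e9, 2.6e12)    # DCLM-Baseline 7B model, 2.6T tokens
llama3_flops = train_flops(8e9, 15e12)   # Llama 3 8B, ~15T tokens (reported)

print(f"DCLM-Baseline 7B: {dclm_flops:.2e} FLOPs")
print(f"Llama 3 8B:       {llama3_flops:.2e} FLOPs")
print(f"Ratio: {llama3_flops / dclm_flops:.1f}x")  # ≈ 6.6x
```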