daVinci-LLM: Towards the Science of Pretraining
Yiwei Qin, Yixiu Liu, Tiantian Mi, Muhang Xie, Zhen Huang, Weiye Si, Pengrui Lu, Siyuan Feng, Xia Wu, Liming Liu, Ye Luo, Jinlong Hou, Qipeng Guo, Yu Qiao, Pengfei Liu
2026-04-01
Summary
This research paper investigates the initial 'pretraining' phase of large language models, which is crucial to their overall performance but has not been studied thoroughly. The authors built a large language model and openly shared it along with all the details of how it was created, to help other researchers understand and improve this process.
What's the problem?
Currently, there's a gap in our understanding of how to best 'pretrain' these models. Companies with the resources to do large-scale pretraining often keep their methods secret for competitive reasons, while academic researchers lack the necessary computing power. This makes it hard to systematically study what works and what doesn't during this foundational stage. The field also lacks a standard, organized way to handle and prepare the massive amounts of data needed for pretraining.
What's the solution?
The researchers created daVinci-LLM, a 3 billion parameter language model, and made *everything* about its creation public – the data they used, how they processed it, the training process itself, and the results of many experiments. They used a framework called 'Data Darwinism' to systematically improve the data quality in stages, and they trained the model in two phases, first on basic skills and then on more complex reasoning. They also ran over 200 different tests, changing various aspects of the process to see how they affected performance.
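The two-stage curriculum described above can be pictured as a data-mixing schedule that shifts sampling proportions partway through training. The sketch below is purely illustrative: the domain names, proportions, and stage boundary are hypothetical assumptions, not values reported by the paper.

```python
# Hypothetical sketch of a two-stage curriculum mixture schedule.
# Domain names, weights, and the stage boundary are illustrative only.

def mixture_weights(tokens_seen: float, total_tokens: float,
                    stage_boundary: float = 0.75) -> dict:
    """Return per-domain sampling weights at the current point in training.

    Stage 1 (before `stage_boundary` of the token budget): a mix weighted
    toward broad, foundational data. Stage 2: shift toward
    reasoning-intensive data while retaining other domains.
    """
    progress = tokens_seen / total_tokens
    if progress < stage_boundary:
        # Stage 1: emphasize broad web text and foundational skills.
        return {"web": 0.6, "code": 0.2, "math": 0.1, "reasoning": 0.1}
    # Stage 2: intensify reasoning and math, but keep some general data
    # to preserve compositional balance and avoid performance collapse.
    return {"web": 0.3, "code": 0.2, "math": 0.2, "reasoning": 0.3}

# Example: weights at 50% and 90% of an 8T-token budget.
early = mixture_weights(4e12, 8e12)   # still in stage 1
late = mixture_weights(7.2e12, 8e12)  # past the boundary, stage 2
```

In practice such a schedule would drive a weighted sampler over the domain datasets; the key design choice the paper's findings suggest is that the shift should be adaptive per domain (proportion changes for some, format changes for others), not a single global switch.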
Why does it matter?
This work is important because it provides a fully transparent and reproducible example of large language model pretraining. By sharing all their data and methods, the researchers are allowing the entire AI community to learn from their findings and build upon their work, ultimately leading to better and more capable language models. It also highlights the importance of carefully processing data and adapting training strategies based on the type of data being used.
Abstract
The foundational pretraining phase determines a model's capability ceiling, as post-training struggles to overcome capability foundations established during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale computational resources. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully-open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy spanning filtering through synthesis. We train a 3B-parameter model from random initialization across 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that: processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; compositional balance enables targeted intensification while preventing performance collapse; and evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to form accumulative scientific knowledge in pretraining.