Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training
Ruizhe Wang, Yucheng Ding, Xiao Liu, Yaoxiang Wang, Peng Cheng, Baining Guo, Zhengjun Zha, Yeyun Gong
2025-10-10
Summary
This paper explores a way to make training very large language models cheaper and more effective by building on work that has already been done, instead of always starting from scratch.
What's the problem?
Training these massive language models requires huge amounts of computing power and money. A lot of compute has already been spent producing well-trained checkpoints – these are 'sunk costs' – but those checkpoints often go underused because of engineering constraints or because we want bigger models than they can support. It's wasteful to ignore all that previous effort.
What's the solution?
The researchers came up with a method called 'checkpoint recycling'. Basically, they take a pre-trained Mixture-of-Experts model and expand it, either by copying existing layers and placing each copy right next to its original (making it deeper) or by duplicating its experts and adding a little noise to the copies (making it wider); a rough sketch of the depth-growth idea follows below. They also worked out the best time to do this expansion during training, and found that the more prior training they built on, the better the final model performed. They tested this on models with 70 billion parameters, trained on over a trillion tokens of text.
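As a rough illustration of the depth-growth idea, here is a minimal sketch that copies each transformer block and interposes the copy directly after its source. The function name `grow_depth`, the use of a plain `nn.ModuleList` of blocks, and the `growth_factor` parameter are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch.nn as nn

def grow_depth(layers: nn.ModuleList, growth_factor: int = 2) -> nn.ModuleList:
    """Interpositional layer copying: each copy sits right after the layer it was
    cloned from, rather than being appended to the end of the stack."""
    grown = []
    for layer in layers:
        grown.append(layer)                      # keep the original block
        for _ in range(growth_factor - 1):
            grown.append(copy.deepcopy(layer))   # adjacent copy inherits its weights
    return nn.ModuleList(grown)

# Example: a 4-block stack grows to 8 blocks, each new block next to its source.
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True) for _ in range(4)
)
blocks = grow_depth(blocks)
assert len(blocks) == 8
```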
Why it matters?
This research is important because it offers a way to significantly reduce the cost of training large language models. By reusing previous work, the authors achieved a 10.66% accuracy gain compared to training from scratch under the same additional compute budget. This makes it more feasible to create even more powerful AI systems without breaking the bank.
Abstract
The rapidly increasing computational cost of pretraining Large Language Models necessitates more efficient approaches. Substantial computational cost has already been invested in existing well-trained checkpoints, but many of them remain underutilized due to engineering constraints or limited model capacity. To efficiently reuse this "sunk" cost, we propose to recycle pretrained checkpoints by expanding their parameter counts and continuing training. We propose an orthogonal growth method well-suited for converged Mixture-of-Experts models: interpositional layer copying for depth growth and expert duplication with injected noise for width growth. To determine the optimal timing for such growth across checkpoint sequences, we perform comprehensive scaling experiments, which reveal that final accuracy has a strong positive correlation with the amount of sunk cost, indicating that greater prior investment leads to better performance. We scale our approach to models with 70B parameters and over 1T training tokens, achieving a 10.66% accuracy gain over training from scratch under the same additional compute budget. Our checkpoint recycling approach establishes a foundation for economically efficient large language model pretraining.
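To complement the depth-growth sketch above, here is a minimal illustration of the width-growth step described in the abstract: every expert in a mixture-of-experts layer is duplicated, the copy's weights are perturbed with small Gaussian noise so the two copies can diverge during continued training, and the router is widened to match. The `MoELayer` structure, the `grow_width` name, and the noise scale are assumptions made for illustration, not the authors' exact recipe.

```python
import copy
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """A toy mixture-of-experts layer: a router plus a list of feed-forward experts
    (top-k gating and the forward pass are omitted for brevity)."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(d_model, num_experts)

def grow_width(moe: MoELayer, noise_std: float = 1e-3) -> None:
    """Duplicate every expert with injected noise and widen the router to match."""
    new_experts = []
    for expert in moe.experts:
        clone = copy.deepcopy(expert)
        with torch.no_grad():
            for p in clone.parameters():
                p.add_(noise_std * torch.randn_like(p))  # small noise breaks symmetry
        new_experts.extend([expert, clone])
    moe.experts = nn.ModuleList(new_experts)
    # The router needs one logit per expert; each new row reuses the source expert's row.
    old_w, old_b = moe.router.weight.data, moe.router.bias.data
    moe.router = nn.Linear(old_w.shape[1], len(new_experts))
    with torch.no_grad():
        moe.router.weight.copy_(old_w.repeat_interleave(2, dim=0))
        moe.router.bias.copy_(old_b.repeat_interleave(2, dim=0))

# Example: an 8-expert layer grows to 16 experts.
layer = MoELayer(d_model=64, d_ff=256, num_experts=8)
grow_width(layer)
assert len(layer.experts) == 16 and layer.router.out_features == 16
```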