Expert Upcycling: Shifting the Compute-Efficient Frontier of Mixture-of-Experts
Chaitanya Dwivedi, Binxuan Huang, Himanshu Gupta, Pratik Jayarao, Neeraj Varshney, Bing Yin
2026-04-23
Summary
This paper introduces a technique called 'expert upcycling' that makes training very large language models, specifically those built with a 'mixture-of-experts' design, more efficient and less computationally expensive.
What's the problem?
Large language models are getting bigger and better, but scaling them up is really hard. The 'mixture-of-experts' design helps by only using parts of the model for each task, but even then, training these models requires a lot of memory and communication between computers, making it slow and costly. Simply adding more experts to improve performance increases these costs significantly.
What's the solution?
Expert upcycling tackles this by starting with an already-trained model and then *expanding* its capacity: it duplicates existing experts and extends the router that decides where each piece of information goes. Think of it like growing a team of specialists, but instead of hiring brand-new workers, you make copies of your existing ones and let each copy learn to focus on a different area. Because the copies inherit what the original model already knows, the expanded model starts from a much better position and learns faster than if it were trained from scratch. The authors also developed a smart way to choose *which* experts to duplicate, prioritizing the ones that are most useful, and showed that this saves a significant amount of computing time while maintaining performance.
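To make the mechanism concrete, here is a minimal sketch of what duplicating experts and extending the router for a single MoE layer might look like in PyTorch. The function name, the plain `nn.Linear` router, and the duplication order are illustrative assumptions rather than the paper's actual implementation; top-K routing is simply left unchanged, so per-token compute stays the same.

```python
import copy
import torch
import torch.nn as nn

def upcycle_moe_layer(experts: nn.ModuleList, router: nn.Linear, m: int = 2):
    """Expand an E-expert MoE layer to m*E experts by duplication (sketch).

    `experts` holds the expert feed-forward networks and `router` is a
    linear gate producing one logit per expert. Top-K routing is assumed
    to stay fixed, preserving per-token inference cost.
    """
    E = len(experts)
    # Duplicate every expert m times: a warm initialization inherited from
    # the trained checkpoint. Continued pre-training later breaks the
    # symmetry among the copies so they can specialize.
    new_experts = nn.ModuleList(
        [copy.deepcopy(experts[i % E]) for i in range(m * E)]
    )

    # Extend the router: each duplicate reuses its source expert's row, so
    # the expanded model initially routes like the source model (up to how
    # ties among identical logits are broken; a small perturbation is one
    # common way to break such ties, not taken from the paper).
    new_router = nn.Linear(router.in_features, m * E,
                           bias=router.bias is not None)
    with torch.no_grad():
        for j in range(m * E):
            new_router.weight[j].copy_(router.weight[j % E])
            if router.bias is not None:
                new_router.bias[j].copy_(router.bias[j % E])
    return new_experts, new_router
```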
Why it matters?
This research is important because it offers a practical way to build even larger and more powerful language models without needing a massive increase in computing resources. It’s a more efficient alternative to training everything from the beginning, potentially making advanced AI more accessible and accelerating progress in the field. It provides a recipe for how to effectively scale up these models, which is crucial for future development.
Abstract
Mixture-of-Experts (MoE) has become the dominant architecture for scaling large language models: frontier models routinely decouple total parameters from per-token computation through sparse expert routing. Scaling laws show that under fixed active computation, model quality scales predictably with total parameters, and MoEs realize this by increasing expert count. However, training large MoEs is expensive, as memory requirements and inter-device communication both scale with total parameter count. We propose expert upcycling, a method for progressively expanding MoE capacity by increasing the number of experts during continued pre-training (CPT). Given a trained E-expert model, the upcycling operator constructs an mE-expert model through expert duplication and router extension while holding top-K routing fixed, preserving per-token inference cost. Duplication provides a warm initialization: the expanded model inherits the source checkpoint's learned representations, starting from a substantially lower loss than random initialization. Subsequent CPT then breaks the symmetry among duplicated experts to drive specialization. We formalize the upcycling operator and develop a theoretical framework decomposing the quality gap into a capacity term and an initialization term. We further introduce utility-based expert selection, which uses gradient-based importance scores to guide non-uniform duplication, more than tripling gap closure when CPT is limited. In our 7B-13B total parameter experiments, the upcycled model matches the fixed-size baseline on validation loss while saving 32% of GPU hours. Comprehensive ablations across model scales, activation ratios, MoE architectures, and training budgets yield a practical recipe for deploying expert upcycling, establishing it as a principled, compute-efficient alternative to training large MoE models from scratch.
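As a rough illustration of the utility-based expert selection described above, the sketch below scores each expert with a simple first-order saliency proxy (sum of |parameter × gradient| over the expert's weights after a backward pass on a small calibration batch) and spends the new expert slots on the highest-scoring experts. The scoring formula and function names are assumptions for illustration, not the paper's exact utility definition.

```python
import torch
import torch.nn as nn

def expert_utility_scores(experts: nn.ModuleList):
    """Illustrative gradient-based importance score per expert.

    Assumes `loss.backward()` has already been called on a small
    calibration batch so that parameter gradients are populated. Uses a
    first-order saliency proxy, sum of |param * grad| per expert, as a
    stand-in for the paper's utility metric.
    """
    scores = []
    for expert in experts:
        s = 0.0
        for p in expert.parameters():
            if p.grad is not None:
                s += (p.detach() * p.grad).abs().sum().item()
        scores.append(s)
    return scores

def pick_experts_to_duplicate(scores, num_new_experts: int):
    """Non-uniform duplication: allocate the budget of new expert slots to
    the highest-utility experts instead of copying every expert equally."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    # Cycle over the top-scoring experts until the budget is exhausted.
    return [order[i % len(order)] for i in range(num_new_experts)]
```

The returned indices would then feed a non-uniform version of the duplication step, copying high-utility experts more often than low-utility ones.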