Stabilizing MoE Reinforcement Learning by Aligning Training and Inference Routers
Wenhan Ma, Hailin Zhang, Liang Zhao, Yifan Song, Yudong Wang, Zhifang Sui, Fuli Luo
2025-10-27
Summary
This paper focuses on a problem that arises when trying to improve large language models using a technique called reinforcement learning, specifically when those models are built using a 'Mixture-of-Experts' structure.
What's the problem?
Large language models are becoming incredibly powerful, and reinforcement learning is a way to make them even better. However, 'Mixture-of-Experts' models, which route different inputs to different specialized sub-networks ('experts'), often become unstable during reinforcement learning training and can even collapse completely. The core issue is a mismatch: the way the model decides *which* experts to use during training doesn't match how it decides during actual use (inference), and even repeated forward passes under identical conditions can select different experts, creating inconsistency.
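To see why this mismatch matters, here is a minimal illustrative sketch (hypothetical numbers, not from the paper): a tiny numerical difference between the training and inference kernels can flip which expert a top-1 router picks for a token, so the two phases end up running different experts on the same input.

```python
# Illustrative only: toy router logits, not real model outputs.

def top1(router_logits):
    """Index of the highest-scoring expert."""
    return max(range(len(router_logits)), key=router_logits.__getitem__)

# Hypothetical logits for one token over four experts. The training and
# inference kernels disagree by ~1e-6, which is enough to flip the winner.
logits_train = [0.500000, 0.499999, 0.3, 0.1]  # training-kernel output
logits_infer = [0.499999, 0.500000, 0.3, 0.1]  # inference-kernel output

print(top1(logits_train))  # -> 0
print(top1(logits_infer))  # -> 1
```

The token is processed by expert 0 during training but expert 1 during inference, even though the underlying logits are numerically almost identical.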
What's the solution?
The researchers developed a method called 'Rollout Routing Replay', or R3. During inference (when the model is actually generating text), R3 records which experts the model's router selects. Then, during training, it forces the model to replay those same routing decisions instead of recomputing them. This keeps training consistent with how the model is actually used, reducing instability and preventing the training from collapsing.
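The record-and-replay idea can be sketched in a few lines of pure Python. This is a toy illustration under stated assumptions, not the paper's implementation: the `route` function, the top-2 setting, and the logits are all made up, and the real R3 records routing information inside the inference engine rather than in a helper like this.

```python
import math
import random

def route(logits, k=2, replayed=None):
    """Pick top-k experts by logit, or replay expert indices recorded earlier."""
    if replayed is not None:
        experts = list(replayed)  # reuse the inference-time choices verbatim
    else:
        order = sorted(range(len(logits)), key=logits.__getitem__, reverse=True)
        experts = order[:k]       # fresh top-k selection
    exps = [math.exp(logits[e]) for e in experts]
    total = sum(exps)
    return experts, [w / total for w in exps]  # softmax over chosen experts

# Rollout (inference): record which experts the router chose for a token.
infer_logits = [0.9, 0.1, 0.8, 0.2]
recorded, _ = route(infer_logits)  # experts [0, 2]

# Training: replay the recorded choices instead of re-selecting, so the
# gradient step uses exactly the experts the rollout actually used, even
# if the training kernel's logits differ slightly.
random.seed(0)
train_logits = [x + 1e-6 * random.gauss(0, 1) for x in infer_logits]
experts, weights = route(train_logits, replayed=recorded)
print(experts)  # -> [0, 2]
```

Because the replayed indices come from the rollout, the training forward pass activates the same experts the inference engine did, which is the consistency property R3 is after.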
Why does it matter?
This work is important because it provides a way to reliably use reinforcement learning to improve 'Mixture-of-Experts' models. These models are at the forefront of AI development, and stabilizing their training opens the door to even more capable language models. In the authors' experiments, R3 outperforms existing stabilization methods such as GSPO and TIS, offering a new and effective solution.
Abstract
Reinforcement learning (RL) has emerged as a crucial approach for enhancing the capabilities of large language models. However, in Mixture-of-Experts (MoE) models, the routing mechanism often introduces instability, even leading to catastrophic RL training collapse. We analyze the training-inference consistency of MoE models and identify a notable discrepancy in routing behaviors between the two phases. Moreover, even under identical conditions, the routing framework can yield divergent expert selections across repeated forward passes. To address this foundational inconsistency, we propose Rollout Routing Replay (R3), a method that records routing distributions from the inference engine and replays them during training. R3 significantly reduces training-inference policy KL divergence and mitigates extreme discrepancies without compromising training speed. Extensive experiments on various settings confirm that R3 succeeds in stabilizing RL training, preventing collapse and outperforming methods such as GSPO and TIS. We believe this work can offer a new solution for stabilizing RL in MoE models.