
Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Experts Models

Guinan Su, Yanwu Yang, Li Shen, Lu Yin, Shiwei Liu, Jonas Geiping

2025-10-20


Summary

This paper introduces a new way to improve a type of large language model called Mixture-of-Experts (MoE) when it is used in the real world, specifically when the type of information it is processing changes. It focuses on making the model better at choosing which parts of itself to use for different tasks, without needing extra training data.

What's the problem?

MoE models are really good at getting bigger and more powerful, but they can struggle when the kind of data they're working with is different from what they were originally trained on. Imagine a model trained mostly on news articles suddenly being asked to write poetry – it might not perform well. Existing methods to fix this usually require extra, labeled data, which isn't always available or practical for these large models.

What's the solution?

The researchers came up with a method that adapts the MoE model *while* it's generating text, without needing any new data. It works by constantly tweaking how the model decides which 'expert' to use, based on the text it has already created. Think of it like the model learning from its own output to improve its choices. They do this by making small adjustments to the model's internal settings, focusing on the parts that control which experts are activated, and they only update these settings periodically to avoid changing the model too much at once.
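The core mechanism can be sketched in a few lines: a frozen router picks the top-k experts from its logits, a small additive vector shifts those logits, and that vector alone is periodically updated to reduce a self-supervised loss on the text generated so far. The sketch below is a minimal illustration under assumed names (`W_router`, `delta`, `proxy_loss` are all hypothetical), using a finite-difference update in place of the paper's actual optimization, so it shows the shape of the idea rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, E, K = 8, 4, 2            # hidden size, number of experts, top-k

W_router = rng.normal(size=(D, E))  # frozen pretrained router weights
delta = np.zeros(E)                 # lightweight additive vector: the ONLY adapted parameters

def route(h):
    """Select top-k experts from router logits shifted by the additive vector."""
    logits = h @ W_router + delta
    topk = np.argsort(logits)[-K:]
    # softmax over the selected experts' logits to get mixing weights
    w = np.exp(logits[topk] - logits[topk].max())
    return topk, w / w.sum()

def adapt(context, proxy_loss, lr=0.1, eps=1e-3):
    """Periodic update phase: nudge `delta` to lower a self-supervised loss
    computed on the already-generated context (finite-difference sketch)."""
    global delta
    base = proxy_loss(context, delta)
    grad = np.zeros(E)
    for e in range(E):
        d = delta.copy()
        d[e] += eps
        grad[e] = (proxy_loss(context, d) - base) / eps
    delta -= lr * grad   # small step, then generate as normal until the next update
```

During generation the model alternates between `route` (every token) and `adapt` (only at regular intervals), which matches the paper's point about keeping the adaptation cheap and preventing the router from drifting too far at once.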

Why does it matter?

This is important because it makes MoE models more reliable and adaptable in real-world situations. It means these powerful models can handle a wider range of tasks and changing information without needing constant retraining. The method is also easy to add to existing techniques for improving model performance, leading to even better results on challenging problems like coding and reasoning.

Abstract

Mixture-of-Experts (MoE) models achieve efficient scaling through sparse expert activation, but often suffer from suboptimal routing decisions due to distribution shifts in deployment. While existing test-time adaptation methods could potentially address these issues, they primarily focus on dense models and require access to external data, limiting their practical applicability to MoE architectures. However, we find that, instead of relying on reference data, we can optimize MoE expert selection on-the-fly based only on input context. As such, we propose a data-free, online test-time framework that continuously adapts MoE routing decisions during text generation without external supervision or data. Our method cycles between two phases: During the prefill stage, and later at regular intervals, we optimize the routing decisions of the model using self-supervision based on the already generated sequence. Then, we generate text as normal, maintaining the modified router until the next adaptation. We implement this through lightweight additive vectors that only update router logits in selected layers, maintaining computational efficiency while preventing over-adaptation. The experimental results show consistent performance gains on challenging reasoning tasks while maintaining robustness to context shifts. For example, our method achieves a 5.5% improvement on HumanEval with OLMoE. Furthermore, owing to its plug-and-play property, our method naturally complements existing test-time scaling techniques, e.g., achieving 6% average gains when incorporated with self-consistency on DeepSeek-V2-Lite.