A Closer Look into Mixture-of-Experts in Large Language Models
Ka Man Lo, Zeyu Huang, Zihan Qiu, Zili Wang, Jie Fu
2024-06-27

Summary
This paper takes a closer look at the Mixture-of-Experts (MoE) architecture in large language models (LLMs), examining how these models behave internally when they selectively activate only part of the network for each input token.
What's the problem?
As language models become larger and more complex, it is important to find ways to make them efficient without losing performance. Dense models use all of their parameters for every input token, which is computationally wasteful. The MoE approach addresses this by activating only a small subset of parameters, grouped into 'experts', for each token, but exactly how this mechanism behaves inside trained models is not yet well understood.
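To make the idea concrete, here is a minimal, illustrative sketch of a sparse MoE layer with top-k routing in PyTorch. The class name, expert count, and layer sizes are arbitrary assumptions for the example and do not correspond to any of the models studied in the paper.

```python
# Minimal illustrative sketch of a sparse Mixture-of-Experts layer with
# top-k routing. All hyperparameters here are arbitrary choices for the
# example, not taken from the models studied in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                          # x: (num_tokens, d_model)
        logits = self.router(x)                    # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # renormalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the top-k selected experts are evaluated per token (sparse activation).
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

For instance, `SimpleMoELayer()(torch.randn(16, 512))` processes 16 token vectors while evaluating only 2 of the 8 experts for each one, which is how MoE grows total parameter count without a proportional increase in per-token compute.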
What's the solution?
The authors investigate how MoE-based models operate by studying three recent models that use this architecture. They find that individual neurons within the experts act like even finer-grained experts. They also observe that the routing mechanism, which decides which experts to activate for each input, tends to select the experts whose outputs have larger norms. Additionally, expert diversity increases in deeper layers of the model, with the last layer being a notable outlier. Based on these observations, the paper offers suggestions for how routers should be designed and how experts should be allocated in future MoE implementations.
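As a hedged illustration of how one might probe the norm-based routing observation, the sketch below runs every expert densely on a batch of tokens and checks how often the router's top-1 choice is also the expert producing the largest output norm. It reuses the hypothetical `SimpleMoELayer` from the earlier sketch and is not the paper's actual measurement procedure.

```python
# Illustrative diagnostic (not the paper's exact procedure): how often does the
# router's top-1 pick coincide with the expert that produces the largest-norm
# output for that token? Reuses the hypothetical SimpleMoELayer defined above.
import torch

@torch.no_grad()
def router_vs_norm_agreement(moe_layer, x):
    logits = moe_layer.router(x)                  # (num_tokens, num_experts)
    chosen = logits.argmax(dim=-1)                # router's top-1 expert per token
    # Dense pass purely for analysis: run every expert on every token.
    norms = torch.stack(
        [expert(x).norm(dim=-1) for expert in moe_layer.experts], dim=-1
    )                                             # (num_tokens, num_experts)
    largest = norms.argmax(dim=-1)                # expert with the largest output norm
    # Fraction of tokens where the routed expert also has the largest norm.
    return (chosen == largest).float().mean().item()
```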
Why it matters?
This research is important because it helps clarify how Mixture-of-Experts works in large language models, which can lead to better designs and more efficient AI systems. By understanding these mechanisms, researchers can improve model performance while keeping computational costs low, making advanced AI technologies more accessible and effective across various applications.
Abstract
Mixture-of-experts (MoE) is gaining increasing attention due to its unique properties and remarkable performance, especially for language tasks. By sparsely activating a subset of parameters for each token, the MoE architecture can increase model size without sacrificing computational efficiency, achieving a better trade-off between performance and training cost. However, the underlying mechanisms of MoE remain underexplored, and its degree of modularization is an open question. In this paper, we make an initial attempt to understand the inner workings of MoE-based large language models. Concretely, we comprehensively study the parametric and behavioral features of three recent MoE-based models and reveal some intriguing observations, including: (1) neurons act like fine-grained experts; (2) the router of MoE usually selects experts with larger output norms; (3) expert diversity increases with layer depth, while the last layer is an outlier. Based on these observations, we also provide suggestions for a broad spectrum of MoE practitioners, covering router design and expert allocation. We hope this work sheds light on future research on the MoE framework and other modular architectures. Code is available at https://github.com/kamanphoebe/Look-into-MoEs.
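For the third observation, about expert diversity across layers, one simple proxy (an assumption for illustration, not necessarily the paper's methodology) is the average pairwise cosine similarity between the experts' flattened weights within a layer, where lower similarity suggests more diverse experts. A sketch, again using the hypothetical `SimpleMoELayer`:

```python
# Illustrative proxy for expert diversity within one MoE layer: average pairwise
# cosine similarity of the experts' flattened parameters (lower = more diverse).
# This metric and the SimpleMoELayer it operates on are assumptions for the
# example, not the paper's exact methodology.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_expert_similarity(moe_layer):
    flat = torch.stack([
        torch.cat([p.flatten() for p in expert.parameters()])
        for expert in moe_layer.experts
    ])                                            # (num_experts, num_params)
    sims = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)
    off_diag = sims[~torch.eye(sims.size(0), dtype=torch.bool)]
    return off_diag.mean().item()
```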