MoBA: Mixture of Block Attention for Long-Context LLMs
Enzhe Lu, Zhejun Jiang, Jingyuan Liu, Yulun Du, Tao Jiang, Chao Hong, Shaowei Liu, Weiran He, Enming Yuan, Yuzhi Wang, Zhiqi Huang, Huan Yuan, Suting Xu, Xinran Xu, Guokun Lai, Yanru Chen, Huabin Zheng, Junjie Yan, Jianlin Su, Yuxin Wu, Neo Y. Zhang, Zhilin Yang
2025-02-24
Summary
This paper introduces MoBA (Mixture of Block Attention), a new way to help AI language models handle very long texts more efficiently without losing their ability to understand and process information accurately.
What's the problem?
As AI language models get bigger and smarter, they need to work with longer pieces of text. But the standard way they process this information (called attention) compares every word with every other word, so it becomes extremely slow and uses far too much computing power on very long texts. Existing workarounds either force the AI to look only at fixed parts of the text or change how it processes information so drastically that it can become less effective at complex reasoning tasks.
What's the solution?
The researchers created MoBA, which divides long texts into smaller blocks and teaches the AI to focus only on the most important blocks for each part of its task. This new method allows the AI to switch between looking at all the text (full attention) and only the important parts (sparse attention) as needed. MoBA follows a 'less structure' approach, which means it lets the AI figure out what's important on its own instead of being told where to focus.
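The block-selection idea above can be sketched in a few lines. This is a simplified, single-query illustration under my own assumptions (mean-pooled block keys as the gate, no causal masking or batching); the function name, shapes, and hyperparameters are illustrative, not the paper's actual implementation.

```python
import torch

def moba_attention(q, k, v, block_size=4, top_k=2):
    """Toy sketch of MoBA-style block attention for one query vector.

    Keys/values are split into blocks; a gate scores each block by the
    dot product between the query and that block's mean-pooled key, and
    attention is computed only over the top-k scoring blocks.
    """
    # q: (d,), k and v: (n, d); n is assumed divisible by block_size.
    n, d = k.shape
    num_blocks = n // block_size
    k_blocks = k.view(num_blocks, block_size, d)
    v_blocks = v.view(num_blocks, block_size, d)

    # Gate: score each block by q . mean(keys in block), keep the top-k.
    gate_scores = k_blocks.mean(dim=1) @ q           # (num_blocks,)
    chosen = torch.topk(gate_scores, top_k).indices  # selected block indices

    # Standard softmax attention, restricted to the selected blocks.
    k_sel = k_blocks[chosen].reshape(-1, d)          # (top_k * block_size, d)
    v_sel = v_blocks[chosen].reshape(-1, d)
    weights = torch.softmax(k_sel @ q / d**0.5, dim=0)
    return weights @ v_sel                           # (d,)
```

Note that setting `top_k` equal to the number of blocks recovers ordinary full attention (softmax is unaffected by reordering the selected keys and values together), which mirrors the paper's claim that the model can transition seamlessly between sparse and full attention.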
Why it matters?
This matters because it helps AI language models work with much longer texts without slowing down or losing accuracy. This is a big step towards creating AI that can handle more complex, human-like tasks that require understanding and processing large amounts of information. MoBA has already been used in real-world applications, showing it can make a practical difference in how AI systems work with long texts.
Abstract
Scaling the effective context length is essential for advancing large language models (LLMs) toward artificial general intelligence (AGI). However, the quadratic increase in computational complexity inherent in traditional attention mechanisms presents a prohibitive overhead. Existing approaches either impose strongly biased structures, such as sink or window attention which are task-specific, or radically modify the attention mechanism into linear approximations, whose performance in complex reasoning tasks remains inadequately explored. In this work, we propose a solution that adheres to the "less structure" principle, allowing the model to determine where to attend autonomously, rather than introducing predefined biases. We introduce Mixture of Block Attention (MoBA), an innovative approach that applies the principles of Mixture of Experts (MoE) to the attention mechanism. This novel architecture demonstrates superior performance on long-context tasks while offering a key advantage: the ability to seamlessly transition between full and sparse attention, enhancing efficiency without the risk of compromising performance. MoBA has already been deployed to support Kimi's long-context requests and demonstrates significant advancements in efficient attention computation for LLMs. Our code is available at https://github.com/MoonshotAI/MoBA.