
MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

Tianyu Fu, Haofeng Huang, Xuefei Ning, Genghan Zhang, Boju Chen, Tianqi Wu, Hongyi Wang, Zixiao Huang, Shiyao Li, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

2024-06-28

Summary

This paper introduces MoA, short for Mixture of Attention, a method designed to improve how large language models (LLMs) handle long pieces of text. Instead of forcing every part of the model to use the same attention pattern, MoA gives different parts their own sparse patterns, which helps the model run faster and use less memory.

What's the problem?

Large language models need to process a lot of information at once, especially when dealing with long texts. Traditional sparse-attention methods take a uniform approach: they apply the same sparse attention mask to every attention head and every input length. This one-size-fits-all design ignores the fact that different heads attend to information in very different ways, so it gives up accuracy in some places and efficiency in others, leading to slower and less capable models.
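To make that baseline concrete, here is a minimal sketch of a uniform sliding-window sparse mask. The function name and window size are illustrative, not taken from the paper; the point is that the same mask is reused for every head and layer, no matter how long the input is.

```python
# Minimal sketch of the uniform-sparsity baseline described above:
# every attention head in every layer gets the same fixed sliding-window
# mask, regardless of input length. Window size is illustrative.
import torch

def uniform_sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where query i may attend to keys in [i - window + 1, i]."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]            # no attention to future tokens
    local = (idx[:, None] - idx[None, :]) < window   # only the last `window` tokens
    return causal & local

# The same mask is reused for every head and layer,
# which is exactly the rigidity MoA removes.
mask = uniform_sliding_window_mask(seq_len=8, window=3)
print(mask.int())
```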

What's the solution?

To solve this problem, the authors introduce MoA, which tailors a different sparse attention pattern to each attention head and layer of the model. MoA constructs a 'search space' of attention configurations and rules for how they scale with input length, profiles the model, and then picks the combination that works best. As a result, some heads can widen their focus to cover long sequences while others stay concentrated on short, local context. In experiments, MoA increased the effective context length by about 3.9× at the same average attention span, improved retrieval accuracy over the uniform baseline, and reduced GPU memory use while speeding up decoding.
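The sketch below is illustrative rather than the paper's actual implementation: it shows what heterogeneous, length-dependent spans could look like, with each head given a hypothetical rule of the form span = alpha + beta × input_length, so one head keeps a fixed local window while another widens as the sequence grows. MoA discovers such rules automatically; the values here are made up.

```python
# Hedged sketch of per-head "elastic" attention spans: each head has its own
# window, and the window can grow with input length (span = alpha + beta * seq_len).
# The rule parameters below are invented for illustration.
import torch

def head_specific_masks(seq_len: int, rules: list[tuple[float, float]]) -> torch.Tensor:
    """Return one causal sliding-window mask per head, shaped (heads, seq, seq)."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]
    masks = []
    for alpha, beta in rules:
        span = max(1, int(alpha + beta * seq_len))   # window grows with length if beta > 0
        local = (idx[:, None] - idx[None, :]) < span
        masks.append(causal & local)
    return torch.stack(masks)

# Hypothetical rules: head 0 keeps a fixed local window (beta = 0),
# head 1 expands its span as the sequence grows.
rules = [(64.0, 0.0), (16.0, 0.25)]
for seq_len in (128, 1024):
    masks = head_specific_masks(seq_len, rules)
    spans = [int(m[-1].sum()) for m in masks]        # keys attended by the last query
    print(seq_len, spans)
```

Running this prints different per-head spans at 128 versus 1024 tokens, mirroring the paper's observation that some heads expand with sequence length while others stay local.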

Why it matters?

This research is important because it makes large language models more efficient and capable of handling longer texts without sacrificing performance. By optimizing how these models pay attention to different pieces of information, MoA can help improve applications like chatbots, translation services, and any technology that relies on understanding complex or lengthy information.

Abstract

Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths. However, this uniform approach fails to capture the diverse attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose the Mixture of Attention (MoA), which automatically tailors distinct sparse attention configurations to different heads and layers. MoA constructs and navigates a search space of various attention patterns and their scaling rules relative to input sequence lengths. It profiles the model, evaluates potential configurations, and pinpoints the optimal sparse attention compression plan. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer sequences, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by 3.9× with the same average attention span, boosting retrieval accuracy by 1.5-7.1× over the uniform-attention baseline across Vicuna-7B, Vicuna-13B, and Llama3-8B models. Moreover, MoA narrows the capability gaps between sparse and dense models, reducing the maximum relative performance drop from 9%-36% to within 5% across two long-context understanding benchmarks. MoA achieves a 1.2-1.4× GPU memory reduction and boosts decode throughput by 5.5-6.7× for 7B and 13B dense models on a single GPU, with minimal impact on performance.
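To illustrate the "profile, evaluate, and pick a plan" step in spirit, the toy sketch below searches a tiny hand-made space: each head chooses a span from a few candidates, and the plan with the lowest profiled accuracy loss that stays under an average-span budget wins. The profiling numbers are invented; MoA obtains its estimates by profiling the real model on calibration data, and its search space is far richer than this.

```python
# Toy stand-in for the automatic search described in the abstract: pick a
# per-head span plan that minimizes a (mocked) profiled accuracy loss subject
# to an average-span budget. All numbers below are invented for illustration.
from itertools import product

candidate_spans = [128, 512, 2048]
# profiled_loss[h][span] = estimated accuracy loss if head h is restricted to `span`
profiled_loss = [
    {128: 0.02, 512: 0.01, 2048: 0.00},   # head 0 barely needs a long span
    {128: 0.40, 512: 0.05, 2048: 0.00},   # head 1 degrades sharply when truncated
]
budget = 1100  # maximum allowed average span per head

best_plan, best_loss = None, float("inf")
for plan in product(candidate_spans, repeat=len(profiled_loss)):
    if sum(plan) / len(plan) > budget:
        continue  # violates the average-span budget
    loss = sum(profiled_loss[h][s] for h, s in enumerate(plan))
    if loss < best_loss:
        best_plan, best_loss = plan, loss

print(best_plan, best_loss)  # (128, 2048): the local head stays short, the long-range head keeps its span
```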