MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention
Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu
2024-07-03

Summary
This paper introduces MInference, a new method that speeds up how large language models (LLMs) process long text inputs (the pre-filling stage) by using a technique called dynamic sparse attention.
What's the problem?
The main problem is that processing very long prompts (such as 1 million tokens) with LLMs is slow: because the cost of attention grows quadratically with prompt length, an 8-billion-parameter model needs about 30 minutes to process a 1M-token prompt on a single A100 GPU. Existing methods for making this faster often fall short on long contexts, either sacrificing accuracy or failing to deliver a meaningful speedup.
What's the solution?
To solve this problem, the authors developed MInference, which exploits three recurring sparse patterns (A-shape, Vertical-Slash, and Block-Sparse) in the attention matrices of long-context LLMs. The best pattern for each attention head is determined offline, and the sparse indices within that pattern are then built dynamically for each incoming prompt, so only a small fraction of the attention computation is actually performed. This reduces the time needed to process long prompts by up to 10 times, without any changes to how the models were originally trained and without additional fine-tuning.
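To make the idea concrete, below is a minimal, self-contained sketch (not the authors' optimized GPU kernels) of how a head assigned the Vertical-Slash pattern might build its sparse indices on the fly: attention from only the last few queries is used to pick the strongest vertical columns and diagonal "slash" lines, and full attention is then restricted to those positions. The function name, the `last_q` window, and the top-k budgets are illustrative assumptions, not values from the paper; in the actual method the mask is never materialized densely, since the indices feed block-sparse GPU kernels.

```python
# Illustrative sketch only: a dense-mask approximation of a "Vertical-Slash" attention head.
import torch

def vertical_slash_attention(q, k, v, last_q=64, top_vertical=1000, top_slash=2000):
    """q, k, v: [seq_len, head_dim] tensors for a single attention head."""
    seq_len, head_dim = q.shape
    last_q = min(last_q, seq_len)
    scale = head_dim ** -0.5

    # 1) Estimate attention cheaply using only the last `last_q` queries (last_q x seq_len).
    est = torch.softmax((q[-last_q:] @ k.T) * scale, dim=-1)

    # 2) Keep the most-attended "vertical" key positions (columns of the attention map).
    vertical_idx = est.sum(dim=0).topk(min(top_vertical, seq_len)).indices

    # 3) Keep the most-attended "slash" lines (diagonals at fixed query-key offsets).
    q_pos = torch.arange(seq_len - last_q, seq_len).unsqueeze(1)   # [last_q, 1]
    k_pos = torch.arange(seq_len).unsqueeze(0)                     # [1, seq_len]
    offsets = (q_pos - k_pos).clamp(min=0)                         # [last_q, seq_len]
    slash_score = torch.zeros(seq_len).scatter_add_(0, offsets.flatten(), est.flatten())
    slash_idx = slash_score.topk(min(top_slash, seq_len)).indices

    # 4) Build a dense boolean mask for illustration (the real kernels index sparse blocks
    #    instead of materializing a seq_len x seq_len mask).
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    mask[:, vertical_idx] = True                                   # vertical lines
    for off in slash_idx.tolist():                                 # slash (diagonal) lines
        rows = torch.arange(off, seq_len)
        mask[rows, rows - off] = True
    mask.fill_diagonal_(True)                                      # every query keeps itself
    mask &= torch.ones(seq_len, seq_len, dtype=torch.bool).tril()  # causal

    scores = ((q @ k.T) * scale).masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```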
Why it matters?
This research is important because it makes it possible to use large language models more efficiently, especially as they are increasingly needed for tasks that require understanding long texts. By speeding up processing while keeping accuracy high, MInference helps make advanced AI tools more practical and accessible for various applications.
Abstract
The computational challenges of Large Language Model (LLM) inference remain a significant barrier to their widespread deployment, especially as prompt lengths continue to increase. Due to the quadratic complexity of the attention computation, it takes 30 minutes for an 8B LLM to process a prompt of 1M tokens (i.e., the pre-filling stage) on a single A100 GPU. Existing methods for speeding up pre-filling often fail to maintain acceptable accuracy or efficiency when applied to long-context LLMs. To address this gap, we introduce MInference (Million-tokens Inference), a sparse calculation method designed to accelerate pre-filling of long-sequence processing. Specifically, we identify three unique patterns in long-context attention matrices (the A-shape, Vertical-Slash, and Block-Sparse) that can be leveraged for efficient sparse computation on GPUs. We determine the optimal pattern for each attention head offline and dynamically build sparse indices based on the assigned pattern during inference. With the pattern and sparse indices, we perform efficient sparse attention calculations via our optimized GPU kernels to significantly reduce the latency in the pre-filling stage of long-context LLMs. Our proposed technique can be directly applied to existing LLMs without any modifications to the pre-training setup or additional fine-tuning. By evaluating on a wide range of downstream tasks, including InfiniteBench, RULER, PG-19, and Needle In A Haystack, and models including LLaMA-3-1M, GLM4-1M, Yi-200K, Phi-3-128K, and Qwen2-128K, we demonstrate that MInference effectively reduces inference latency by up to 10x for pre-filling on an A100, while maintaining accuracy. Our code is available at https://aka.ms/MInference.
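The offline step mentioned in the abstract (assigning one of the three patterns to each head) can be pictured as a simple search: on a calibration prompt, measure how much of each head's full attention mass every candidate pattern recovers under the same compute budget, and keep the best one. The sketch below is an illustrative assumption of such a selection criterion, not the authors' exact procedure; the mask-builder helpers named in the usage comment are hypothetical.

```python
# Hedged sketch of per-head offline pattern selection via attention-mass recall.
import torch

def attention_recall(attn: torch.Tensor, mask: torch.Tensor) -> float:
    """Fraction of total attention probability covered by a sparse boolean mask."""
    return (attn * mask).sum().item() / attn.sum().item()

def choose_pattern_for_head(attn: torch.Tensor, candidate_masks: dict) -> str:
    """attn: full [seq_len, seq_len] attention probabilities for one head on a calibration prompt.
    candidate_masks: pattern name -> boolean mask of the same shape, each under the same budget."""
    recalls = {name: attention_recall(attn, mask) for name, mask in candidate_masks.items()}
    return max(recalls, key=recalls.get)

# Usage (per head, done once offline). The chosen pattern is stored; at inference time only
# the sparse indices *within* that pattern are rebuilt dynamically for each new prompt.
# pattern = choose_pattern_for_head(full_attn_head, {
#     "A-shape": ashape_mask(seq_len),             # hypothetical helpers that build each
#     "Vertical-Slash": vertical_slash_mask(...),  # pattern's mask under a fixed FLOP budget
#     "Block-Sparse": block_sparse_mask(...),
# })
```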