SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference
Jintao Zhang, Chendong Xiang, Haofeng Huang, Jia Wei, Haocheng Xi, Jun Zhu, Jianfei Chen
2025-02-26
Summary
This paper introduces SpargeAttn, a new method that makes AI models run faster by being smarter about which parts of the input they focus on.
What's the problem?
Big AI models are slow because their attention mechanism compares every piece of the input with every other piece, even when much of it isn't important. Existing speedups exploit this only for specific models or tasks, not for all kinds of models.
What's the solution?
The researchers created SpargeAttn, which works for any type of AI model. It uses a two-stage process to quickly and accurately figure out which parts of the information matter and which can be ignored, letting the model skip the unnecessary calculations and run much faster without losing accuracy.
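To make the first filtering stage concrete, here is a minimal sketch of block-level importance prediction. It assumes mean-pooled block representatives and a hypothetical threshold `tau`; it illustrates the general idea of cheaply predicting a coarse attention map to decide which blocks to compute, not the paper's exact prediction rule.

```python
import numpy as np

def predict_block_mask(Q, K, block=64, tau=0.02):
    """Stage-1 sketch: cheaply predict which (query, key) blocks matter.

    Each block of queries/keys is mean-pooled into a single representative
    vector, a coarse block-level attention map is computed from those
    representatives, and only block pairs above the threshold are kept.
    `tau` and the mean-pooling choice are illustrative assumptions.
    """
    n, d = Q.shape
    nb = n // block
    q_pool = Q.reshape(nb, block, d).mean(axis=1)   # (nb, d) block summaries
    k_pool = K.reshape(nb, block, d).mean(axis=1)   # (nb, d)
    coarse = q_pool @ k_pool.T / np.sqrt(d)         # coarse attention logits
    coarse = np.exp(coarse - coarse.max(axis=-1, keepdims=True))
    coarse /= coarse.sum(axis=-1, keepdims=True)    # row-softmax over key blocks
    return coarse > tau                             # True = compute this block

rng = np.random.default_rng(0)
Q = rng.standard_normal((1024, 64))
K = rng.standard_normal((1024, 64))
mask = predict_block_mask(Q, K)
# Random data is not sparse; real attention maps concentrate mass in few blocks.
print(f"would compute {mask.mean():.0%} of the block matmuls")
```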
Why does it matter?
This matters because it can make all kinds of AI models run much faster, from those that work with text to those that handle images and videos. Faster models can be deployed in more places and for more tasks, potentially enabling new applications and improvements in areas like language translation, image generation, and video processing.
Abstract
An efficient attention implementation is essential for large models due to its quadratic time complexity. Fortunately, attention commonly exhibits sparsity, i.e., many values in the attention map are near zero, allowing for the omission of corresponding computations. Many studies have utilized the sparse pattern to accelerate attention. However, most existing works focus on optimizing attention within specific models by exploiting certain sparse patterns of the attention map. A universal sparse attention that guarantees both the speedup and end-to-end performance of diverse models remains elusive. In this paper, we propose SpargeAttn, a universal sparse and quantized attention for any model. Our method uses a two-stage online filter: in the first stage, we rapidly and accurately predict the attention map, enabling some of the matrix multiplications in attention to be skipped. In the second stage, we design an online softmax-aware filter that incurs no extra overhead and further skips some matrix multiplications. Experiments show that our method significantly accelerates diverse models, including language, image, and video generation, without sacrificing end-to-end metrics. The code is available at https://github.com/thu-ml/SpargeAttn.
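The abstract's second stage is an online softmax-aware filter. Below is a hedged sketch of how such a filter can be folded into a FlashAttention-style online-softmax loop: a key block whose maximum score sits far below the running row maximum contributes a negligible amount to the softmax sum, so its multiplications can be skipped. The `skip_eps` threshold and this particular skip criterion are assumptions for illustration; the paper's actual filter may differ.

```python
import numpy as np

def sparse_flash_attention(Q, K, V, block=64, skip_eps=1e-4):
    """Sketch of an online-softmax loop with a softmax-aware skip test.

    Standard FlashAttention-style recurrence, plus one extra check:
    a key block whose maximum score is more than log(1/skip_eps) below
    the running row maximum contributes at most ~block * skip_eps to the
    softmax denominator, so its P @ V multiplication is skipped.
    `skip_eps` and this skip criterion are illustrative assumptions.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.empty_like(V)
    log_eps = np.log(skip_eps)
    for qs in range(0, n, block):
        q = Q[qs:qs + block]
        m = np.full(block, -np.inf)           # running row maxima
        l = np.zeros(block)                   # running softmax denominators
        acc = np.zeros((block, d))            # unnormalized output accumulator
        for ks in range(0, n, block):
            s = q @ K[ks:ks + block].T * scale
            row_max = s.max(axis=-1)
            # Softmax-aware filter: reuses the running maxima the online
            # softmax already tracks, so the test itself is nearly free.
            if np.all(row_max - m < log_eps):
                continue
            m_new = np.maximum(m, row_max)
            p = np.exp(s - m_new[:, None])
            alpha = np.exp(m - m_new)         # rescale previous accumulator
            l = l * alpha + p.sum(axis=-1)
            acc = acc * alpha[:, None] + p @ V[ks:ks + block]
            m = m_new
        out[qs:qs + block] = acc / l[:, None]
    return out

# Check against dense attention: skipped blocks were negligible by construction.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((512, 64)) for _ in range(3))
approx = sparse_flash_attention(Q, K, V)
s = Q @ K.T / np.sqrt(64)
p = np.exp(s - s.max(axis=-1, keepdims=True))
dense = (p / p.sum(axis=-1, keepdims=True)) @ V
print(np.abs(approx - dense).max())           # small
```

Because the skip test only reuses quantities the online softmax already maintains, it adds essentially no work, which is consistent with the abstract's claim that the second-stage filter incurs no extra overhead.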