SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning
Jintao Zhang, Kai Jiang, Chendong Xiang, Weiqi Feng, Yuezhou Hu, Haocheng Xi, Jianfei Chen, Jun Zhu
2026-02-20
Summary
This paper investigates ways to make diffusion models, which are used for generating images and videos, faster and more efficient by focusing on the 'attention' part of the model. Attention is a key component that lets the model weigh the most relevant parts of its input, but it is computationally expensive, especially for the long sequences involved in video generation.
What's the problem?
Existing methods for simplifying attention, called 'sparse attention', either cannot be improved through training (they are training-free) or struggle to maintain quality when pushed to very high sparsity. Specifically, the two common masking rules, 'Top-k' and 'Top-p', each have failure modes, and simply fine-tuning sparse attention with the standard diffusion training loss does not always preserve the quality of the generated content. The researchers wanted to understand *why* these problems occur and how to overcome them.
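To make the two masking rules concrete, here is a minimal, illustrative sketch (not the paper's code) of block-level Top-k and Top-p selection over per-block attention scores; the scoring and block layout used by SpargeAttention2 may differ.

```python
import torch

def topk_mask(block_scores: torch.Tensor, k: int) -> torch.Tensor:
    # Top-k: keep a fixed budget of k blocks per query row, no matter how
    # the attention mass is actually distributed.
    idx = block_scores.topk(k, dim=-1).indices
    mask = torch.zeros_like(block_scores, dtype=torch.bool)
    mask.scatter_(-1, idx, torch.ones_like(idx, dtype=torch.bool))
    return mask

def topp_mask(block_scores: torch.Tensor, p: float) -> torch.Tensor:
    # Top-p: keep the smallest set of blocks whose softmax-normalized scores
    # cover a cumulative probability mass p (at least one block is kept).
    probs = block_scores.softmax(dim=-1)
    sorted_probs, order = probs.sort(dim=-1, descending=True)
    needed = (sorted_probs.cumsum(dim=-1) - sorted_probs) < p
    return torch.zeros_like(needed).scatter(-1, order, needed)

# Toy usage: compare how many blocks each rule keeps.
scores = torch.randn(4, 32, 32)  # (heads, query_blocks, key_blocks)
print(topk_mask(scores, k=8).float().mean().item(),
      topp_mask(scores, p=0.9).float().mean().item())
```

Top-k keeps a fixed number of blocks regardless of how the attention mass is spread, while Top-p keeps however many blocks are needed to cover a target mass; each behaves poorly for some score distributions, which is the weakness the paper analyzes.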
What's the solution?
The researchers developed a new method called SpargeAttention2. It combines the strengths of both 'Top-k' and 'Top-p' masking into a hybrid rule that selects which parts of attention to keep more reliably, even at very high sparsity. They also built an efficient implementation for actually *training* the sparse attention, and used a training technique inspired by 'distillation' to help the model keep generating high-quality images and videos while using far less computation.
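How exactly the two rules are combined is not spelled out in this summary; the sketch below shows one plausible, purely illustrative combination (keeping the union of the Top-k and Top-p selections over block-level scores), not the paper's actual rule.

```python
import torch

def hybrid_block_mask(block_scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    # Keep a block if it is among the k highest-scoring blocks OR is needed to
    # cover a cumulative softmax mass p. Top-p adapts the kept count to the
    # score distribution; Top-k guarantees a minimum budget when scores are flat.
    probs = block_scores.softmax(dim=-1)
    sorted_probs, order = probs.sort(dim=-1, descending=True)
    needed = (sorted_probs.cumsum(dim=-1) - sorted_probs) < p
    topp = torch.zeros_like(needed).scatter(-1, order, needed)
    idx = block_scores.topk(k, dim=-1).indices
    topk = torch.zeros_like(topp)
    topk.scatter_(-1, idx, torch.ones_like(idx, dtype=torch.bool))
    return topk | topp
```

Blocks outside such a mask would then be skipped entirely by a block-sparse attention kernel, which is where the computational savings come from.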
Why it matters?
SpargeAttention2 is important because it makes video generation with diffusion models significantly faster, reaching 95% attention sparsity and a 16.2x attention speedup, while maintaining generation quality and consistently outperforming prior sparse attention methods. This means high-quality videos can be created more quickly and efficiently, which has implications for applications like content creation and visual effects.
Abstract
Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective to better preserve generation quality during fine-tuning using sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
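The abstract mentions a distillation-inspired fine-tuning objective but does not spell out its form here; the following is a minimal, hypothetical sketch of such an objective, in which a frozen dense-attention copy of the model acts as a teacher and the sparse-attention model is trained to match its denoising predictions rather than relying only on the standard diffusion loss (`student`, `teacher`, `x_t`, `t`, and `cond` are illustrative names).

```python
import torch
import torch.nn.functional as F

def distillation_finetune_step(student, teacher, x_t, t, cond):
    # Teacher: frozen copy of the diffusion model running full (dense) attention.
    with torch.no_grad():
        dense_pred = teacher(x_t, t, cond)
    # Student: the same model fine-tuned with sparse attention enabled.
    sparse_pred = student(x_t, t, cond)
    # Match the student's denoising prediction to the dense teacher's output.
    return F.mse_loss(sparse_pred, dense_pred)
```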