Sparser Block-Sparse Attention via Token Permutation
Xinghao Wang, Pengyu Wang, Dong Zhang, Chenkun Tan, Shaojun Zhou, Zhaoxiang Liu, Shiguo Lian, Fangxu Liu, Kai Song, Xipeng Qiu
2025-10-27
Summary
This paper introduces a new technique called Permuted Block-Sparse Attention (PBS-Attn) to make large language models faster and more efficient when dealing with very long pieces of text.
What's the problem?
Large language models are great, but they struggle with long texts because of the 'self-attention' mechanism, whose cost grows quadratically with sequence length: doubling the text roughly quadruples the computation. Attention usually concentrates on only *parts* of the text, so methods like 'block-sparse attention' divide the sequence into blocks and skip the unimportant ones. But how well this works depends on the attention pattern: the few key tokens that matter for the queries in one block may be scattered across many other blocks, so those blocks cannot actually be skipped. The result is wasted computation or, if the blocks are skipped anyway, reduced accuracy.
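To see why scattered key tokens hurt block-level sparsity, here is a toy sketch (not from the paper): even when only a small fraction of (query, key) pairs matter, almost every block still contains at least one important pair and must be computed. The `block_coverage` helper and the random importance pattern are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def block_coverage(importance, block):
    """Fraction of (query-block, key-block) tiles containing at least
    one important entry; each such tile must still be computed."""
    n = importance.shape[0]
    nb = n // block
    tiles = importance.reshape(nb, block, nb, block)
    return tiles.any(axis=(1, 3)).mean()

# Toy example: 16 tokens, block size 4. Each query attends to only
# 2 important keys, but those keys are scattered at random.
n, block = 16, 4
rng = np.random.default_rng(0)
importance = np.zeros((n, n), dtype=bool)
for q in range(n):
    importance[q, rng.choice(n, size=2, replace=False)] = True

# Element-level sparsity is high (only 2/16 of entries matter), yet
# scattered keys force most 4x4 blocks to be computed anyway.
print(importance.mean())                 # 0.125 of entries are important
print(block_coverage(importance, block)) # typically close to 1.0
```

The gap between the two numbers is exactly the redundancy that block-sparse attention cannot exploit when important keys are spread out.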
What's the solution?
The researchers developed PBS-Attn, which reorders (permutes) the tokens before applying block-sparse attention. The permutation makes related tokens more likely to land in the same block, so the model can safely skip many more irrelevant blocks. They also wrote optimized GPU code ('permuted-FlashAttention kernels') that runs attention directly on the permuted sequence, so the rearrangement adds little overhead. Essentially, they make the model smarter about *where* to focus its attention while also speeding up the calculations themselves.
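The permutation idea can be sketched with a toy example (a simplification, not the paper's actual algorithm, which derives its permutation from the model's attention statistics and runs inside custom kernels): if every query needs keys 0, 4, 8, and 12, one important key sits in each block and nothing can be skipped; permuting the key axis so those keys become contiguous lets three of four key blocks be skipped.

```python
import numpy as np

def block_coverage(importance, block):
    """Fraction of blocks that contain any important entry."""
    n = importance.shape[0]
    nb = n // block
    return importance.reshape(nb, block, nb, block).any(axis=(1, 3)).mean()

n, block = 16, 4
# Every query depends on keys 0, 4, 8, 12 -- one per block, the
# worst case for block sparsity: all four key blocks are needed.
importance = np.zeros((n, n), dtype=bool)
importance[:, [0, 4, 8, 12]] = True
print(block_coverage(importance, block))  # 1.0: no block can be skipped

# Permute the key axis so the important keys become contiguous.
# (A hypothetical greedy choice for this toy pattern; the paper
# chooses its permutation from observed attention patterns.)
perm = np.argsort(~importance.any(axis=0), kind="stable")
permuted = importance[:, perm]
print(block_coverage(permuted, block))    # 0.25: 3 of 4 key blocks skipped
```

After permuting, the same set of important (query, key) pairs is covered by a quarter of the blocks, which is the kind of block-level sparsity gain PBS-Attn aims for.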
Why it matters?
This work is important because it allows large language models to process much longer texts without requiring a huge increase in computing resources. This means we can build more powerful and capable AI systems that can handle complex tasks like summarizing long documents, understanding lengthy conversations, or working with entire books, all while remaining practical and affordable to run.
Abstract
Scaling the context length of large language models (LLMs) offers significant benefits but is computationally expensive. This expense stems primarily from the self-attention mechanism, whose O(N^2) complexity with respect to sequence length presents a major bottleneck for both memory and latency. Fortunately, the attention matrix is often sparse, particularly for long sequences, suggesting an opportunity for optimization. Block-sparse attention has emerged as a promising solution that partitions sequences into blocks and skips computation for a subset of these blocks. However, the effectiveness of this method is highly dependent on the underlying attention patterns, which can lead to sub-optimal block-level sparsity. For instance, important key tokens for queries within a single block may be scattered across numerous other blocks, leading to computational redundancy. In this work, we propose Permuted Block-Sparse Attention (PBS-Attn), a plug-and-play method that leverages the permutation properties of attention to increase block-level sparsity and enhance the computational efficiency of LLM prefilling. We conduct comprehensive experiments on challenging real-world long-context datasets, demonstrating that PBS-Attn consistently outperforms existing block-sparse attention methods in model accuracy and closely matches the full attention baseline. Powered by our custom permuted-FlashAttention kernels, PBS-Attn achieves an end-to-end speedup of up to 2.75× in long-context prefilling, confirming its practical viability. Code available at https://github.com/xinghaow99/pbs-attn