Native Hybrid Attention for Efficient Sequence Modeling
Jusen Du, Jiaxi Hu, Tao Zhang, Weigao Sun, Yu Cheng
2025-10-09
Summary
This paper introduces a new way to build transformer models, called Native Hybrid Attention (NHA), that tries to get the best of both worlds: the power of traditional transformers and the speed of more efficient, but usually less accurate, attention mechanisms.
What's the problem?
Transformers are really good at understanding sequences of information, like sentences, but they become very slow and require a lot of computing power when dealing with long sequences. Simpler, faster attention methods exist, but they often forget important details from earlier in the sequence, leading to less accurate results, especially when the model needs to recall information from far back in the sequence.
What's the solution?
The researchers created NHA, which combines both full and linear attention in a clever way. It uses a linear attention method to quickly update a memory of the long-term context, and then adds in the raw keys and values from a small, recent window of tokens. Finally, it applies a single standard softmax attention over all of this information, so each token and each head can weigh long-term memory against recent context without any extra fusion parameters. Importantly, the amount of 'recent' information considered can be easily adjusted through a single hyperparameter, the sliding window size, allowing a smooth transition between fully linear and fully traditional attention, and all layers of the model can be built the same way.
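The idea above can be sketched in a few lines of NumPy. This is a toy, single-head, single-slot illustration, not the paper's implementation: the exponential-moving-average update of the memory slot is a stand-in for the paper's linear-RNN update, and names like `nha_step` and the `decay` parameter are invented for this sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def nha_step(q_t, k_hist, v_hist, mem_k, mem_v, window, decay=0.9):
    """One NHA-style attention step for a single head (toy sketch).

    q_t            : query vector for the current token, shape (d,)
    k_hist, v_hist : keys/values for all tokens so far, shape (t, d)
    mem_k, mem_v   : the long-term memory slot, shape (d,)
    window         : sliding-window size (the paper's single hyperparameter)
    decay          : EMA rate -- an assumed stand-in for the linear-RNN update
    """
    # Long-term path: fold the newest key/value into the memory slot
    # with a simple linear recurrence (exact rule in the paper differs).
    mem_k = decay * mem_k + (1 - decay) * k_hist[-1]
    mem_v = decay * mem_v + (1 - decay) * v_hist[-1]

    # Short-term path: raw keys/values from the recent sliding window.
    win_k = k_hist[-window:]
    win_v = v_hist[-window:]

    # One softmax over memory slot + window tokens: context-dependent
    # weighting of long- vs short-term info, no extra fusion parameters.
    keys = np.vstack([mem_k[None, :], win_k])
    vals = np.vstack([mem_v[None, :], win_v])
    scores = keys @ q_t / np.sqrt(q_t.shape[0])
    out = softmax(scores) @ vals
    return out, mem_k, mem_v
```

Setting `window` to the full sequence length recovers something close to ordinary softmax attention, while `window = 0` would leave only the linear-memory path, which mirrors the smooth linear-to-full interpolation the paper describes.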
Why it matters?
This work is important because it offers a way to build transformer models that are both accurate and efficient. The experiments show NHA performs better than existing methods on tasks requiring good memory and reasoning, and it can even be used to speed up existing large language models without sacrificing too much accuracy. This could lead to faster and more accessible AI applications.
Abstract
Transformers excel at sequence modeling but face quadratic complexity, while linear attention offers improved efficiency but often compromises recall accuracy over long contexts. In this work, we introduce Native Hybrid Attention (NHA), a novel hybrid architecture of linear and full attention that integrates both intra- and inter-layer hybridization into a unified layer design. NHA maintains long-term context in key-value slots updated by a linear RNN, and augments them with short-term tokens from a sliding window. A single softmax attention operation is then applied over all keys and values, enabling per-token and per-head context-dependent weighting without requiring additional fusion parameters. The inter-layer behavior is controlled through a single hyperparameter, the sliding window size, which allows smooth adjustment between purely linear and full attention while keeping all layers structurally uniform. Experimental results show that NHA surpasses Transformers and other hybrid baselines on recall-intensive and commonsense reasoning tasks. Furthermore, pretrained LLMs can be structurally hybridized with NHA, achieving competitive accuracy while delivering significant efficiency gains. Code is available at https://github.com/JusenD/NHA.