Multi-Token Attention

Olga Golovneva, Tianlu Wang, Jason Weston, Sainbayar Sukhbaatar

2025-04-02

Summary

This paper improves how AI models focus on the important parts of a text when reading it.

What's the problem?

Current AI models decide where to focus by comparing one word at a time, which limits how much information they can use to figure out what's important.

What's the solution?

The researchers created a new method that lets AI models consider several nearby words at once when deciding where to focus, giving them more context and helping them attend to the right parts of the text.

Why does it matter?

This work matters because it can help AI models understand language better, especially in situations where there's a lot of information to sort through.

Abstract

Soft attention is a critical mechanism powering LLMs to locate relevant parts within a given context. However, individual attention weights are determined by the similarity of only a single query and key token vector. This "single token attention" bottlenecks the amount of information used in distinguishing a relevant part from the rest of the context. To address this issue, we propose a new attention method, Multi-Token Attention (MTA), which allows LLMs to condition their attention weights on multiple query and key vectors simultaneously. This is achieved by applying convolution operations over queries, keys and heads, allowing nearby queries and keys to affect each other's attention weights for more precise attention. As a result, our method can locate relevant context using richer, more nuanced information that can exceed a single vector's capacity. Through extensive evaluations, we demonstrate that MTA achieves enhanced performance on a range of popular benchmarks. Notably, it outperforms Transformer baseline models on standard language modeling tasks, and on tasks that require searching for information within long contexts, where our method's ability to leverage richer information proves particularly beneficial.
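The core idea in the abstract — convolving attention logits so that nearby queries and keys influence each other's weights — can be sketched in a few lines of PyTorch. This is a simplified illustration under assumptions, not the authors' implementation: the function name `mta_attention_sketch`, the depthwise 2D kernel shape, and the specific padding/masking scheme are illustrative choices, and the head-mixing convolution the paper also describes is omitted.

```python
import torch
import torch.nn.functional as F

def mta_attention_sketch(q, k, v, conv_kernel, causal=True):
    """Toy sketch of a key-query convolution on attention logits.

    q, k, v: (batch, heads, seq, dim)
    conv_kernel: (heads, 1, cq, ck) depthwise kernel that mixes logits
        across nearby query positions (cq, past-only) and key positions
        (ck, centered; ck must be odd). These shapes are an assumption
        for illustration, not the paper's exact parameterization.
    """
    b, h, n, d = q.shape
    # Standard scaled dot-product logits.
    logits = torch.einsum("bhqd,bhkd->bhqk", q, k) / d ** 0.5
    if causal:
        # Zero out future-key logits before convolving so the
        # convolution cannot leak information from future tokens.
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        logits = logits.masked_fill(mask, 0.0)
    cq, ck = conv_kernel.shape[-2:]
    # Pad causally along queries (past only) and symmetrically along keys,
    # then convolve with one kernel per head (depthwise, groups=heads).
    padded = F.pad(logits, (ck // 2, ck // 2, cq - 1, 0))
    logits = F.conv2d(padded, conv_kernel, groups=h)
    if causal:
        # Re-apply the causal mask before the softmax.
        logits = logits.masked_fill(mask, float("-inf"))
    return torch.softmax(logits, dim=-1) @ v
```

The design point the abstract emphasizes is that the convolution happens on the pre-softmax logits: each attention weight then depends on a neighborhood of query-key similarities rather than a single dot product, which is what lets nearby queries and keys sharpen each other's attention.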