CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion
Moritz Böhle, Amélie Royer, Juliette Marrie, Edouard Grave, Patrick Pérez
2025-12-23
Summary
This paper explores how to make vision-language models, which understand both images and text, more efficient without losing accuracy.
What's the problem?
Current vision-language models often struggle with high-resolution images, long conversations, or videos, because processing all that information requires a lot of computing power and memory. One way to make them faster is a technique called 'cross-attention,' but it usually makes the model less accurate, especially at noticing small details in images.
What's the solution?
The researchers found that a key to improving cross-attention is letting text tokens interact with one another locally *while* also attending to the image. They developed a new method called CASA (Cross-Attention via Self-Attention) that enables exactly this local text-to-text interaction inside the cross-attention layers. CASA brings performance much closer to the more accurate but slower token-insertion approach, while staying efficient on long and complex inputs such as videos.
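The core idea can be sketched as a single attention operation in which each text query attends to all image tokens plus a small causal window of neighboring text tokens. Below is a minimal NumPy sketch under that reading of the method; the function names, window size, and shapes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def casa_attention(text_q, text_k, text_v, img_k, img_v, window=2):
    """Sketch of a CASA-style layer (illustrative): each text query attends
    over all image tokens AND a local causal window of text tokens,
    within one softmax, instead of attending to image tokens only."""
    T, d = text_q.shape
    I = img_k.shape[0]
    # Keys/values are the image tokens followed by the text tokens.
    k = np.concatenate([img_k, text_k], axis=0)   # (I + T, d)
    v = np.concatenate([img_v, text_v], axis=0)   # (I + T, d)
    scores = text_q @ k.T / np.sqrt(d)            # (T, I + T)
    # Mask: image positions are always visible; text positions are visible
    # only inside the causal local window [t - window, t].
    mask = np.zeros((T, I + T), dtype=bool)
    mask[:, :I] = True
    for t in range(T):
        lo = max(0, t - window)
        mask[t, I + lo : I + t + 1] = True
    scores = np.where(mask, scores, -np.inf)
    return softmax(scores) @ v                    # (T, d)
```

Because text tokens only ever attend to a fixed-size local window (plus the image), the per-token cost stays bounded as the conversation or video stream grows, which is the scalability property the summary describes.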
Why does it matter?
This work is important because it allows for the creation of more practical vision-language models that can handle real-world scenarios like understanding videos in real-time or having extended conversations about images, without needing massive amounts of computing resources.
Abstract
Vision-language models (VLMs) are commonly trained by inserting image tokens from a pretrained vision encoder into the textual stream of a language model. This allows text and image information to fully attend to one another within the model, but becomes extremely costly for high-resolution images, long conversations, or streaming videos, both in memory and compute. VLMs leveraging cross-attention are an efficient alternative to token insertion but exhibit a clear performance gap, in particular on tasks involving fine-grained visual details. We find that a key to improving such models is to also enable local text-to-text interaction in the dedicated cross-attention layers. Building on this, we propose CASA, Cross-Attention via Self-Attention, a simple and efficient paradigm which substantially reduces the gap with full token insertion on common image understanding benchmarks, while enjoying the same scalability as cross-attention models when applied to long-context multimodal tasks such as streaming video captioning. For samples and code, please see our project page at https://kyutai.org/casa.