Attention Sinks in Diffusion Language Models
Maximo Eduardo Rulli, Simone Petruzzi, Edoardo Michielon, Fabrizio Silvestri, Simone Scardapane, Alessio Devoto
2025-10-23
Summary
This paper investigates how a newer type of language model, called a Masked Diffusion Language Model, actually *works* internally, specifically focusing on how it pays attention to different parts of a sentence while generating text.
What's the problem?
Traditional language models build sentences word by word, attending only to what came *before* to predict the next word. Along the way, their attention heads often dump a disproportionate share of attention onto a handful of tokens, a phenomenon called 'attention sinking'. While newer 'diffusion' models are efficient and perform well, it wasn't clear whether they exhibit the same behavior, and if so, how it differs from the older models.
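As a rough illustration (this heuristic and its threshold are assumptions for the sketch, not the paper's exact metric), a sink can be spotted by checking which key positions receive an outsized share of attention averaged over all queries:

```python
import numpy as np

# Toy attention matrix: 6 query tokens attending over 6 key tokens.
# Each row is a softmax output and sums to 1; column 0 receives
# outsized mass, mimicking an attention sink on the first token.
attn = np.array([
    [0.70, 0.060, 0.060, 0.060, 0.060, 0.060],
    [0.65, 0.070, 0.070, 0.070, 0.070, 0.070],
    [0.72, 0.056, 0.056, 0.056, 0.056, 0.056],
    [0.68, 0.064, 0.064, 0.064, 0.064, 0.064],
    [0.71, 0.058, 0.058, 0.058, 0.058, 0.058],
    [0.69, 0.062, 0.062, 0.062, 0.062, 0.062],
])

def find_sinks(attn, threshold=0.3):
    """Flag key positions whose mean attention across queries exceeds
    `threshold` (an illustrative cutoff, not the paper's definition)."""
    col_mass = attn.mean(axis=0)        # average attention each key receives
    return np.flatnonzero(col_mass > threshold)

print(find_sinks(attn))  # → [0]: the first token dominates attention
```

In an autoregressive model this dominant column typically stays pinned to the same early position; the paper's observation is that in diffusion models the flagged position moves around during generation.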
What's the solution?
The researchers analyzed how these diffusion models distribute their 'attention' while generating text. They found that attention sinks *do* occur, but unlike in the older models, the positions that soak up attention shift as generation proceeds. Surprisingly, even when the researchers masked out these attention sinks, the diffusion model performed almost as well, showing it is far less reliant on them.
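A minimal sketch of what 'removing' a sink could look like in practice, assuming the simplest intervention of zeroing the sink's column in an attention matrix and renormalizing each row (the function name and setup are illustrative, not the paper's exact ablation procedure):

```python
import numpy as np

# Toy bidirectional attention over 4 key positions; key 2 acts as a sink.
attn = np.array([
    [0.10, 0.10, 0.70, 0.10],
    [0.15, 0.05, 0.60, 0.20],
    [0.20, 0.10, 0.50, 0.20],
    [0.05, 0.15, 0.65, 0.15],
])

def mask_sink(attn, sink_idx):
    """Zero out attention flowing into a sink position and renormalize
    each row, so the remaining keys absorb the freed-up attention."""
    masked = attn.copy()
    masked[:, sink_idx] = 0.0
    return masked / masked.sum(axis=1, keepdims=True)

no_sink = mask_sink(attn, sink_idx=2)
print(no_sink.sum(axis=1))  # → [1. 1. 1. 1.]: rows are valid distributions again
```

The finding is that an autoregressive model degrades sharply under this kind of intervention, whereas a diffusion model's output quality barely changes.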
Why it matters?
Understanding how diffusion language models work is crucial for improving them. This research shows they operate differently from older models, particularly in how they allocate attention. That difference helps explain their robustness and could point toward new ways to build even better language models in the future.
Abstract
Masked Diffusion Language Models (DLMs) have recently emerged as a promising alternative to traditional Autoregressive Models (ARMs). DLMs employ transformer encoders with bidirectional attention, enabling parallel token generation while maintaining competitive performance. Although their efficiency and effectiveness have been extensively studied, the internal mechanisms that govern DLMs remain largely unexplored. In this work, we conduct an empirical analysis of DLM attention patterns, focusing on the attention sinking phenomenon, an effect previously observed in various transformer-based architectures. Our findings reveal that DLMs also exhibit attention sinks, but with distinct characteristics. First, unlike in ARMs, the sink positions in DLMs tend to shift throughout the generation process, displaying a dynamic behaviour. Second, while ARMs are highly sensitive to the removal of attention sinks, DLMs remain robust: masking sinks leads to only a minor degradation in performance. These results provide new insights into the inner workings of diffusion-based language models and highlight fundamental differences in how they allocate and utilize attention compared to autoregressive models.