Masks Can Be Distracting: On Context Comprehension in Diffusion Language Models
Julianna Piskorz, Cristina Pinneri, Alvaro Correia, Motasem Alfarra, Risheek Garrepalli, Christos Louizos
2025-12-03
Summary
This paper investigates how well Masked Diffusion Language Models, a newer type of language model, actually understand the context of the text they're processing, comparing them to the more traditional Autoregressive Language Models.
What's the problem?
The researchers found two main issues with current Masked Diffusion Language Models. First, even though they're designed to look at the whole input text, they still focus heavily on information that's close to the word they're trying to predict, ignoring important details further away. Second, the models need a lot of 'masking' – essentially, blank spaces – to work, but these masks actually make it *harder* for the model to understand the relevant information in the text, acting like distractions.
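The mask-distraction finding above comes from ablations in which mask tokens are appended after the input and the position of the relevant fact is varied. The paper's exact probe isn't reproduced here; the following is a minimal hypothetical sketch of how such a prompt could be constructed, with all names (`build_prompt`, `MASK`, the toy sentences) invented for illustration:

```python
# Hypothetical probe: plant a key fact at a chosen position in the
# context, then append a varying number of mask tokens, so one can
# measure how accuracy changes with fact position and mask count.
MASK = "[MASK]"

def build_prompt(context_sents, fact, fact_pos, n_masks):
    """Insert `fact` at `fact_pos` among the context sentences and
    append `n_masks` mask tokens after the input."""
    sents = list(context_sents)
    sents.insert(fact_pos, fact)
    return " ".join(sents) + " " + " ".join([MASK] * n_masks)

# Fact sits mid-context; four masks are appended after the input.
prompt = build_prompt(["A.", "B.", "C."], "KEY.", fact_pos=1, n_masks=4)
```

Sweeping `fact_pos` exposes the locality bias, while sweeping `n_masks` with the fact held fixed exposes the distraction effect.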
What's the solution?
To fix this, the team created a new training objective. It makes the model's predictions less sensitive to the number of masks appended to the input, so the masks no longer interfere with its ability to understand the context. By fine-tuning the model so its outputs stay the same regardless of how many masks are attached, they improved its performance and made it more reliable.
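The paper does not spell out the loss here, but the idea of a mask-agnostic objective can be sketched as a consistency penalty: the prediction made with many appended masks should match the prediction made with few. Everything below is a toy stand-in (the model, the drift, and all names are hypothetical), not the authors' implementation:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Hypothetical stand-in for an MDLM forward pass: vocabulary logits for
# one masked position, given `n_masks` appended mask tokens. Here the
# prediction simply drifts as more masks are appended, mimicking the
# distraction effect described in the paper.
def model_logits(n_masks, rng):
    base = [2.0, 1.0, 0.5, 0.0]
    return [b + 0.05 * n_masks * rng.gauss(0, 1) for b in base]

def mask_agnostic_penalty(n_small, n_large, rng):
    """Consistency term: the prediction with many appended masks should
    match the prediction with few (treated as the reference)."""
    p_ref = softmax(model_logits(n_small, rng))
    p_many = softmax(model_logits(n_large, rng))
    return kl(p_ref, p_many)  # would be added, weighted, to the denoising loss

rng = random.Random(0)
penalty = mask_agnostic_penalty(n_small=1, n_large=64, rng=rng)
```

Driving this penalty to zero during fine-tuning would make the model's output invariant to the number of appended masks, which is the property the summary describes.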
Why does it matter?
This research is important because it highlights weaknesses in a promising new type of language model. By identifying these problems and offering a solution, it provides valuable guidance for building better diffusion-based language models that can truly grasp the meaning of text and use all available information effectively.
Abstract
Masked Diffusion Language Models (MDLMs) have recently emerged as a promising alternative to Autoregressive Language Models (ARLMs), leveraging a denoising objective that, in principle, should enable more uniform context utilisation. In this work, we examine the context comprehension abilities of MDLMs and uncover two key limitations. First, despite their more global training objective and bidirectional attention mechanism, similarly to ARLMs, MDLMs exhibit a strong locality bias: performance is highly sensitive to the position of relevant information within the input, favouring local over distant context. Second, we show that appending a large number of mask tokens--required for generation--can significantly degrade context comprehension. Through systematic ablations, we find that these masks act as distractors, reducing the model's ability to process relevant information. To address this, we introduce a mask-agnostic loss function that encourages predictions to remain invariant to the number of appended masks. Fine-tuning with this objective substantially mitigates the distracting effect of masks, improving the robustness of MDLMs. Overall, our findings reveal critical limitations of the current MDLM training paradigm and provide actionable insights for building diffusion-based language models with stronger context comprehension.