C^2DLM: Causal Concept-Guided Diffusion Large Language Models
Kairong Han, Nuanqiao Shan, Ziyu Zhao, Zijing Hu, Xinpeng Dong, Junjian Ye, Lujia Pan, Fei Wu, Kun Kuang
2025-12-03
Summary
This paper introduces a new type of language model called C^2DLM, designed to improve reasoning abilities in large language models.
What's the problem?
Current large language models, like those based on predicting the next word (autoregressive models) or using diffusion techniques, aren't very good at complex reasoning. This is because human reasoning relies heavily on understanding cause and effect, something these models often miss. Autoregressive models process information strictly in one direction, and diffusion models don't pay attention to the order of information at all, ignoring how things are causally connected.
What's the solution?
The researchers created C^2DLM, which builds on existing diffusion language models. They first use a teacher model to extract a concept-level causal graph that maps out how the concepts in a text relate to each other as causes and effects. C^2DLM then uses this causal graph to guide its attention mechanism, helping it focus on the important connections between ideas and avoid getting confused by irrelevant details. This allows the model to better understand *why* things happen, not just *what* happens.
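The core idea, guiding attention with a concept-level causal graph, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual code: the function names, the token-to-concept mapping, and the specific masking rule (a token may attend within its own concept, and across concepts only from effect to cause) are all illustrative assumptions.

```python
import numpy as np

def causal_concept_mask(num_tokens, token_to_concept, concept_edges):
    """Build a boolean attention mask from a concept-level causal graph.

    token_to_concept: list mapping each token index to a concept id.
    concept_edges: set of (cause, effect) concept-id pairs.
    Illustrative rule: tokens always attend within their own concept;
    cross-concept attention is allowed only from an effect back to its cause.
    """
    mask = np.zeros((num_tokens, num_tokens), dtype=bool)
    for i in range(num_tokens):          # query token
        for j in range(num_tokens):      # key token
            ci, cj = token_to_concept[i], token_to_concept[j]
            if ci == cj or (cj, ci) in concept_edges:
                mask[i, j] = True
    return mask

def masked_attention(q, k, v, mask):
    """Standard scaled dot-product attention with disallowed
    positions suppressed before the softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy example: 4 tokens, two concepts, with concept 0 causing concept 1.
mask = causal_concept_mask(4, [0, 0, 1, 1], {(0, 1)})
```

In this toy setup, tokens of the "effect" concept can attend to tokens of the "cause" concept but not vice versa, which contrasts with a diffusion model's fully connected attention and with the strict left-to-right masking of autoregressive models.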
Why it matters?
This work is important because it addresses a key weakness in current language models – their lack of strong reasoning skills. By incorporating causal understanding, C^2DLM performs better on reasoning tasks and does so more quickly than previous models, bringing us closer to AI that can truly think and solve problems like humans do.
Abstract
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models. However, both paradigms suffer from insufficient reasoning capabilities. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. But in the AR paradigm, language is modeled as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures. In the DLM paradigm, the attention mechanism is fully connected, which entirely disregards causal order. To fill this gap, we propose a Causal Concept-Guided Diffusion Language Model (C^2DLM). Starting from DLM's fully connected attention, C^2DLM first obtains a concept-level causal graph from the teacher model, and then explicitly guides attention to learn causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C^2DLM improves by 12% with about a 3.2x training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31% across six downstream reasoning tasks. More details are available in the repository: https://github.com/Kairong-Han/C-2-DLM