CAT: Causal Attention Tuning For Injecting Fine-grained Causal Knowledge into Large Language Models

Kairong Han, Wenshuo Zhao, Ziyu Zhao, JunJian Ye, Lujia Pan, Kun Kuang

2025-09-15

Summary

This paper investigates whether powerful Large Language Models (LLMs) truly *understand* cause and effect, or if they just recognize patterns in the data they're trained on.

What's the problem?

LLMs are really good at predicting what comes next in text, but they often learn the wrong things. They pick up on coincidences and correlations that aren't actually causal relationships. This means they don't perform well when faced with situations that are different from what they've seen before – what researchers call 'out-of-distribution' scenarios. Essentially, they can be fooled because they don't understand *why* things happen, only *that* they happen together.

What's the solution?

The researchers developed a method called Causal Attention Tuning (CAT) to help LLMs focus on actual cause-and-effect relationships. They built an automated pipeline that uses human priors to generate token-level causal signals, labeling which parts of the text are causally important. Then, they modified the LLM’s ‘attention mechanism’ – the part that decides which tokens to focus on – so that it prioritizes these causally relevant parts of the text. This ‘Re-Attention’ process helps the model ignore misleading patterns and spurious correlations and concentrate on the true drivers of events.
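To make the idea concrete, here is a minimal, illustrative sketch of how attention weights could be steered toward causally labeled tokens. The summary does not give the paper's exact formulation, so the additive bias, the `alpha` strength knob, and the function names below are all hypothetical; the real Re-Attention mechanism is applied during training and is described in the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def re_attention(scores, causal_mask, alpha=1.0):
    """Bias raw (pre-softmax) attention scores toward causal tokens.

    scores: raw attention scores for one query over the context tokens.
    causal_mask: 1.0 for tokens the pipeline labels causally relevant, 0.0 otherwise.
    alpha: strength of the causal bias (a hypothetical knob, not from the paper).
    """
    biased = [s + alpha * m for s, m in zip(scores, causal_mask)]
    return softmax(biased)

# Example: three tokens with equal raw scores; token 1 is labeled causal.
base = softmax([0.5, 0.5, 0.5])                              # uniform, each ~0.333
tuned = re_attention([0.5, 0.5, 0.5], [0.0, 1.0, 0.0], alpha=2.0)
# The causal token's attention weight is now much larger than the others.
```

The key design point this sketch captures is that the causal signal reshapes *where* the model attends rather than directly changing its predictions, which is why the method can suppress spurious correlations at their source.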

Why it matters?

This work is important because it addresses a fundamental limitation of LLMs. If we want these models to be reliable and trustworthy, especially in real-world applications like science or medicine, they need to be able to reason about cause and effect. CAT offers a way to improve their reasoning abilities and make them more robust when dealing with unfamiliar situations, moving beyond simply memorizing patterns to actually understanding the world.

Abstract

Large Language Models (LLMs) have achieved remarkable success across various domains. However, a fundamental question remains: Can LLMs effectively utilize causal knowledge for prediction and generation? Through empirical studies, we find that LLMs trained directly on large-scale data often capture spurious correlations rather than true causal relationships, leading to suboptimal performance, especially in out-of-distribution (OOD) scenarios. To address this challenge, we propose Causal Attention Tuning (CAT), a novel approach that injects fine-grained causal knowledge into the attention mechanism. We propose an automated pipeline that leverages human priors to automatically generate token-level causal signals and introduce the Re-Attention mechanism to guide training, helping the model focus on causal structures while mitigating noise and biases in attention scores. Experimental results on our proposed Spurious Token Game (STG) benchmark and multiple downstream tasks demonstrate that our approach effectively leverages causal knowledge for prediction and remains robust in OOD scenarios. Implementation details can be found at https://github.com/Kairong-Han/CAT.