< Explain other AI papers

Reasoning Introduces New Poisoning Attacks Yet Makes Them More Complicated

Hanna Foerster, Ilia Shumailov, Yiren Zhao, Harsh Chaudhari, Jamie Hayes, Robert Mullins, Yarin Gal

2025-09-12

Reasoning Introduces New Poisoning Attacks Yet Makes Them More Complicated

Summary

This research explores how to secretly manipulate Large Language Models (LLMs) by subtly changing their reasoning process, rather than directly altering their responses.

What's the problem?

Previous attacks on LLMs focused on making them give wrong answers when given a specific trigger phrase. However, newer LLMs that 'think step-by-step' before answering are harder to trick. The problem is that while it's possible to inject hidden instructions into the model's reasoning, getting those instructions to actually change the final answer is surprisingly difficult because the model can often correct itself during its thought process.

What's the solution?

The researchers developed a new type of attack called 'decomposed reasoning poison'. Instead of a single trigger, they split the malicious instruction into multiple, harmless-looking parts and inserted them into the model's reasoning steps. This makes the attack harder to detect because no single part seems suspicious. They then tested how reliably this method could change the model's final answers.

Why it matters?

This work shows that even though LLMs are becoming more sophisticated, they aren't completely immune to manipulation. More importantly, it suggests that the very reasoning abilities that make these models powerful also provide some natural defense against attacks. Understanding these defenses is crucial for building more secure and trustworthy AI systems.

Abstract

Early research into data poisoning attacks against Large Language Models (LLMs) demonstrated the ease with which backdoors could be injected. More recent LLMs add step-by-step reasoning, expanding the attack surface to include the intermediate chain-of-thought (CoT) and its inherent trait of decomposing problems into subproblems. Using these vectors for more stealthy poisoning, we introduce ``decomposed reasoning poison'', in which the attacker modifies only the reasoning path, leaving prompts and final answers clean, and splits the trigger across multiple, individually harmless components. Fascinatingly, while it remains possible to inject these decomposed poisons, reliably activating them to change final answers (rather than just the CoT) is surprisingly difficult. This difficulty arises because the models can often recover from backdoors that are activated within their thought processes. Ultimately, it appears that an emergent form of backdoor robustness is originating from the reasoning capabilities of these advanced LLMs, as well as from the architectural separation between reasoning and final answer generation.