Large Reasoning Models Learn Better Alignment from Flawed Thinking

ShengYun Peng, Eric Smith, Ivan Evtimov, Song Jiang, Pin-Yu Chen, Hongyuan Zhan, Haozhu Wang, Duen Horng Chau, Mahesh Pasupuleti, Jianfeng Chi

2025-10-06

Summary

This paper addresses a weakness in large reasoning models (LRMs): even though they can 'think' step-by-step, they can still be easily tricked into unsafe or unhelpful responses if you give them a flawed starting point for their reasoning.

What's the problem?

Large reasoning models work by building up a chain of thought before giving an answer, but they don't automatically check whether the initial ideas they're reasoning from are actually correct or safe. If someone intentionally injects a bad or biased premise into that thought process, the model will keep reasoning from the flawed foundation, potentially producing harmful outputs. This also makes them vulnerable to 'jailbreaking', in which users manipulate the model into bypassing its safety protocols.

What's the solution?

The researchers developed a technique called RECAP, which stands for Robust Safety Alignment via Counter-Aligned Prefilling. It further trains these models *after* their initial training, using reinforcement learning. RECAP exposes the model to flawed reasoning paths and teaches it to recognize and override them, rerouting to safer and more helpful responses. Importantly, it requires no changes to standard reinforcement learning from human feedback (RLHF) and adds no significant extra training cost.
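The paper's summary above describes mixing standard prompts with prompts whose reasoning has been prefilled with a flawed chain of thought. As a rough illustration only, here is a minimal Python sketch of how such a training mixture might be assembled; the function names, dictionary fields, and the `<think>` tag format are assumptions, not the authors' actual implementation:

```python
def make_counter_aligned_example(prompt, flawed_cot):
    """Prefill the model's thought channel with a flawed premise.

    During RL training, the reward would favor responses that
    override this prefill and reroute to a safe, helpful answer.
    (The `<think>` delimiter is an assumed CoT format.)
    """
    return {
        "input": f"{prompt}\n<think>\n{flawed_cot}",
        "type": "counter_aligned",
    }


def build_training_mixture(standard_prompts, counter_aligned_pairs):
    """RECAP-style mixture: standard prompts plus counter-aligned prefills."""
    data = [{"input": p, "type": "standard"} for p in standard_prompts]
    data += [
        make_counter_aligned_example(prompt, cot)
        for prompt, cot in counter_aligned_pairs
    ]
    return data
```

The key idea is that the reward signal stays the same as in vanilla RLHF; only the inputs change, so the model learns that a prefilled flawed thought is something to correct rather than continue.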

Why it matters?

This work is important because it makes large reasoning models significantly more reliable and safe. By teaching them to critically evaluate their own reasoning process and resist flawed premises, RECAP reduces the risk of harmful outputs, makes models less susceptible to manipulation, and improves their overall usefulness without sacrificing their reasoning abilities. It's a step toward building AI systems we can trust to be both intelligent and responsible.

Abstract

Large reasoning models (LRMs) "think" by generating structured chain-of-thought (CoT) before producing a final answer, yet they still lack the ability to reason critically about safety alignment and are easily biased when a flawed premise is injected into their thought process. We propose RECAP (Robust Safety Alignment via Counter-Aligned Prefilling), a principled reinforcement learning (RL) method for post-training that explicitly teaches models to override flawed reasoning trajectories and reroute to safe and helpful responses. RECAP trains on a mixture of synthetically generated counter-aligned CoT prefills and standard prompts, requires no additional training cost or modifications beyond vanilla reinforcement learning from human feedback (RLHF), and substantially improves safety and jailbreak robustness, reduces overrefusal, and preserves core reasoning capability -- all while maintaining inference token budget. Extensive analysis shows that RECAP-trained models engage in self-reflection more frequently and remain robust under adaptive attacks, preserving safety even after repeated attempts to override their reasoning.