MR-Align: Meta-Reasoning Informed Factuality Alignment for Large Reasoning Models

Xinming Wang, Jian Xu, Bin Yu, Sheng Lian, Hongzhu Yi, Yi Chen, Yingjian Zhu, Boran Wang, Hongming Yang, Han Hu, Xu-Yao Zhang, Cheng-Lin Liu

2025-11-04

Summary

This paper investigates why large reasoning models, despite excelling at complex multi-step thinking, often get fact-based questions wrong, and proposes a new method, MR-ALIGN, to improve their factual accuracy.

What's the problem?

Large reasoning models gain surprisingly little on questions that depend on specific evidence, even when they seem to carry out the reasoning needed to find the answer. The issue, which the authors call a reasoning-answer hit gap, is that the model can *identify* the correct facts during its thinking but then doesn't actually *use* those facts when forming its final answer, leading to inaccuracies. It's like knowing the right information but forgetting it by the time you need to write down the answer.

What's the solution?

The researchers developed a technique called MR-ALIGN that improves the *way* the model thinks, rather than just checking the final answer, and does so without relying on an external fact-checker. It estimates how likely each transition between atomic segments of the model's thinking is to lead toward a correct answer, then builds an implicit reward from those probabilities that reinforces beneficial reasoning patterns and suppresses defective ones. By reshaping token-level training signals into probability-aware segment scores, it nudges the model toward coherent reasoning trajectories that end in factually correct answers.
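The core idea of segment re-weighting can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual formulation: the function names, the exponential weighting, the baseline of 0.5, and the way segment probabilities are obtained are all assumptions made for the example.

```python
import math

def transition_aware_weights(segment_correct_probs, baseline=0.5, temperature=1.0):
    """Toy sketch of a transition-aware implicit reward.

    segment_correct_probs: for each atomic thinking segment, an estimated
    probability that continuing from that segment leads to a factually
    correct final answer. Segments above the baseline get weights > 1
    (reinforced); segments below it get weights < 1 (suppressed).
    Hypothetical formulation; the paper's exact scoring may differ.
    """
    return [math.exp((p - baseline) / temperature) for p in segment_correct_probs]

def segment_scores(token_logprobs_per_segment, segment_correct_probs):
    """Reshape token-level signals into probability-aware segment scores:
    average the token log-probs within each segment, then scale the
    average by that segment's transition-aware weight."""
    weights = transition_aware_weights(segment_correct_probs)
    scores = []
    for token_logprobs, weight in zip(token_logprobs_per_segment, weights):
        mean_logprob = sum(token_logprobs) / len(token_logprobs)
        scores.append(weight * mean_logprob)
    return scores
```

In this sketch, a segment estimated at 0.9 probability of leading to a correct answer receives a weight above 1, while one at 0.2 receives a weight below 1, so a training objective built on these scores would push probability mass toward the reliable reasoning steps.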

Why it matters?

This research shows that improving the reasoning *process* itself is crucial for making large reasoning models more reliable and truthful. Instead of only correcting the final answer, focusing on how the model arrives at that answer can significantly boost accuracy and cut down on misleading reasoning. This is a step towards building AI systems that are not only capable but also trustworthy.

Abstract

Large reasoning models (LRMs) show strong capabilities in complex reasoning, yet their marginal gains on evidence-dependent factual questions are limited. We find this limitation is partially attributable to a reasoning-answer hit gap, where the model identifies the correct facts during reasoning but fails to incorporate them into the final response, thereby reducing factual fidelity. To address this issue, we propose MR-ALIGN, a Meta-Reasoning informed alignment framework that enhances factuality without relying on external verifiers. MR-ALIGN quantifies state transition probabilities along the model's thinking process and constructs a transition-aware implicit reward that reinforces beneficial reasoning patterns while suppressing defective ones at the atomic thinking segments. This re-weighting reshapes token-level signals into probability-aware segment scores, encouraging coherent reasoning trajectories that are more conducive to factual correctness. Empirical evaluations across four factual QA datasets and one long-form factuality benchmark show that MR-ALIGN consistently improves accuracy and truthfulness while reducing misleading reasoning. These results highlight that aligning the reasoning process itself, rather than merely the outputs, is pivotal for advancing factuality in LRMs.