
Making Mathematical Reasoning Adaptive

Zhejian Lai, Xiang Geng, Zhijun Wang, Yang Bai, Jiahuan Li, Rongxiang Weng, Jingang Wang, Xuezhi Cao, Xunliang Cai, Shujian Huang

2025-10-14


Summary

This paper focuses on improving the mathematical reasoning abilities of large language models (LLMs), which are often seen as a key measure of their intelligence.

What's the problem?

Current LLMs, while powerful, often make mistakes on math problems not because they lack knowledge, but because they rely on shortcuts and superficial patterns instead of genuine problem-solving logic. As a result, they struggle with slightly modified problems even when the underlying math is the same: they lack robustness and the ability to generalize.

What's the solution?

The researchers developed a framework called AdaR, short for Adaptive Reasoning. AdaR creates variations of each math problem by changing the specific numbers involved while keeping the core logic the same. The model is then trained with reinforcement learning with verifiable rewards (RLVR), which rewards it for applying consistent logic across these variations and penalizes reliance on superficial patterns. To keep the training data accurate, the problem-solving logic is extracted as executable code, the answers are computed by running that code, and a sanity check filters out bad examples. Essentially, AdaR forces the model to *think* through the problem instead of just recognizing a pattern.
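To make the synthesis step concrete, here is a minimal, hypothetical sketch in Python: the problem-solving logic is captured as executable code, the variable values are resampled to produce logically equivalent variants, and a simple sanity check filters out malformed answers. The function names, the templating scheme, and the filtering rule are illustrative assumptions, not the paper's actual implementation.

```python
import random

def synthesize_variant(template, solver, value_ranges, max_tries=10):
    """Create one logically equivalent variant of a math word problem.

    `template` is a query with named placeholders, `solver` is the
    problem-solving logic extracted from the original query as code, and
    `value_ranges` maps each variable to candidate values. All names here
    are illustrative, not AdaR's actual API.
    """
    for _ in range(max_tries):
        # Vary the variable values while keeping the underlying logic fixed.
        values = {name: random.choice(list(options))
                  for name, options in value_ranges.items()}
        answer = solver(**values)  # answer produced by code execution
        # Sanity check: keep only variants with a well-formed, plausible answer.
        if isinstance(answer, (int, float)) and answer >= 0:
            return {"query": template.format(**values), "answer": answer}
    return None  # discard the query if no valid variant was found

# Example: variants of "Tom has 3 apples and buys 4 more. How many now?"
variant = synthesize_variant(
    template="Tom has {a} apples and buys {b} more. How many apples does he have now?",
    solver=lambda a, b: a + b,  # extracted problem-solving logic
    value_ranges={"a": range(1, 50), "b": range(1, 50)},
)
print(variant)
```

Because every variant shares the same solver but different numbers, a model can only answer all of them correctly by applying the logic rather than recalling a memorized answer.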

Why it matters?

This work is important because it addresses a fundamental weakness in LLMs – their tendency to ‘cheat’ at reasoning. By improving their ability to reason adaptively, AdaR makes these models more reliable and capable of tackling complex problems, not just memorizing solutions. This is a step towards building AI that can truly understand and solve problems like humans do, and it does so efficiently with relatively little training data.

Abstract

Mathematical reasoning is a primary indicator of the intelligence of large language models (LLMs). However, existing LLMs exhibit failures of robustness and generalization. This paper attributes these deficiencies to spurious reasoning, i.e., producing answers from superficial features. To address this challenge, we propose the AdaR framework to enable adaptive reasoning, wherein models rely on problem-solving logic to produce answers. AdaR synthesizes logically equivalent queries by varying variable values, and trains models with RLVR on these data to penalize spurious logic while encouraging adaptive logic. To improve data quality, we extract the problem-solving logic from the original query and generate the corresponding answer by code execution, then apply a sanity check. Experimental results demonstrate that AdaR improves robustness and generalization, achieving substantial improvement in mathematical reasoning while maintaining high data efficiency. Analysis indicates that data synthesis and RLVR function in a coordinated manner to enable adaptive reasoning in LLMs. Subsequent analyses derive key design insights into the effect of critical factors and the applicability to instruct LLMs. Our project is available at https://github.com/LaiZhejian/AdaR
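The RLVR side of the pipeline depends on a verifiable reward, typically a binary check of the model's final answer against the code-executed ground truth for each synthesized variant. The sketch below illustrates that idea under that assumption; the answer-extraction heuristic and function names are mine, not taken from the paper.

```python
import re

def verifiable_reward(model_output: str, gold_answer: float, tol: float = 1e-6) -> float:
    """Binary reward for RLVR-style training: 1.0 if the model's final numeric
    answer matches the code-executed gold answer, else 0.0. The regex-based
    answer extraction is a simplifying assumption."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not numbers:
        return 0.0
    return 1.0 if abs(float(numbers[-1]) - gold_answer) <= tol else 0.0

# Each synthesized variant has different numbers, so a model that pattern-matches
# the original problem's answer earns reward 0 on the variants, while a model
# that applies the underlying logic is rewarded on all of them.
rewards = [verifiable_reward(out, gold) for out, gold in [
    ("... so the total is 7.", 7),   # adaptive logic on the original -> 1.0
    ("... so the total is 7.", 23),  # spurious pattern on a variant  -> 0.0
]]
print(rewards)  # [1.0, 0.0]
```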