MixReasoning: Switching Modes to Think

Haiquan Lu, Gongfan Fang, Xinyin Ma, Qi Li, Xinchao Wang

2025-10-08

Summary

This paper introduces a new approach called MixReasoning that makes reasoning models, like those used for solving math problems, more efficient by not spending time on easy parts of the problem.

What's the problem?

Current reasoning models solve problems by breaking them down into many steps and thinking through each one in detail. However, not all steps *need* that much thought. Many are simple and don't require extensive reasoning, which makes the whole process unnecessarily long and slow. It's like writing out every single step of adding 2+2 when you already know the answer.

What's the solution?

MixReasoning solves this by dynamically adjusting how much reasoning is applied to each step. It identifies the difficult, pivotal steps and focuses detailed thinking on those, while handling the easier ones with quick, direct inference instead of lengthy elaboration. This creates a 'mixed' chain of thought – some parts detailed, some parts concise.
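The core idea can be illustrated with a toy sketch. Note this is a simplified illustration, not the paper's actual method: `estimate_difficulty` is a made-up heuristic standing in for whatever signal the model uses internally to decide how deeply to reason about a step.

```python
# Toy sketch of MixReasoning-style mode switching. All names here are
# hypothetical illustrations; the real framework switches reasoning depth
# inside the model's own chain of thought, not with a hand-written heuristic.

def estimate_difficulty(step: str) -> float:
    """Hypothetical proxy: treat longer, symbol-heavy sub-problems as harder."""
    symbols = sum(ch in "+-*/^=()" for ch in step)
    return len(step.split()) + 2 * symbols

def solve_mixed(steps, threshold=8):
    """Spend detailed reasoning only on steps whose difficulty crosses a threshold."""
    trace = []
    for step in steps:
        if estimate_difficulty(step) >= threshold:
            trace.append(("detailed", step))  # expand with a long chain of thought
        else:
            trace.append(("concise", step))   # answer directly, no elaboration
    return trace

trace = solve_mixed([
    "add 2 + 2",
    "factor x^2 - 5x + 6 and find integer roots satisfying the constraint",
])
print([mode for mode, _ in trace])  # -> ['concise', 'detailed']
```

The result is exactly the 'mixed' trace the paper describes: trivial arithmetic gets a concise pass, while the genuinely hard sub-problem receives the full reasoning budget.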

Why it matters?

This matters because it makes reasoning models much more efficient without sacrificing accuracy. By shortening the reasoning process, it enables faster problem-solving and reduces the computational resources required, making these powerful models more practical for real-world applications like complex math and science problems.

Abstract

Reasoning models enhance performance by tackling problems in a step-by-step manner, decomposing them into sub-problems and exploring long chains of thought before producing an answer. However, applying extended reasoning to every step introduces substantial redundancy, as sub-problems vary widely in difficulty and complexity: a small number of pivotal steps are genuinely challenging and decisive for the final answer, while many others only involve straightforward revisions or simple computations. Therefore, a natural idea is to endow reasoning models with the ability to adaptively respond to this variation, rather than treating all steps with the same level of elaboration. To this end, we propose MixReasoning, a framework that dynamically adjusts the depth of reasoning within a single response. The resulting chain of thought then becomes a mixture of detailed reasoning on difficult steps and concise inference on simpler ones. Experiments on GSM8K, MATH-500, and AIME show that MixReasoning shortens reasoning length and substantially improves efficiency without compromising accuracy.