Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute
Sheng Liu, Tianlang Chen, Pan Lu, Haotian Ye, Yizheng Chen, Lei Xing, James Zou
2025-06-30
Summary
This paper introduces Fractional Reasoning, a new way to improve large language models by letting them adjust how deeply they reason about each problem at inference time, without any extra training.
What's the problem?
Current inference-time methods apply the same amount of reasoning to every question, but different questions need different levels of thinking: too much reasoning wastes compute, while too little leads to worse answers.
What's the solution?
The researchers developed a technique that extracts a latent steering vector inside the model associated with deeper reasoning, then scales it up or down with a continuous control factor at inference time. This way, the model can tailor its reasoning strength to each problem's difficulty, improving both answer accuracy and efficiency.
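The idea above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: it assumes the steering vector is a mean difference between hidden states from deep-reasoning prompts and plain prompts, and that steering is applied by adding the scaled vector to a hidden state (the actual extraction and injection procedure in the paper may differ).

```python
import numpy as np


def steering_vector(deep_states: np.ndarray, shallow_states: np.ndarray) -> np.ndarray:
    """Toy latent steering vector: mean difference between hidden states
    elicited by deep-reasoning prompts and by plain prompts.
    (Illustrative assumption; the paper's extraction may differ.)"""
    return deep_states.mean(axis=0) - shallow_states.mean(axis=0)


def apply_fractional_steering(hidden: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Scale reasoning strength with a continuous control factor alpha:
    alpha=0 leaves the hidden state unchanged, alpha=1 applies full
    steering, and fractional values interpolate between the two."""
    return hidden + alpha * v


# Toy hidden states: 4 prompts, hidden size 8.
rng = np.random.default_rng(0)
deep = rng.normal(size=(4, 8))
shallow = rng.normal(size=(4, 8))
v = steering_vector(deep, shallow)

h = rng.normal(size=(8,))
h_light = apply_fractional_steering(h, v, 0.3)  # light reasoning boost
h_full = apply_fractional_steering(h, v, 1.0)   # full steering
```

In a real model, `v` would be extracted from and added to a transformer layer's activations during generation; the key point is that `alpha` is a single scalar knob that can be tuned per question.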
Why it matters?
This matters because it helps AI systems work smarter, not just harder, by spending the right amount of thinking on each question. The result is better answers at lower cost, making AI more useful for tasks like math problems and complex decision-making.
Abstract
Fractional Reasoning dynamically adjusts reasoning depth during inference to enhance the performance of large language models across various tasks.