MUR: Momentum Uncertainty guided Reasoning for Large Language Models
Hang Yan, Fangzhi Xu, Rongman Xu, Yifei Li, Jian Zhang, Haoran Luo, Xiaobao Wu, Luu Anh Tuan, Haiteng Zhao, Qika Lin, Jun Liu
2025-07-25
Summary
This paper introduces MUR, a method that helps large language models reason more efficiently and accurately by deciding when to spend extra computation on the difficult steps of a problem.
What's the problem?
Large language models often overthink simple reasoning steps and allocate test-time computation indiscriminately, which makes them slower and less efficient without improving their answers.
What's the solution?
The researchers developed MUR, inspired by the idea of momentum in physics: it tracks the model's uncertainty at each reasoning step as a momentum-style running aggregate and dynamically allocates more thinking resources to difficult steps, while skipping unnecessary effort on easier ones.
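The momentum idea can be sketched as an exponential moving average of per-step uncertainty, with extra compute triggered when the current step's uncertainty spikes above that smoothed history. This is a minimal illustration, not the paper's implementation; the function names, the decay factor, and the threshold rule are all assumptions.

```python
# Hypothetical sketch of momentum-style uncertainty gating.
# Names and the specific trigger rule are assumptions, not the paper's code.

def momentum_uncertainty(step_uncertainties, beta=0.9):
    """Momentum (exponential moving average) over per-step uncertainties.

    Each step's uncertainty could be, e.g., the negative mean token
    log-probability of that reasoning step (assumption for illustration).
    """
    m = 0.0
    history = []
    for u in step_uncertainties:
        m = beta * m + (1 - beta) * u  # momentum update
        history.append(m)
    return history

def needs_more_compute(step_uncertainty, momentum, threshold=1.0):
    """Trigger extra test-time compute (e.g., re-sampling or deeper
    reasoning) when the current step's uncertainty exceeds its
    momentum-smoothed history by a threshold factor."""
    return step_uncertainty > threshold * momentum
```

Under this sketch, a sudden jump in uncertainty relative to the running momentum marks a "hard" step worth extra computation, while steps whose uncertainty stays at or below the trend are answered cheaply.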
Why does it matter?
This matters because MUR cuts the required computation by more than half while improving accuracy, making AI systems faster and better at solving complex tasks without any extra training.
Abstract
Momentum Uncertainty-guided Reasoning (MUR) dynamically allocates computational resources to improve reasoning efficiency and accuracy in Large Language Models without additional training.