THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning
Qikai Chang, Zhenrong Zhang, Pengfei Hu, Jiefeng Ma, Yicheng Pan, Jianshu Zhang, Jun Du, Quan Liu, Jianqing Gao
2025-09-18
Summary
This paper introduces THOR, a new system designed to help large language models (LLMs) get better at complex math and coding problems by letting them use external tools like calculators or code interpreters.
What's the problem?
While LLMs are getting good at understanding language, they still struggle with tasks requiring precise calculations or symbolic manipulation, such as algebra. Existing methods for letting LLMs use tools face three difficulties: constructing good training data, fine-tuning the LLM to use tools effectively, and catching errors while a problem is actually being solved.
What's the solution?
The researchers developed THOR, which tackles these problems in three ways. First, they created a pipeline called TIRGen that automatically generates high-quality examples of how to use tools to solve problems. Second, they use reinforcement learning to train the LLM jointly at two levels: choosing the right overall solution path *and* generating correct code for each tool call. This design rests on a key observation: when an intermediate tool call executes successfully, the final answer is much more likely to be correct. Finally, THOR includes a 'self-correction' step in which the LLM checks the results returned by the tools and revises its approach whenever something goes wrong during problem solving.
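The self-correction step described above can be sketched as a simple loop: the model proposes a step, the step's code is executed, and if execution fails, the error message is fed back so the model can rewrite that step before continuing. The function and dictionary names below are illustrative assumptions, not the paper's actual interface:

```python
# Hypothetical sketch of THOR-style inference with self-correction.
# `generate_step` stands in for the LLM; all names are illustrative.

def run_tool(code: str) -> tuple[bool, str]:
    """Execute a generated code snippet and return (success, output)."""
    try:
        env: dict = {}
        exec(code, env)  # a real system would sandbox this call
        return True, str(env.get("result", ""))
    except Exception as exc:
        return False, repr(exc)

def solve_with_self_correction(generate_step, max_revisions: int = 2):
    """Alternate reasoning and tool calls; revise a step when its tool call fails."""
    trajectory = []
    feedback = None
    while True:
        step = generate_step(trajectory, feedback)   # LLM proposes the next step
        if step["kind"] == "final_answer":
            return step["text"], trajectory
        ok, output = run_tool(step["code"])
        for _ in range(max_revisions):
            if ok:
                break
            # feed the error message back so the model can rewrite this step
            step = generate_step(trajectory, output)
            ok, output = run_tool(step["code"])
        trajectory.append((step, output))
        feedback = output
```

The point of the sketch is the feedback path: tool errors are surfaced to the model immediately, during inference, rather than only counted at training time.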
Why it matters?
This work matters because it significantly improves how reliably LLMs handle complex mathematical and coding tasks. THOR enables models to achieve state-of-the-art performance for their scale on several math benchmarks, and it provides a general framework for making LLMs more accurate when they must call external tools to solve problems.
Abstract
Large Language Models (LLMs) have made remarkable progress in mathematical reasoning, but still continue to struggle with high-precision tasks like numerical computation and formal symbolic manipulation. Integrating external tools has emerged as a promising approach to bridge this gap. Despite recent advances, existing methods struggle with three key challenges: constructing tool-integrated reasoning data, performing fine-grained optimization, and enhancing inference. To overcome these limitations, we propose THOR (Tool-Integrated Hierarchical Optimization via RL). First, we introduce TIRGen, a multi-agent actor-critic-based pipeline for constructing high-quality datasets of tool-integrated reasoning paths, aligning with the policy and generalizing well across diverse models. Second, to perform fine-grained hierarchical optimization, we introduce an RL strategy that jointly optimizes for both trajectory-level problem solving and step-level code generation. This is motivated by our key insight that the success of an intermediate tool call is a strong predictor of the final answer's correctness. Finally, THOR incorporates a self-correction mechanism that leverages immediate tool feedback to dynamically revise erroneous reasoning paths during inference. Our approach demonstrates strong generalization across diverse models, performing effectively in both reasoning and non-reasoning models. It further achieves state-of-the-art performance for models of a similar scale on multiple mathematical benchmarks, while also delivering consistent improvements on code benchmarks. Our code will be publicly available at https://github.com/JingMog/THOR.
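The abstract's hierarchical optimization idea, rewarding both trajectory-level answer correctness and step-level tool-call success, could be sketched as a weighted combination of the two signals. The weighting scheme and names below are assumptions for illustration, not the paper's exact formulation:

```python
# Illustrative hierarchical reward: a trajectory-level term (did the
# final answer match?) plus a step-level term (what fraction of tool
# calls executed successfully?). Weights alpha/beta are assumptions.

def hierarchical_reward(final_correct: bool,
                        tool_calls_ok: list[bool],
                        alpha: float = 1.0,
                        beta: float = 0.5) -> float:
    trajectory_r = alpha * (1.0 if final_correct else 0.0)
    step_r = (beta * sum(tool_calls_ok) / len(tool_calls_ok)
              if tool_calls_ok else 0.0)
    return trajectory_r + step_r
```

The step-level term exploits the paper's key insight: because a successful intermediate tool call is a strong predictor of a correct final answer, it supplies a denser training signal than the final-answer reward alone.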