From Proof to Program: Characterizing Tool-Induced Reasoning Hallucinations in Large Language Models
Farima Fatahi Bayat, Pouya Pezeshkpour, Estevam Hruschka
2025-11-17
Summary
This paper investigates how large language models (LLMs) perform when given access to tools, specifically a code interpreter, to help solve complex problems. It finds that while these models get better at *getting the right answer*, they actually get worse at *showing their work* and explaining their reasoning.
What's the problem?
The core issue is that when LLMs can use tools like a code interpreter, they start to rely on the tool's output *instead* of thinking through the problem themselves. This means they might give a correct answer, but their explanation will be weak or illogical. The researchers call this 'Tool-Induced Myopia': the tool blinds the model to the need for proper reasoning. To isolate this effect, they focused on math problems where code is helpful but isn't the whole solution.
What's the solution?
To study this, the researchers built an evaluation suite that scores not just whether the final answer is right, but *how* the model arrived at it, comparing tool-using models against their non-tool counterparts. They found that the more often a model invoked the tool, the worse its reasoning became. To fix this, they developed a preference-based training method that encourages the model to treat the tool's output as *evidence* supporting its reasoning, rather than a replacement for it. This improved both accuracy and the quality of the explanations.
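The paper describes its fix as a preference-optimization-based framework: responses that use tool output as supporting evidence are preferred over responses that substitute it for reasoning. As a rough illustration only, here is a minimal sketch of a standard DPO-style preference loss (the function name `dpo_loss`, the toy log-probabilities, and the `beta` value are all assumptions for this sketch; the paper's actual objective may differ):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for a single preference pair.

    logp_w / logp_l:         policy log-probs of the preferred (w) and
                             dispreferred (l) responses
    ref_logp_w / ref_logp_l: the same quantities under a frozen
                             reference model
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)), written as log1p(exp(-margin)) for clarity
    return math.log1p(math.exp(-margin))

# Toy pair: the policy favors the reasoning-grounded response (w) more
# than the reference does, so the loss drops below log(2) (~0.693).
loss = dpo_loss(logp_w=-5.0, logp_l=-6.0,
                ref_logp_w=-5.5, ref_logp_l=-5.5, beta=0.5)
```

In this framing, a "preferred" response for a PYMATH problem would interleave code output with explicit derivation steps, while the "dispreferred" one would jump from code output to the final answer.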
Why does it matter?
This research is important because as we give AI more powerful tools, we need to make sure they don't become 'shortcuts' that hide a lack of understanding. If an AI can't explain *why* it's giving a certain answer, it's hard to trust it, especially in important applications like science, medicine, or finance. This work shows us that simply improving accuracy isn't enough; we also need to maintain and even improve the reasoning abilities of these models.
Abstract
Tool-augmented Language Models (TaLMs) can invoke external tools to solve problems beyond their parametric capacity. However, it remains unclear whether these tool-enabled gains reflect trustworthy reasoning. Focusing on the Code Interpreter tool, we show that even when tools are selected and executed correctly, TaLMs treat tool outputs as substitutes for reasoning, producing solutions that appear correct but lack coherent justification. We term this failure mode Tool-Induced Myopia (TIM), and study it using PYMATH, a benchmark of 1,679 competition-level mathematical problems for which Python code is helpful but not sufficient. We further develop a multi-dimensional evaluation suite to quantify reasoning degradation in TaLMs relative to their non-tool counterparts. Our findings reveal that while TaLMs achieve up to a 19.3 percentage point gain in final-answer accuracy, their reasoning behavior consistently deteriorates (e.g., non-tool LLMs win up to 41.5% more often in pairwise comparisons of the reasoning process). This degradation intensifies with tool use: the more frequently a model invokes tools, the less coherent its reasoning becomes. Moreover, tool use shifts errors from arithmetic mistakes toward global reasoning failures (logic, assumptions, creativity), with TIM present in ~55% of high-risk cases. Finally, we propose a preference-optimization-based framework that realigns TaLMs to use tools as assistive evidence, improving both final-answer accuracy and reasoning depth under tool use. Code and data are available at: https://github.com/megagonlabs/TIM.