CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization
Xinhai Hou, Shaoyuan Xu, Manan Biyani, Mayan Li, Jia Liu, Todd C. Hollon, Bryan Wang
2025-12-03
Summary
This paper investigates how well AI models that 'think with images' actually *use* those images to solve problems, finding that they can often get the right answer without looking at the relevant parts of the picture or paying attention to what their tools return.
What's the problem?
Current AI models that combine vision and language, while achieving high accuracy on tasks requiring visual reasoning, often do so in a way that isn't trustworthy. They might succeed in answering a question correctly, but only by guessing, and not by actually understanding the image or using tools effectively. It's hard to tell if the model is truly 'reasoning' with the image or just getting lucky, and existing methods for evaluating this are flawed because they focus on the final answer instead of *how* the model arrived at it.
What's the solution?
The researchers developed a new way to test whether a model is truly using visual information: checking if the parts of the image it focuses on actually contain the information needed to answer the question. They then built a new AI model called CodeV, which is trained to use visual tools more carefully. CodeV represents visual tools as executable Python code and is rewarded for using them in a way that makes sense given the question and each tool's output, rather than only for getting the final answer right. This encourages the model to actually look at the right parts of the image and use its tools effectively.
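The faithfulness test described above can be sketched in code. The following is a simplified illustration, not the paper's exact protocol: it assumes each episode comes with a ground-truth evidence bounding box, and counts a crop as "faithful" if it covers at least half of that box. The function names, box format, and 0.5 coverage threshold are all illustrative assumptions.

```python
# Hypothetical sketch of the faithfulness check: a crop counts as
# faithful only if it actually contains the evidence needed to
# answer the question. Boxes are (x1, y1, x2, y2) in pixels.

def contains_evidence(crop_box, evidence_box, min_coverage=0.5):
    """True if crop_box covers at least min_coverage of the evidence region."""
    ix1 = max(crop_box[0], evidence_box[0])
    iy1 = max(crop_box[1], evidence_box[1])
    ix2 = min(crop_box[2], evidence_box[2])
    iy2 = min(crop_box[3], evidence_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    ev_area = (evidence_box[2] - evidence_box[0]) * (evidence_box[3] - evidence_box[1])
    return ev_area > 0 and inter / ev_area >= min_coverage

def faithful_tool_use_rate(episodes):
    """Fraction of episodes where at least one crop contains the evidence.

    Each episode is a dict: {"crops": [box, ...], "evidence": box}.
    """
    if not episodes:
        return 0.0
    hits = 0
    for ep in episodes:
        if any(contains_evidence(c, ep["evidence"]) for c in ep["crops"]):
            hits += 1
    return hits / len(episodes)
```

Note the asymmetry in the rule: coverage is measured relative to the evidence box, so a large crop that fully contains a small evidence region still counts, while a crop that merely grazes it does not.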
Why it matters?
This work is important because it highlights the need for AI systems to be not just accurate, but also *trustworthy*. If we can't be sure *why* an AI model made a decision, it's hard to rely on it, especially in important applications. By focusing on 'faithful' visual reasoning – ensuring the model actually uses the visual information it's given – this research moves us closer to building AI systems that are more reliable and understandable.
Abstract
Agentic vision-language models are increasingly trained to "think with images" by calling image operations. However, we show that high final-answer accuracy often hides unfaithful visual reasoning: models may invoke tools on irrelevant regions or ignore tool outputs entirely, yet still guess the correct answer. In this work, we first propose a faithfulness evaluation protocol that measures whether intermediate visual tool outputs (e.g., crops) actually contain the queried evidence. This reveals that recent visual agents achieve high final-answer accuracy but exhibit low rates of faithful tool-use on visual search benchmarks. We then introduce CodeV, a code-based visual agent trained with Tool-Aware Policy Optimization (TAPO). TAPO is a process-level RL framework that augments GRPO with dense rewards defined directly on visual tool inputs and outputs, rather than on chain-of-thought tokens, making supervision easier to verify and less susceptible to reward hacking. CodeV represents visual tools as executable Python code, and TAPO assigns step-wise rewards based solely on the question and tool output, encouraging both necessary and evidence-consistent tool use. In a two-stage SFT+RL pipeline, CodeV achieves competitive or superior accuracy while substantially increasing faithful tool-use rates on related visual search benchmarks. Beyond visual search, CodeV attains strong performance on a range of multimodal reasoning and math benchmarks, suggesting that explicitly supervising intermediate tool behavior is crucial for building trustworthy, agentic visual reasoning systems.
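The abstract describes TAPO as augmenting GRPO's final-answer reward with dense, step-wise rewards scored from the question and each tool's output. The shape of such a trajectory reward might look like the sketch below; the `judge` callable, the additive combination, and the `step_weight` coefficient are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative TAPO-style trajectory reward (assumed form, not the
# paper's exact equation): the usual sparse final-answer reward plus
# a dense process term averaged over tool calls. Each tool call is
# scored by a verifier that sees only the question and the tool's
# output, never the chain-of-thought tokens.

def tapo_reward(answer_correct, tool_steps, judge, step_weight=0.3):
    """Combine final-answer and per-tool-call rewards for one trajectory.

    answer_correct: bool, whether the final answer matched.
    tool_steps: list of (question, tool_output) pairs, one per tool call.
    judge: callable (question, tool_output) -> score in [0, 1],
           e.g. "does this crop contain evidence relevant to the question?"
    """
    final_r = 1.0 if answer_correct else 0.0
    if not tool_steps:
        # No tool calls: only the sparse outcome reward applies.
        return final_r
    step_r = sum(judge(q, out) for q, out in tool_steps) / len(tool_steps)
    return final_r + step_weight * step_r
```

Scoring steps from the tool output alone (rather than from reasoning text) is what the abstract argues makes the supervision easier to verify and harder to reward-hack: the crop either shows the evidence or it does not.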