SeeingEye: Agentic Information Flow Unlocks Multimodal Reasoning In Text-only LLMs
Weijia Zhang, Zijia Liu, Haoru Li, Haoqi Chen, Jiaxuan You
2025-10-30
Summary
This paper introduces a new way to give powerful text-based AI models the ability to 'see' and understand images, allowing them to answer questions about visuals effectively.
What's the problem?
Current AI models that are good at understanding text struggle when asked to work with both text *and* images. Existing attempts to fix this usually rely on a single, simple description of the image, which isn't detailed enough and doesn't adapt well across different types of visual question-answering tasks. In short, there is no efficient way to pass the important visual details along to the text-based AI.
What's the solution?
The researchers created a system called Seeing Eye. It works by separating the 'seeing' part from the 'thinking' part. A smaller AI, the 'translator,' acts like an agent that looks at the image, uses tools like text recognition and cropping to focus on important parts, and then creates a structured summary of what it sees. This summary is then given to a larger, text-based AI, the 'reasoner,' which uses its strong language skills to answer the question. The translator and reasoner talk back and forth, asking for more details if needed, to get the right answer.
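The back-and-forth described above can be sketched as a simple loop. This is a minimal illustration, not the paper's actual implementation: the function names (`translate`, `reason`, `seeing_eye`) and the stubbed outputs are hypothetical stand-ins for the small vision-language translator and the text-only reasoner.

```python
# Minimal sketch of the Seeing Eye translator-reasoner loop.
# All names and canned outputs below are illustrative, not from the paper's code.

def translate(image, question, feedback=None):
    """Perception agent: inspects the image (optionally guided by reasoner
    feedback, e.g. via OCR or crop tools) and returns a structured
    intermediate representation (SIR). Stubbed with canned output here."""
    sir = {"caption": "a bar chart of quarterly sales", "ocr": ["Q1: 40", "Q2: 55"]}
    if feedback:  # e.g. "crop the chart and re-run OCR"
        sir["ocr"].append("Q3: 62")
    return sir

def reason(question, sir):
    """Reasoning agent: a text-only LLM that either answers from the SIR
    or requests more visual detail. Stubbed with a simple rule here."""
    if len(sir["ocr"]) < 3:
        return {"answer": None, "feedback": "crop the chart and OCR all bars"}
    return {"answer": "Q3 had the highest sales", "feedback": None}

def seeing_eye(image, question, max_rounds=3):
    """Alternate between perception and reasoning until the reasoner
    is confident enough to answer, or the round budget runs out."""
    feedback = None
    for _ in range(max_rounds):
        sir = translate(image, question, feedback)
        result = reason(question, sir)
        if result["answer"] is not None:
            return result["answer"]
        feedback = result["feedback"]
    return "unanswered"

print(seeing_eye("chart.png", "Which quarter had the highest sales?"))
```

In this toy run the reasoner rejects the first SIR as incomplete, sends feedback, and answers on the second round, mirroring the multi-round interaction the paper describes.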
Why it matters?
Seeing Eye is important because it shows that you don't need a huge, complex AI to understand images. By breaking down the problem into smaller parts and letting specialized AIs handle each part, they were able to achieve better results than much larger AI models, while also being more efficient. This approach makes it easier to add visual understanding to existing, powerful text-based AIs.
Abstract
Recent advances in text-only large language models (LLMs), such as DeepSeek-R1, demonstrate remarkable reasoning ability. However, these models remain fragile or entirely incapable when extended to multi-modal tasks. Existing approaches largely rely on single-form captions, which lack diversity and often fail to adapt across different types of Visual Question Answering (VQA) benchmarks. As a result, they provide no principled or efficient channel for transmitting fine-grained visual information. We introduce Seeing Eye, a modular framework that unlocks multimodal reasoning in text-only LLMs through an agent-based small VLM translator. This translator acts as a perception agent: it can invoke specialized tools (e.g., OCR and crop) and iteratively distill multimodal inputs into structured intermediate representations (SIRs) tailored to the question. These SIRs are then passed to the text-only LLM, which serves as a reasoning agent. Crucially, the translator and reasoner engage in multi-round feedback and interaction, enabling the extraction of targeted visual details and yielding more confident answers. Experiments on knowledge-intensive VQA benchmarks, including MMMU and MIA-Bench, demonstrate that Seeing Eye not only reduces inference cost but also surpasses much larger end-to-end VLMs. For example, an instantiation combining a 3B-parameter vision translator with an 8B-parameter language reasoner outperforms a monolithic 32B VLM on challenging knowledge-based questions. Our results highlight that decoupling perception from reasoning via agent information flow offers a scalable and plug-and-play pathway to multimodal reasoning, allowing strong text-only LLMs to fully leverage their reasoning capabilities. Code is available at: https://github.com/ulab-uiuc/SeeingEye