UniDoc-RL: Coarse-to-Fine Visual RAG with Hierarchical Actions and Dense Rewards
Jun Wang, Shuo Tan, Zelong Sun, Tiancheng Gu, Yongle Zhao, Ziyong Feng, Kaicheng Yang, Cewu Lu
2026-04-17
Summary
This paper introduces a new system called UniDoc-RL that helps computer programs understand images better when answering questions. It builds on existing 'Large Vision-Language Models' which are already pretty good at processing both images and text, but makes them even smarter by letting them actively search for and focus on the most important parts of an image.
What's the problem?
Current systems that combine looking up information (like images) with generating answers often aren't very good at understanding the *details* within those images. They use general methods to find images, but don't really zoom in on the specific parts of the image that are most relevant to the question being asked. This makes it hard for them to handle complex questions that require careful visual reasoning.
What's the solution?
UniDoc-RL works like training an agent to play a game. The 'agent' is the computer program, and the 'game' is finding the right visual information to answer a question. It learns to first find relevant documents (containing images), then pick out the most important images within those documents, and finally zoom in on specific regions *within* those images. It uses a 'reward' system to learn which actions lead to better answers, and it does all of this without needing a separate system (a value network) to judge how good its actions are. The authors also created a new dataset to help train and test this system.
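The coarse-to-fine loop above can be sketched as a simple pipeline. This is only an illustrative skeleton, not the paper's actual implementation: the function names (`retrieve_documents`, `select_images`, `crop_region`, `answer`) are hypothetical placeholders for the agent's hierarchical actions.

```python
def coarse_to_fine(question, corpus, agent):
    """Hierarchical action loop: each step narrows the visual evidence.

    The `agent` is assumed to expose one method per action level;
    these names are placeholders, not the paper's API.
    """
    # Coarse: retrieve candidate documents for the question.
    docs = agent.retrieve_documents(question, corpus)
    # Finer: rerank and select the most relevant images in those documents.
    images = agent.select_images(question, docs)
    # Finest: actively crop an information-dense region within each image.
    regions = [agent.crop_region(question, img) for img in images]
    # Reason over the refined evidence to produce the final answer.
    return agent.answer(question, regions)
```

The key design idea is that later stages only see what earlier stages kept, so irrelevant content is suppressed before the model reasons over it.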
Why it matters?
This research is important because it significantly improves the ability of AI to understand and reason about images. By allowing the AI to actively search for and focus on relevant visual details, it can answer more complex questions and perform tasks that require a deeper understanding of visual information. The gains are substantial: up to 17.7% over prior reinforcement-learning-based methods across three benchmarks.
Abstract
Retrieval-Augmented Generation (RAG) extends Large Vision-Language Models (LVLMs) with external visual knowledge. However, existing visual RAG systems typically rely on generic retrieval signals that overlook the fine-grained visual semantics essential for complex reasoning. To address this limitation, we propose UniDoc-RL, a unified reinforcement learning framework in which an LVLM agent jointly performs retrieval, reranking, active visual perception, and reasoning. UniDoc-RL formulates visual information acquisition as a sequential decision-making problem with a hierarchical action space. Specifically, it progressively refines visual evidence from coarse-grained document retrieval to fine-grained image selection and active region cropping, allowing the model to suppress irrelevant content and attend to information-dense regions. For effective end-to-end training, we introduce a dense multi-reward scheme that provides task-aware supervision for each action. Based on Group Relative Policy Optimization (GRPO), UniDoc-RL aligns agent behavior with multiple objectives without relying on a separate value network. To support this training paradigm, we curate a comprehensive dataset of high-quality reasoning trajectories with fine-grained action annotations. Experiments on three benchmarks demonstrate that UniDoc-RL consistently surpasses state-of-the-art baselines, yielding up to 17.7% gains over prior RL-based methods.
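The abstract's two training ingredients, a dense multi-reward per action and GRPO's group-relative normalization, can be sketched as below. The reward weights and action names are hypothetical placeholders (the paper's actual reward design is not specified here); only the group-relative normalization itself follows the standard GRPO formulation.

```python
import statistics

# Hypothetical per-action weights for the dense multi-reward scheme;
# the paper's actual weighting is not given in the abstract.
WEIGHTS = {"retrieval": 0.2, "rerank": 0.2, "crop": 0.2, "answer": 0.4}

def trajectory_reward(rewards_by_action):
    """Dense multi-reward: a weighted sum of task-aware rewards,
    one per action in the trajectory."""
    return sum(WEIGHTS[action] * r for action, r in rewards_by_action.items())

def grpo_advantages(group_rewards):
    """GRPO-style advantages: normalize each sampled trajectory's reward
    by the group's mean and std, so no separate value network is needed."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    if std == 0:
        return [0.0 for _ in group_rewards]
    return [(r - mean) / std for r in group_rewards]
```

Because advantages are computed relative to a group of sampled trajectories for the same query, they sum to zero within the group: better-than-average trajectories are reinforced and worse ones discouraged, without training a critic.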