SeeNav-Agent: Enhancing Vision-Language Navigation with Visual Prompt and Step-Level Policy Optimization
Zhengcheng Wang, Zichuan Lin, Yijun Yang, Haobo Fu, Deheng Ye
2025-12-05
Summary
This paper introduces a new way to help computer agents navigate through virtual environments using both visual information and natural language instructions, aiming to make them much better at following directions.
What's the problem?
Current agents that use large language models to understand instructions and images often make mistakes during navigation. These errors fall into three main categories: misinterpreting what they *see* in the environment, struggling with the *logic* of the instructions, and failing to create a good *plan* to reach the goal. Essentially, they get confused about where they are, what to do, and how to get there.
What's the solution?
The researchers developed a system called SeeNav-Agent that tackles these problems in two ways. First, they improved how the agent 'sees' by giving it two different views of the same scene (a dual-view visual prompt), which reduces perception hallucinations and helps the agent understand its current spatial state. Second, they created a new training method called SRGPO that rewards the agent at each step of the navigation process rather than only at the end, helping it learn a better strategy for planning its route. This method randomly groups navigation steps together to efficiently estimate how good each action is.
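The core idea of step-level grouping can be sketched in a few lines: collect per-step rewards across rollouts, shuffle the steps into random groups, and normalize each step's reward against its group's statistics to obtain an advantage (GRPO-style normalization applied at the step level). This is a minimal illustration under assumed details; the function name, group size, and exact normalization are not taken from the paper.

```python
import random
import statistics

def srgpo_step_advantages(step_rewards, group_size=4, seed=0):
    """Sketch of step-level advantage estimation by random grouping.

    step_rewards: per-step scalar rewards gathered across rollouts.
    Steps are shuffled into groups; each step's advantage is its
    reward normalized by its group's mean and standard deviation.
    """
    rng = random.Random(seed)
    indices = list(range(len(step_rewards)))
    rng.shuffle(indices)  # random grouping of navigation steps

    advantages = [0.0] * len(step_rewards)
    for start in range(0, len(indices), group_size):
        group = indices[start:start + group_size]
        rewards = [step_rewards[i] for i in group]
        mean = statistics.mean(rewards)
        std = statistics.pstdev(rewards)
        for i in group:
            # group-normalized advantage for this step
            advantages[i] = (step_rewards[i] - mean) / (std + 1e-8)
    return advantages
```

Because each advantage is a deviation from its group mean, advantages within a full group sum to zero, giving a dense, centered learning signal without training a value function.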
Why it matters?
This work is important because it significantly improves the success rate of these navigation agents. With the zero-shot visual prompt, GPT-4.1 reaches an 86.7% navigation success rate, about 20 percentage points above the previous best LVLM, and after SRGPO post-training the much smaller Qwen2.5-VL-3B reaches 72.3%, 5.6 percentage points above the best existing model. This means we're getting closer to creating AI agents that can reliably navigate real-world environments based on human instructions, which has applications in robotics, virtual assistants, and accessibility tools.
Abstract
Existing Vision-Language Navigation (VLN) agents based on Large Vision-Language Models (LVLMs) often suffer from perception errors, reasoning errors, and planning errors, which significantly hinder their navigation performance. To address these limitations, a novel VLN agent framework, named SeeNav-Agent, is proposed in this work. First, to reduce perception hallucinations of the visual module of the VLN agent, a dual-view Visual Prompt (VP) technique is introduced in the input space, which can also improve the agent's understanding of current spatial states. Subsequently, a novel step-level Reinforcement Fine-Tuning (RFT) method, Step Reward Group Policy Optimization (SRGPO), is designed for the post-training of VLN agents. In SRGPO, we first define verifiable process rewards for the navigation task, and then perform efficient step-level advantage estimation by randomly grouping different navigation steps. SRGPO provides dense reward signals for the reinforcement learning process of the VLN agent and enhances its planning capability. Experimental results on the EmbodiedBench Navigation benchmark indicate that by introducing the zero-shot VP module, GPT-4.1 achieves a navigation success rate of 86.7%, surpassing the current best LVLM by approximately 20 percentage points (pp). Through post-training based on SRGPO, the Qwen2.5-VL-3B model reaches a navigation success rate of 72.3%, outperforming the best existing LVLM by 5.6 pp. Moreover, compared to RFT algorithms such as GRPO and GiGPO, the proposed SRGPO demonstrates significant improvements in training stability, convergence efficiency, and generalization capability.
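The abstract does not spell out what the "verifiable process rewards" look like. One common way to make a navigation reward verifiable at each step is distance-based progress toward the goal plus a success bonus; the sketch below is an assumption for illustration, not the paper's actual reward definition, and all names and constants are hypothetical.

```python
def step_reward(prev_dist, curr_dist, reached_goal, step_penalty=0.01):
    """Hypothetical verifiable per-step navigation reward.

    prev_dist / curr_dist: distance to the goal before and after the
    action, so progress is directly checkable from environment state.
    """
    # reward progress toward the goal, lightly penalize each step
    reward = (prev_dist - curr_dist) - step_penalty
    if reached_goal:
        reward += 1.0  # terminal success bonus
    return reward
```

A reward of this form is dense (every step is scored) and verifiable (it depends only on measurable environment state), which is what a step-level method like SRGPO needs to normalize rewards within groups of steps.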