Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models via Vision-Guided Reinforcement Learning
Yufei Zhan, Yousong Zhu, Shurong Zheng, Hongyin Zhao, Fan Yang, Ming Tang, Jinqiao Wang
2025-03-25
Summary
This paper introduces a way to improve the post-training of AI models that understand both images and language (large vision-language models), using a reinforcement learning method that does not require human feedback.
What's the problem?
Improving these models after pretraining usually requires a lot of human effort: annotators judge which of the AI's responses are better, and that preference data is used to train a separate reward model that guides the AI toward better answers. Collecting high-quality human preference data and building reliable reward models is costly and hard to scale.
What's the solution?
The researchers developed a technique called Vision-R1 that replaces human feedback with definitive vision-based feedback computed from curated instruction data, for example by checking the model's outputs against the visual annotations that already exist for the task. This feedback directly rewards the model for good completions, so no separate reward model or handcrafted preference dataset is needed, and the reward criteria are progressively refined during training (see the sketches below).
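As a minimal sketch of the core idea (not the paper's actual implementation): instead of scoring a completion with a learned reward model, the reward can be computed directly from the model's visual predictions against the ground truth already present in the instruction data, for instance via IoU for a grounding task. The function names, box format, and threshold below are illustrative assumptions.

```python
# Hypothetical sketch: a rule-based vision reward for a grounding task.
# The reward is computed directly from ground-truth boxes in the curated
# instruction data, so no learned reward model is needed.

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def vision_reward(predicted_boxes, gold_boxes, iou_threshold=0.5):
    """Score a completion by how well its boxes match the annotations.

    Returns the fraction of gold boxes matched above the IoU threshold;
    the threshold and one-to-many matching rule are assumptions made
    only for illustration.
    """
    if not gold_boxes:
        return 0.0
    matched = 0
    for gold in gold_boxes:
        if any(iou(pred, gold) >= iou_threshold for pred in predicted_boxes):
            matched += 1
    return matched / len(gold_boxes)
```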
Why it matters?
This work matters because it makes aligning vision-language models cheaper and easier by removing the need for human preference annotation and separate reward models, while still delivering large performance gains.
Abstract
Large Vision-Language Models (LVLMs) typically follow a two-stage training paradigm of pretraining and supervised fine-tuning. Recently, preference optimization, derived from the language domain, has emerged as an effective post-training reinforcement strategy to enhance the capabilities of LVLMs. However, constructing high-quality human-annotated preference data and developing robust reward models to mimic these preferences are both costly and challenging. Motivated by this observation, we propose Vision-R1, a novel vision-guided R1-like reinforcement learning algorithm for LVLMs that rewards models with definitive vision feedback. It only leverages curated instruction data, eliminating the need for specialized reward models and handcrafted preference datasets. We incorporate a criterion-driven reward function that further integrates multi-dimensional feedback to evaluate model completions comprehensively based on the vision task logic. Furthermore, we introduce a progressive rule refinement strategy that dynamically adjusts the reward criteria during training, enabling continuous model improvement and mitigating reward hacking. Extensive experiments on both in-distribution and out-of-distribution benchmarks demonstrate that fine-tuning 7B LVLMs with Vision-R1 achieves consistent performance gains, with improvements of up to 50%, even surpassing state-of-the-art models 10x the size.
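As a rough illustration of how a criterion-driven, multi-dimensional reward with progressive rule refinement might be combined (the criteria, weights, and linear schedule below are assumptions, not the paper's released code): several rule-based criteria score each completion, and the threshold for earning the vision reward tightens as training progresses, so a reward that was easy to earn early on requires more precise predictions later, which helps discourage reward hacking.

```python
# Hypothetical sketch: multi-dimensional reward whose vision criterion
# is progressively tightened over training steps.

def progressive_threshold(step, total_steps, start=0.5, end=0.75):
    """Linearly tighten the acceptance threshold as training progresses."""
    frac = min(1.0, step / max(1, total_steps))
    return start + frac * (end - start)


def combined_reward(raw_vision_score, format_ok, step, total_steps,
                    w_vision=0.8, w_format=0.2):
    """Weighted sum of a format criterion and a vision criterion.

    The vision reward is granted only once the raw score (e.g., an IoU-based
    match rate) clears a threshold that tightens over training; weights and
    schedule are illustrative assumptions.
    """
    threshold = progressive_threshold(step, total_steps)
    r_vision = 1.0 if raw_vision_score >= threshold else 0.0
    r_format = 1.0 if format_ok else 0.0
    return w_vision * r_vision + w_format * r_format


# Example: the same raw score earns the full reward early in training
# but only the format reward once the threshold has tightened.
# combined_reward(0.6, True, step=0, total_steps=1000)     -> 1.0
# combined_reward(0.6, True, step=1000, total_steps=1000)  -> 0.2
```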