OpenVLThinkerV2: A Generalist Multimodal Reasoning Model for Multi-domain Visual Tasks
Wenbo Hu, Xin Chen, Yan Gao-Tian, Yihe Deng, Nanyun Peng, Kai-Wei Chang
2026-04-10
Summary
This paper introduces a new way to train powerful AI models that understand both images and text, building on a reinforcement learning technique called Group Relative Policy Optimization (GRPO). The authors use it to build a model called OpenVLThinkerV2 that performs strongly across 18 diverse benchmarks.
What's the problem?
Training these kinds of AI models is tricky because different image-based tasks can have wildly different reward signals, making it hard for the model to learn consistently. Also, it's difficult to get the model to both accurately 'see' what's in an image and then reason about it in a complex way; you want it to do both, but balancing those two skills is a challenge.
What's the solution?
The researchers developed a new training method called Gaussian GRPO (G^2RPO). Instead of linearly scaling rewards, it forces the distribution of each task's learning signals (advantages) to match a standard normal distribution, making learning more stable and fair across different tasks. They also added two shaping techniques to balance perception and reasoning: response length shaping encourages longer, more detailed answers for complex questions while keeping outputs direct for simple perception queries, and entropy shaping bounds how much the model 'explores' different possibilities during learning so it neither gets stuck nor goes off track.
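The paper's exact mapping is not spelled out in this summary, but the core idea of forcing a group's advantages to follow N(0,1) can be sketched with a rank-based Gaussianization. The function names and the mid-rank quantile choice below are illustrative assumptions, not the authors' implementation; the sketch only contrasts standard linear z-score scaling with a transform that yields a standard-normal shape by construction.

```python
import math
from statistics import NormalDist

def grpo_advantages(rewards):
    """Standard GRPO-style linear scaling: z-score within a rollout
    group. The shape of the reward distribution is preserved, so a
    heavy-tail outlier still dominates the resulting advantages."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = math.sqrt(var) or 1.0   # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

def gaussian_advantages(rewards):
    """Rank-based Gaussianization (an assumed stand-in for G^2RPO's
    non-linear distributional matching): replace each reward by the
    standard-normal quantile of its mid-rank, so the advantages are
    distributed as N(0,1) regardless of the reward distribution."""
    n = len(rewards)
    order = sorted(range(n), key=lambda i: rewards[i])
    std_normal = NormalDist()
    advs = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n          # mid-rank probability in (0, 1)
        advs[i] = std_normal.inv_cdf(p)
    return advs
```

On a group with one extreme reward, such as `[0.0, 0.0, 1.0, 10.0]`, linear scaling assigns the outlier an advantage larger than any value the rank-based transform can produce, while the Gaussianized advantages stay symmetric around zero, illustrating the outlier robustness and symmetric positive/negative updates the abstract describes.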
Why it matters?
This work is important because it makes it easier to build strong, open-source AI models that can handle a wide range of visual and language-based tasks. OpenVLThinkerV2, built using this method, outperforms many existing models, even some that aren’t publicly available, meaning this research pushes the field forward and makes advanced AI more accessible.
Abstract
Group Relative Policy Optimization (GRPO) has emerged as the de facto Reinforcement Learning (RL) objective driving recent advancements in Multimodal Large Language Models. However, extending this success to open-source multimodal generalist models remains heavily constrained by two primary challenges: the extreme variance in reward topologies across diverse visual tasks, and the inherent difficulty of balancing fine-grained perception with multi-step reasoning capabilities. To address these issues, we introduce Gaussian GRPO (G^2RPO), a novel RL training objective that replaces standard linear scaling with non-linear distributional matching. By mathematically forcing the advantage distribution of any given task to strictly converge to a standard normal distribution, N(0,1), G^2RPO theoretically ensures inter-task gradient equity, mitigates vulnerabilities to heavy-tail outliers, and offers symmetric updates for positive and negative rewards. Leveraging the enhanced training stability provided by G^2RPO, we introduce two task-level shaping mechanisms to seamlessly balance perception and reasoning. First, response length shaping dynamically elicits extended reasoning chains for complex queries while enforcing direct outputs to bolster visual grounding. Second, entropy shaping tightly bounds the model's exploration zone, effectively preventing both entropy collapse and entropy explosion. Integrating these methodologies, we present OpenVLThinkerV2, a highly robust, general-purpose multimodal model. Extensive evaluations across 18 diverse benchmarks demonstrate its superior performance over strong open-source and leading proprietary frontier models.
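The abstract does not detail how entropy shaping bounds the exploration zone. One minimal way to realize "preventing both entropy collapse and entropy explosion" is a piecewise entropy-bonus coefficient that only activates outside a target band; the band edges, coefficient magnitude, and function name below are illustrative assumptions, not the paper's rule.

```python
def entropy_bonus_coeff(entropy, h_lo=0.3, h_hi=2.0, beta=0.01):
    """Piecewise entropy regularizer (illustrative sketch): steer the
    policy's entropy back into the band [h_lo, h_hi].
    Below h_lo  -> positive coefficient, rewarding entropy to avoid
                   collapse into a deterministic policy.
    Above h_hi  -> negative coefficient, penalizing entropy to avoid
                   unbounded exploration (entropy explosion).
    Inside the band -> no entropy term; the task reward drives learning."""
    if entropy < h_lo:
        return beta
    if entropy > h_hi:
        return -beta
    return 0.0
```

The resulting coefficient would multiply an entropy term added to the RL objective each update; keeping it zero inside the band means the regularizer only intervenes at the boundaries, which is one way to "tightly bound the exploration zone" without biasing updates when entropy is healthy.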