R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, Dacheng Tao
2025-03-18
Summary
This paper introduces a new method called StepGRPO to improve the reasoning abilities of AI models that handle both images and text (Multimodal Large Language Models or MLLMs).
What's the problem?
Current methods for improving MLLMs' reasoning skills often rely on showing the model only successful reasoning paths. This leads the model to simply copy those paths without truly understanding why they are correct or what makes other paths incorrect.
What's the solution?
StepGRPO is a reinforcement learning framework that rewards the MLLM for taking correct steps during the reasoning process. It uses two new rewards: one for including the necessary intermediate reasoning steps (StepRAR) and another for following a well-structured, logically consistent reasoning process (StepRVR). These dense, step-wise signals help the model learn what makes a reasoning path good or bad, rather than just whether the final answer is right.
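The accuracy-style reward can be illustrated with a soft key-step matching heuristic: check how many annotated key steps appear (approximately) in the generated reasoning path. This is a minimal sketch, not the paper's actual implementation; the function name, sentence splitting, similarity measure, and threshold are all assumptions for illustration.

```python
import difflib

def step_accuracy_reward(path: str, key_steps: list[str], threshold: float = 0.7) -> float:
    """Hypothetical soft key-step matching: fraction of key steps that
    fuzzily match some sentence in the generated reasoning path."""
    # Naive sentence split; a real system would use a proper tokenizer.
    sentences = [s.strip() for s in path.split(".") if s.strip()]
    matched = 0
    for step in key_steps:
        # Best fuzzy-match ratio of this key step against any sentence.
        best = max(
            (difflib.SequenceMatcher(None, step.lower(), s.lower()).ratio()
             for s in sentences),
            default=0.0,
        )
        if best >= threshold:
            matched += 1
    # Dense reward in [0, 1]: proportion of key steps covered.
    return matched / len(key_steps) if key_steps else 0.0
```

A path that merely states the final answer would score 0 here, while a path that walks through the annotated steps scores close to 1, giving the policy a graded signal even when the final answer is wrong.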
Why it matters?
This work matters because it allows MLLMs to develop better reasoning skills by actively learning from both correct and incorrect reasoning paths, leading to more robust and reliable AI systems.
Abstract
Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding what the wrong reasoning paths are. In this work, we aim to enhance the MLLMs' reasoning ability beyond passively imitating positive reasoning paths. To this end, we design Step-wise Group Relative Policy Optimization (StepGRPO), a new online reinforcement learning framework that enables MLLMs to self-improve reasoning ability via simple, effective and dense step-wise rewarding. Specifically, StepGRPO introduces two novel rule-based reasoning rewards: Step-wise Reasoning Accuracy Reward (StepRAR) and Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards the reasoning paths that contain necessary intermediate reasoning steps via a soft key-step matching technique, while StepRVR rewards reasoning paths that follow a well-structured and logically consistent reasoning process through a reasoning completeness and logic evaluation strategy. With the proposed StepGRPO, we introduce R1-VL, a series of MLLMs with outstanding capabilities in step-by-step reasoning. Extensive experiments over 8 benchmarks demonstrate the superiority of our methods.
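As the name suggests, StepGRPO builds on group relative policy optimization, in which rewards for a sampled group of rollouts are normalized against each other to form advantages, avoiding a learned value model. A rough sketch of that group normalization step, assuming the standard GRPO formulation (the function name and the zero-variance fallback are illustrative assumptions, not the paper's code):

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each rollout's reward against its sampled group:
    advantage_i = (r_i - mean(group)) / std(group)."""
    mean = statistics.mean(rewards)
    # Fall back to 1.0 when all rewards in the group are identical,
    # so the advantages are simply zero instead of dividing by zero.
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]
```

In StepGRPO, the reward fed into this normalization would combine the step-wise accuracy and validity terms, so paths with better intermediate reasoning receive higher relative advantage within their group.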