Improving Vision-language Models with Perception-centric Process Reward Models

Yingqian Min, Kun Zhou, Yifan Li, Yuhuan Wu, Han Peng, Yifan Du, Wayne Xin Zhao, Min Yang, Ji-Rong Wen

2026-04-28

Summary

This paper introduces a new method called Perceval to improve how well vision-language models (VLMs) reason about images. These models are getting better at complex tasks, but often make mistakes by 'hallucinating' details not actually present in the image.

What's the problem?

Current methods train these models with reinforcement learning, using a reward based only on the final answer. That signal is too coarse to pinpoint *where* in its reasoning the model went wrong. It's like getting a grade on a test without knowing which specific questions you missed or why, which makes it hard to correct those errors effectively.

What's the solution?

Perceval acts like a detective for these models. It breaks the model's answer down into individual claims about the image and checks each claim against the actual visual evidence. If a claim doesn't match what's in the image, Perceval flags it as an error. During training, this information is used to penalize the model specifically on the flagged spans, so it learns to avoid those kinds of mistakes and to correct itself while generating answers. Perceval can also help at inference time: after the model has produced an answer, the incorrect parts can be cut out so the model can try again.
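To illustrate the idea, here is a minimal, hypothetical sketch of that claim-checking loop. The `extract_claims` and `is_supported` functions, and the toy keyword matching inside them, are placeholders invented for this example; they are not Perceval's actual components, which use a trained model rather than string matching.

```python
# Hypothetical sketch of a claim-checking loop: split a response into
# image-related claims, check each against a set of visual "facts", and
# flag the claims that are unsupported. Everything here is a toy stand-in
# for Perceval's learned extractor and verifier.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str    # a statement extracted from the model's response
    start: int   # character offsets of the claim within the response
    end: int

def extract_claims(response: str) -> list[Claim]:
    """Toy extractor: treat each sentence as one claim and record its span."""
    claims, offset = [], 0
    for sentence in response.split(". "):
        claims.append(Claim(sentence.rstrip("."), offset, offset + len(sentence)))
        offset += len(sentence) + 2
    return claims

def is_supported(claim: Claim, visual_facts: set[str]) -> bool:
    """Toy verifier: a claim counts as grounded if it mentions a known fact."""
    return any(fact in claim.text.lower() for fact in visual_facts)

def find_perceptual_errors(response: str, visual_facts: set[str]) -> list[Claim]:
    """Return the claims that are not grounded in the (toy) visual evidence."""
    return [c for c in extract_claims(response) if not is_supported(c, visual_facts)]

# Example: the second claim mentions a dog that is not in the "image".
facts = {"red car", "stop sign"}
answer = "There is a red car next to a stop sign. A dog is crossing the street."
for bad in find_perceptual_errors(answer, facts):
    print(f"Flagged span [{bad.start}:{bad.end}]: {bad.text}")
```

In the real system, the flagged spans are what get penalized during training or truncated before the model retries at inference time.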

Why it matters?

This work is important because it provides a more focused way to train and improve VLMs. By focusing on perceptual accuracy, making sure the model's claims are grounded in what the image actually shows, it leads to significant improvements across a range of reasoning tasks. It also shows a promising, cost-effective way to improve a model's performance even *after* training, without retraining the entire model.

Abstract

Recent advancements in reinforcement learning with verifiable rewards (RLVR) have significantly improved the complex reasoning ability of vision-language models (VLMs). However, the outcome-level supervision used in RLVR is too coarse to diagnose and correct errors within the reasoning chain. To this end, we propose Perceval, a process reward model (PRM) that enables token-level error grounding: it extracts image-related claims from the response, compares them one by one with the visual evidence in the image, and ultimately returns the claims that contain perceptual errors. Perceval is trained with perception-intensive supervised training data. We then integrate Perceval into the RL training process to train the policy models. Specifically, whereas traditional GRPO applies sequence-level advantages, we apply token-level advantages by targeting penalties on hallucinated spans identified by Perceval, thus enabling fine-grained supervision signals. In addition to augmenting the training process, Perceval can also assist VLMs during the inference stage. Using Perceval, we can truncate the erroneous portions of the model's response and then either have the model regenerate the response directly or induce the model to reflect on its previous output. This process can be repeated multiple times to achieve test-time scaling. Experiments show significant improvements on benchmarks from various domains across multiple reasoning VLMs trained with RL, highlighting the promise of perception-centric supervision as a general-purpose strategy. For test-time scaling, it also demonstrates consistent performance gains over other strategies, such as majority voting. Our code and data will be publicly released at https://github.com/RUCAIBox/Perceval.
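To make the token-level advantage idea concrete, below is a minimal, hypothetical sketch of how a GRPO-style sequence-level advantage might be combined with penalties on spans a PRM flags as hallucinated. The function name, the span format, and the fixed penalty value are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: broadcast a GRPO-style sequence-level advantage to every
# token of a response, then subtract an extra penalty only on tokens that fall
# inside spans flagged as hallucinated by the PRM. Values are illustrative.

import torch

def token_level_advantages(seq_advantage: float,
                           num_tokens: int,
                           hallucinated_spans: list[tuple[int, int]],
                           penalty: float = 1.0) -> torch.Tensor:
    """Return a per-token advantage vector with penalties on flagged spans."""
    adv = torch.full((num_tokens,), seq_advantage)
    for start, end in hallucinated_spans:
        adv[start:end] -= penalty  # penalize only the flagged (start, end) tokens
    return adv

# Example: a 10-token response in which tokens 3..5 were flagged by the PRM.
print(token_level_advantages(0.5, 10, [(3, 6)]))
```

The point of the per-token vector is that the policy gradient update can then reward the grounded parts of a response while pushing down probability only on the hallucinated spans, rather than penalizing the whole sequence.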