Self-Rewarding Vision-Language Model via Reasoning Decomposition

Zongxia Li, Wenhao Yu, Chengsong Huang, Rui Liu, Zhenwen Liang, Fuxiao Liu, Jingxi Che, Dian Yu, Jordan Boyd-Graber, Haitao Mi, Dong Yu

2025-08-28

Summary

This paper addresses a common problem with Vision-Language Models (VLMs): they often make up details that are not actually in an image (hallucinations) or rely on what they already 'know' from text instead of actually 'looking' at the image. The researchers introduce a new method called Vision-SR1 that helps these models better understand and reason about what they see.

What's the problem?

VLMs are trained to answer questions about images, but they frequently get things wrong by either inventing information or ignoring the image altogether. This happens because the usual training methods only check whether the final answer is correct, without specifically guiding the model to process the visual information step by step. Existing attempts to fix this often rely on expensive human labeling or on labels distilled from other, larger models; these external signals can introduce new problems and don't adapt as the model's own abilities evolve during training.

What's the solution?

Vision-SR1 uses reinforcement learning to train the VLM to focus on the visual part of the task. It works in two stages: first, the model describes what it 'sees' in the image in enough detail that the description alone should be sufficient to answer the question. Then, the same model tries to answer the question using *only* its own description, without looking back at the original image. If it succeeds, it gets a 'reward,' encouraging it to produce more complete and accurate visual descriptions. This self-reward is combined with the usual check on the final answer, creating a balanced training signal.
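
To make the two-stage loop concrete, here is a minimal Python sketch. It assumes a generic `vlm_generate` callable standing in for the model and a toy `answers_match` check; neither comes from the paper's released code, and the prompts are illustrative only.

```python
from typing import Callable, Optional


def answers_match(predicted: str, gold: str) -> bool:
    """Toy verifiable-answer check (the paper relies on rule-based answer matching)."""
    return predicted.strip().lower() == gold.strip().lower()


def self_reward(
    vlm_generate: Callable[[str, Optional[bytes]], str],  # hypothetical model wrapper
    image: bytes,
    question: str,
    gold_answer: str,
) -> dict:
    # Stage 1: with the image, ask for a self-contained description that should
    # be sufficient to answer the question on its own.
    perception = vlm_generate(
        f"Describe everything in the image needed to answer: {question}", image
    )

    # Stage 2: re-prompt the SAME model with only that description (no image).
    # If the answer is still correct, the description was truly self-contained.
    answer_from_text = vlm_generate(
        "Using only this description, answer the question.\n"
        f"Description: {perception}\nQuestion: {question}",
        None,
    )
    perception_reward = 1.0 if answers_match(answer_from_text, gold_answer) else 0.0

    # Usual supervision: the full rollout with the image, checked against the gold answer.
    final_answer = vlm_generate(question, image)
    answer_reward = 1.0 if answers_match(final_answer, gold_answer) else 0.0

    return {"perception_reward": perception_reward, "answer_reward": answer_reward}
```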

Why it matters?

This research is important because it offers a way to improve VLMs without needing lots of human effort or relying on potentially flawed information from other sources. By teaching the model to better understand and describe images, Vision-SR1 helps reduce errors and makes these models more reliable for tasks that require visual reasoning, like understanding scenes or answering questions about images.

Abstract

Vision-Language Models (VLMs) often suffer from visual hallucinations, saying things that are not actually in the image, and language shortcuts, where they skip the visual part and just rely on text priors. These issues arise because most post-training methods for VLMs rely on simple verifiable answer matching and supervise only final outputs, leaving intermediate visual reasoning without explicit guidance. As a result, VLMs receive sparse visual signals and often learn to prioritize language-based reasoning over visual perception. To mitigate this, some existing methods add visual supervision using human annotations or distilled labels from external large models. However, human annotations are labor-intensive and costly, and because external signals cannot adapt to the evolving policy, they cause distributional shifts that can lead to reward hacking. In this paper, we introduce Vision-SR1, a self-rewarding method that improves visual reasoning without relying on external visual supervision via reinforcement learning. Vision-SR1 decomposes VLM reasoning into two stages: visual perception and language reasoning. The model is first prompted to produce self-contained visual perceptions that are sufficient to answer the question without referring back to the input image. To validate this self-containment, the same VLM is then re-prompted to perform language reasoning using only the generated perception as input to compute a reward. This self-reward is combined with supervision on final outputs, providing a balanced training signal that strengthens both visual perception and language reasoning. Our experiments demonstrate that Vision-SR1 improves visual reasoning, mitigates visual hallucinations, and reduces reliance on language shortcuts across diverse vision-language tasks.
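
For illustration only, one way the self-reward and the final-answer reward could be blended into the single training signal mentioned in the abstract is sketched below; the mixing weight is an assumption of this sketch, since the abstract only states that the two signals are combined.

```python
def combined_reward(perception_reward: float, answer_reward: float,
                    weight: float = 0.5) -> float:
    """Blend the self-reward with final-answer supervision.

    `weight` is an illustrative mixing coefficient; the paper says the two
    signals are combined into a balanced training signal, but this exact
    weighted sum is an assumption, not the paper's stated formula.
    """
    return weight * perception_reward + (1.0 - weight) * answer_reward
```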