VisPlay: Self-Evolving Vision-Language Models from Images

Yicheng He, Chengsong Huang, Zongxia Li, Jiaxin Huang, Yonghui Yang

2025-11-20

Summary

This paper introduces VisPlay, a new way to make Vision-Language Models (VLMs), AI systems that can understand both images and text, better at complex reasoning tasks without relying on large amounts of human-labeled data or feedback.

What's the problem?

Currently, improving VLMs relies heavily on humans to provide feedback or create specific rules for the AI to follow, which is expensive and doesn't scale well when you want to improve the AI on many different tasks. Getting good feedback signals, or 'rewards', for the AI to learn from is a major bottleneck.

What's the solution?

VisPlay creates a self-improving loop in which the VLM essentially teaches itself. Starting from a single base model, it assigns the VLM two roles: a Questioner that *asks* challenging questions about images, and a Reasoner that *answers* them. The two roles are trained together, with the Questioner trying to pose questions that are difficult but still answerable, and the Reasoner trying to produce good responses (called 'silver' answers because no human has verified them). A training method called Group Relative Policy Optimization (GRPO), with rewards for question difficulty and diversity, keeps the questions hard enough to be useful but not so hard that the Reasoner cannot learn from them; a rough sketch of that difficulty signal follows below. The whole process runs on large amounts of unlabeled images, so no human labeling is needed beforehand.
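To make the difficulty balance concrete, here is a minimal sketch of one way such a signal could be computed from the Reasoner's own sampled answers. The function name and the exact reward shaping are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def difficulty_reward(sampled_answers):
    """Toy difficulty signal (assumed shaping, not the paper's formula):
    the reward peaks when the Reasoner's sampled answers only partly agree,
    i.e. the question is hard but still answerable."""
    counts = Counter(sampled_answers)
    agreement = counts.most_common(1)[0][1] / len(sampled_answers)
    return 1.0 - abs(agreement - 0.5) * 2.0

# A question the Reasoner always answers the same way is too easy -> reward 0.0
print(difficulty_reward(["cat", "cat", "cat", "cat"]))
# A question whose sampled answers split evenly is most informative -> reward 1.0
print(difficulty_reward(["cat", "dog", "cat", "dog"]))
```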

Why it matters?

This research is important because it shows a way to build VLMs that improve their reasoning skills automatically. By removing the need for extensive human labeling, VisPlay offers a scalable path toward AI systems that can better understand and interact with the visual world, and it improves performance on established visual reasoning benchmarks such as MM-Vet and MMMU.

Abstract

Reinforcement learning (RL) provides a principled framework for improving Vision-Language Models (VLMs) on complex reasoning tasks. However, existing RL approaches often rely on human-annotated labels or task-specific heuristics to define verifiable rewards, both of which are costly and difficult to scale. We introduce VisPlay, a self-evolving RL framework that enables VLMs to autonomously improve their reasoning abilities using large amounts of unlabeled image data. Starting from a single base VLM, VisPlay assigns the model into two interacting roles: an Image-Conditioned Questioner that formulates challenging yet answerable visual questions, and a Multimodal Reasoner that generates silver responses. These roles are jointly trained with Group Relative Policy Optimization (GRPO), which incorporates diversity and difficulty rewards to balance the complexity of generated questions with the quality of the silver answers. VisPlay scales efficiently across two model families. When trained on Qwen2.5-VL and MiMo-VL, VisPlay achieves consistent improvements in visual reasoning, compositional generalization, and hallucination reduction across eight benchmarks, including MM-Vet and MMMU, demonstrating a scalable path toward self-evolving multimodal intelligence. The project page is available at https://bruno686.github.io/VisPlay/
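For readers unfamiliar with GRPO, the core "group relative" idea can be sketched in a few lines: each response sampled for a given question is scored against the mean and spread of the other responses in the same group, so no separate value model is needed. This is a generic illustration of GRPO-style advantages, not code from the VisPlay project.

```python
import statistics

def group_relative_advantages(rewards):
    """Normalize each response's reward against its own group's mean and
    standard deviation: responses better than their group get positive
    advantages, worse ones negative."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-spread group
    return [(r - mean) / std for r in rewards]

# Four sampled responses to one generated question, each with a scalar reward.
print(group_relative_advantages([0.2, 0.9, 0.4, 0.9]))
```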