CPPO: Contrastive Perception for Vision Language Policy Optimization

Ahmad Rezaei, Mohsen Gholami, Saeed Ranjbar Alvar, Kevin Cannons, Mohammad Asiful Hossain, Zhou Weimin, Shunbo Zhou, Yong Zhang, Mohammad Akbari

2026-01-06

Summary

This paper introduces CPPO, a method for improving how vision-language models (models that process both images and text) learn to reason about the world. It builds on reinforcement learning to make these models better at understanding what they 'see' in images.

What's the problem?

Teaching a computer to truly *understand* images and then use that understanding to answer questions or solve problems is hard. Previous methods tried to reward the model for focusing on the important parts of an image, but it was difficult to pinpoint exactly *which* parts mattered for reasoning. They often needed extra tools like additional AI models or lots of labeled data, or they spread rewards evenly across all output tokens, which blurred the signal about what the model actually needed to perceive.

What's the solution?

CPPO solves this by watching how the uncertainty (entropy) of the model's outputs shifts when the input image is perturbed. If removing information from the image causes a big entropy shift for certain output tokens, the model was relying on that visual information to produce them; those are the perception tokens. CPPO then adds a Contrastive Perception Loss to the reinforcement learning objective that pushes the model to stay consistent when a perturbation preserves the image's information, and to be sensitive when key information is removed. This guides the model to focus on the right parts of the image without any extra supervision.
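The mechanism described above can be sketched in code. This is an illustrative toy version only, assuming a simplified form of the method: the threshold `tau`, the use of absolute entropy shift for token detection, and KL divergence for the consistency/sensitivity terms are assumptions for clarity, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis (vocabulary dimension).
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy of each token's next-token distribution (rows of p).
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def kl(p, q):
    # KL divergence D(p || q) per token.
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

def contrastive_perception_loss(logits_orig, logits_preserve, logits_remove, tau=0.5):
    """Toy contrastive perception loss.

    logits_*: arrays of shape (num_tokens, vocab_size) holding the model's
    output logits for the original image, an information-preserving
    perturbation (e.g. a slight color jitter), and an information-removing
    one (e.g. masking a key region).
    """
    p_orig = softmax(logits_orig)
    p_pres = softmax(logits_preserve)
    p_rm = softmax(logits_remove)

    # Detect perception tokens: tokens whose entropy shifts most when
    # visual information is removed from the input image.
    shift = np.abs(entropy(p_rm) - entropy(p_orig))
    perception_mask = shift > tau
    if not perception_mask.any():
        return 0.0

    # Consistency term: stay close to the original distribution under
    # information-preserving perturbations (minimize this KL).
    # Sensitivity term: diverge under information-removing perturbations
    # (maximize that KL, hence the minus sign).
    per_token = kl(p_orig, p_pres) - kl(p_orig, p_rm)
    return float(per_token[perception_mask].mean())
```

In a real training loop this term would be added, with some weighting coefficient, to the RL policy-gradient objective; minimizing it rewards distributions that are stable under harmless perturbations but reactive when the relevant visual evidence disappears.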

Why it matters?

This work is important because it makes training these vision-language models more efficient and effective. It doesn't require extra AI models or huge amounts of labeled data, and it outperforms previous methods. This means we can build AI systems that are better at understanding the visual world and reasoning about it, which has applications in areas like robotics, image captioning, and visual question answering.

Abstract

We introduce CPPO, a Contrastive Perception Policy Optimization method for finetuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both perception and reasoning. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult, requiring extra LLMs, ground-truth data, a forced separation of perception from reasoning by the policy model, or rewards applied indiscriminately to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model outputs under perturbed input images. CPPO then extends the RL objective function with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods while avoiding extra models, making training more efficient and scalable.