What does RL improve for Visual Reasoning? A Frankenstein-Style Analysis
Xirui Li, Ming Li, Tianyi Zhou
2026-02-16
Summary
This research investigates what exactly reinforcement learning (RL) improves in vision-language models, which are AI systems that can 'see' images and understand text. While RL is often used to make these models better at visual reasoning, it wasn't clear *how* RL improves them compared with simply showing the model more worked examples to learn from (supervised fine-tuning).
What's the problem?
Previous studies showed RL helps, but they measured overall performance improvements, making it hard to pinpoint *which* skills were getting better. It was like getting a good grade on a test without knowing which specific topics you mastered. Researchers needed a way to break down the improvements and understand what RL was actually changing within the model to cause those gains.
What's the solution?
The researchers used a clever 'Frankenstein' approach to take the model apart and study it piece by piece. First, they probed the model to localize which parts handle which functions as it processes an image and a question. Then, they compared the model's internal parameters before and after RL to see which parts were actually being updated. Finally, they tested whether the improvements learned by RL could be transferred by grafting parts of the RL-trained model onto a model trained in the standard way (supervised fine-tuning). They found that RL primarily refined how the model processes information in its middle and later layers, not the initial 'seeing' (visual perception) part.
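A minimal sketch of the last two steps is shown below, assuming PyTorch-style checkpoints and a "layers.<idx>." parameter-naming convention; the file paths, layer pattern, and cutoff index are illustrative assumptions, not the authors' actual setup.

```python
import re
import torch

# Hypothetical checkpoint files: cold-start (IN) weights and weights after RL post-training.
in_state = torch.load("checkpoint_in.pt", map_location="cpu")
rl_state = torch.load("checkpoint_rl.pt", map_location="cpu")

LAYER_RE = re.compile(r"layers\.(\d+)\.")  # assumed naming scheme for transformer layers

# Step (ii), update characterization: total L2 norm of the RL update per layer.
per_layer_change = {}
for name, w_in in in_state.items():
    match = LAYER_RE.search(name)
    if match is None:
        continue  # skip embeddings, heads, and other non-layer parameters
    idx = int(match.group(1))
    delta = (rl_state[name].float() - w_in.float()).norm().item()
    per_layer_change[idx] = per_layer_change.get(idx, 0.0) + delta

for idx in sorted(per_layer_change):
    print(f"layer {idx:2d}: update norm {per_layer_change[idx]:.4f}")

# Step (iii), transferability test: graft the RL model's mid-to-late layers
# (from an arbitrary cutoff onward) onto the cold-start model.
CUTOFF = 16
merged = {}
for name, w_in in in_state.items():
    match = LAYER_RE.search(name)
    use_rl = match is not None and int(match.group(1)) >= CUTOFF
    merged[name] = rl_state[name] if use_rl else w_in

torch.save(merged, "checkpoint_frankenstein.pt")
```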
Why it matters?
This work shows that RL doesn't broadly improve a model's ability to understand images, but instead focuses on better connecting what the model 'sees' to its reasoning process. It highlights that simply looking at overall performance gains isn't enough to understand *how* these AI systems are improving, and that more detailed analysis is needed to build even better vision-language models.
Abstract
Reinforcement learning (RL) with verifiable rewards has become a standard post-training stage for boosting visual reasoning in vision-language models, yet it remains unclear what capabilities RL actually improves compared with supervised fine-tuning used as cold-start initialization (IN). End-to-end benchmark gains conflate multiple factors, making it difficult to attribute improvements to specific skills. To bridge this gap, we propose a Frankenstein-style analysis framework comprising: (i) functional localization via causal probing; (ii) update characterization via parameter comparison; and (iii) a transferability test via model merging. Across these analyses, we find that RL does not uniformly enhance early visual perception. Instead, RL induces a consistent inference-time shift primarily in mid-to-late layers, and these mid-to-late refinements are both transferable (via merging) and necessary (via freezing) for RL gains. Overall, our results suggest that RL's reliable contribution to visual reasoning is not a uniform enhancement of visual perception but a systematic refinement of mid-to-late transformer computation that improves vision-to-reasoning alignment and reasoning performance, highlighting the limitations of benchmark-only evaluation for understanding multimodal reasoning improvements.
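The "necessary (via freezing)" check mentioned in the abstract could be approximated with a sketch like the one below, which freezes the mid-to-late transformer layers before RL post-training; the layer-naming pattern and cutoff index are assumptions, not the paper's exact configuration.

```python
import re
import torch.nn as nn

def freeze_mid_to_late_layers(model: nn.Module, cutoff: int = 16) -> None:
    """Disable gradients for all parameters in transformer layers at or above `cutoff`."""
    layer_re = re.compile(r"layers\.(\d+)\.")  # assumed parameter-naming convention
    for name, param in model.named_parameters():
        match = layer_re.search(name)
        if match is not None and int(match.group(1)) >= cutoff:
            param.requires_grad = False

# Usage (hypothetical): call freeze_mid_to_late_layers(vlm) before launching the RL run,
# then compare the resulting benchmark gains against an unconstrained RL run.
```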