Puzzle Curriculum GRPO for Vision-Centric Reasoning
Ahmadreza Jeddi, Hakki Can Karaimer, Hue Nguyen, Zhongling Wang, Ke Zhao, Javad Rajabi, Ran Zhang, Raghav Goyal, Babak Taati, Radek Grzeszczuk
2025-12-18
Summary
This paper focuses on improving the reasoning abilities of Vision Language Models (VLMs) – AI systems that can understand both images and text. It introduces a new method called Puzzle Curriculum GRPO (PC-GRPO) to help these models think through problems more effectively.
What's the problem?
Current methods for teaching VLMs to reason, such as outcome-supervised GRPO, have a few drawbacks. They typically rely on human-created labels or external checkers to verify answers, which is expensive and can be noisy. The reward signal is also often flat and sparse: the model gets feedback too rarely and receives no credit for partial progress. Finally, the reasoning steps a model writes down don't always logically lead to the final answer it gives.
What's the solution?
The researchers developed PC-GRPO, a way to train VLMs to reason without needing human-created labels or external verification. They created three self-supervised 'puzzle' environments – PatchFit, Rotation, and Jigsaw – that act as training exercises. These puzzles provide feedback to the model as it learns. They also introduced a 'curriculum' that starts with easier puzzles and gradually increases the difficulty, focusing on problems that are neither too simple nor too hard. Importantly, they also monitor how well the model's reasoning steps align with its final answer and encourage consistency.
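To make the idea of graded partial credit concrete, here is a minimal sketch of how a Jigsaw reward could be scored. The exact formula is not given in this summary, so the fraction-of-correctly-placed-pieces rule and the function name below are illustrative assumptions; binary puzzles such as PatchFit and Rotation would simply return 1 for an exact match and 0 otherwise.

def jigsaw_partial_credit(predicted_order, true_order):
    """Graded reward for a Jigsaw prediction: fraction of pieces placed
    in their correct position (an assumed scoring rule, for illustration).

    Binary puzzles such as PatchFit or Rotation would instead return
    1.0 only on an exact match and 0.0 otherwise.
    """
    if len(predicted_order) != len(true_order):
        return 0.0  # malformed answers earn no credit
    correct = sum(p == t for p, t in zip(predicted_order, true_order))
    return correct / len(true_order)

# Example: 2 of 4 tiles are in the right place -> reward 0.5
print(jigsaw_partial_credit([0, 2, 1, 3], [0, 1, 2, 3]))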
Why does it matter?
This work is important because it offers a practical and scalable way to improve the reasoning skills of VLMs. By removing the need for human labels and focusing on verifiable rewards and consistent reasoning, it makes it easier to build AI systems that can reliably and understandably solve complex visual problems. This could lead to more accurate and trustworthy AI in areas like image understanding and problem-solving.
Abstract
Recent reinforcement learning (RL) approaches like outcome-supervised GRPO have advanced chain-of-thought reasoning in Vision Language Models (VLMs), yet key issues linger: (i) reliance on costly and noisy hand-curated annotations or external verifiers; (ii) flat and sparse reward schemes in GRPO; and (iii) logical inconsistency between a chain's reasoning and its final answer. We present Puzzle Curriculum GRPO (PC-GRPO), a supervision-free recipe for RL with Verifiable Rewards (RLVR) that strengthens visual reasoning in VLMs without annotations or external verifiers. PC-GRPO replaces labels with three self-supervised puzzle environments: PatchFit and Rotation (with binary rewards), and Jigsaw (with graded partial credit mitigating reward sparsity). To counter flat rewards and vanishing group-relative advantages, we introduce a difficulty-aware curriculum that dynamically weights samples and peaks at medium difficulty. We further monitor Reasoning-Answer Consistency (RAC) during post-training: mirroring reports for vanilla GRPO in LLMs, RAC typically rises early then degrades; our curriculum delays this decline, and consistency-enforcing reward schemes further boost RAC. RAC correlates with downstream accuracy. Across diverse benchmarks and on Qwen-7B and Qwen-3B backbones, PC-GRPO improves reasoning quality, training stability, and end-task accuracy, offering a practical path to scalable, verifiable, and interpretable RL post-training for VLMs.
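As a rough illustration of the difficulty-aware curriculum, the sketch below weights each training sample by a bump over its empirical success rate so that weights peak at medium difficulty, where group-relative advantages carry the most signal. The abstract only states that weights peak at medium difficulty; the Gaussian shape, the success-rate proxy for difficulty, and the parameter values are assumptions made here for illustration.

import numpy as np

def curriculum_weight(success_rate, peak=0.5, width=0.2):
    """Weight a training sample by its estimated difficulty.

    `success_rate` is the fraction of sampled rollouts that solved the
    puzzle (1.0 = trivially easy, 0.0 = currently unsolvable). A
    Gaussian bump centred at `peak` up-weights medium-difficulty
    samples, where group-relative advantages in GRPO are most
    informative; the exact shape is an assumption for illustration.
    """
    return float(np.exp(-((success_rate - peak) ** 2) / (2 * width ** 2)))

# Trivial and currently unsolvable samples get small weights; medium ones dominate.
for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"success_rate={r:.2f} -> weight={curriculum_weight(r):.3f}")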