TraPO: A Semi-Supervised Reinforcement Learning Framework for Boosting LLM Reasoning
Shenzhi Yang, Guangcheng Zhu, Xing Zheng, Yingfan Ma, Zhongqi Chen, Bowen Song, Weiqiang Wang, Junbo Zhao, Gang Chen, Haobo Wang
2025-12-17
Summary
This paper focuses on improving how large reasoning models, like those used to solve math problems, are trained with a technique called Reinforcement Learning with Verifiable Rewards (RLVR). The goal is to make these models better at reasoning without needing a huge amount of manually checked example problems.
What's the problem?
Training reasoning models with RLVR usually requires large numbers of human-verified answers, which is expensive and time-consuming. Recent attempts to avoid this rely only on the model's own internal consistency (for example, entropy or majority voting among its answers), but they run into a problem: without external supervision, the model starts reinforcing its own mistakes, and performance eventually collapses later in training.
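To make the consistency idea concrete, here is a minimal sketch of a majority-voting reward of the kind these unsupervised methods use: rollouts whose final answer agrees with the consensus get rewarded, whether or not the consensus is actually correct. The function name and list-based interface are illustrative assumptions, not taken from any specific method.

```python
from collections import Counter

def majority_vote_reward(rollout_answers):
    """Unsupervised consistency reward: each rollout is rewarded if its final
    answer agrees with the most common answer across all rollouts for the
    same question (no ground-truth label is used)."""
    consensus, _ = Counter(rollout_answers).most_common(1)[0]
    return [1.0 if a == consensus else 0.0 for a in rollout_answers]

# Example: 8 sampled answers to the same math problem.
print(majority_vote_reward(["12", "12", "15", "12", "7", "12", "12", "15"]))
# -> [1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0]
```

Because the consensus itself can be wrong, training on this signal alone can push the model toward confidently repeated mistakes, which is the collapse described above.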
What's the solution?
The researchers developed a new approach called TraPO that combines a small set of verified example problems with a much larger set of unlabeled problems, on which the model learns from its own consistency. TraPO identifies unlabeled problems whose learning trajectories resemble those of the labeled examples and focuses training on them, so that only reasoning patterns verified on labeled data get reinforced. This keeps the model from getting stuck amplifying incorrect reasoning.
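A minimal sketch of the selection idea, assuming each sample's "learning trajectory" is summarized as a fixed-length vector of per-step training statistics; the cosine-similarity measure, the threshold, and the function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two trajectory vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_reliable_unlabeled(labeled_trajs, unlabeled_trajs, threshold=0.9):
    """Keep indices of unlabeled samples whose learning trajectory is close
    to at least one labeled sample's trajectory."""
    selected = []
    for idx, u in enumerate(unlabeled_trajs):
        best_match = max(cosine_similarity(u, l) for l in labeled_trajs)
        if best_match >= threshold:
            selected.append(idx)
    return selected

# Example: trajectories summarized as per-step statistics (e.g., rollout
# accuracy or agreement over training steps), one vector per sample.
labeled = [np.array([0.2, 0.5, 0.8]), np.array([0.1, 0.4, 0.9])]
unlabeled = [np.array([0.25, 0.55, 0.75]), np.array([0.9, 0.3, 0.1])]
print(select_reliable_unlabeled(labeled, unlabeled))  # -> [0]
```

Only the unlabeled samples that pass this filter contribute consistency-based rewards to training; the rest are held back so their (possibly wrong) consensus answers cannot be reinforced.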
Why it matters?
This work matters because it sharply reduces the need for expensive human verification when training these reasoning models. TraPO even surpasses a fully supervised model while using only about 10% of the labeled data, making it more practical to build and improve these AI systems for complex tasks like solving math and science problems.
Abstract
Reinforcement learning with verifiable rewards (RLVR) has proven effective in training large reasoning models (LRMs) by leveraging answer-verifiable signals to guide policy optimization, which, however, suffers from high annotation costs. To alleviate this problem, recent work has explored unsupervised RLVR methods that derive rewards solely from the model's internal consistency, such as through entropy and majority voting. While seemingly promising, these methods often suffer from model collapse in the later stages of training, which may arise from the reinforcement of incorrect reasoning patterns in the absence of external supervision. In this work, we investigate a novel semi-supervised RLVR paradigm that utilizes a small labeled set to guide RLVR training on unlabeled samples. Our key insight is that supervised rewards are essential for stabilizing consistency-based training on unlabeled samples, ensuring that only reasoning patterns verified on labeled instances are incorporated into RL training. Technically, we propose an effective policy optimization algorithm, TraPO, that identifies reliable unlabeled samples by matching their learning trajectory similarity to labeled ones. Building on this, TraPO achieves remarkable data efficiency and strong generalization on six widely used mathematical reasoning benchmarks (AIME24/25, AMC, MATH-500, Minerva, and Olympiad) and three out-of-distribution tasks (ARC-c, GPQA-diamond, and MMLU-pro). With only 1K labeled and 3K unlabeled samples, TraPO reaches 42.6% average accuracy, surpassing the best unsupervised method trained on 45K unlabeled samples (38.3%). Notably, when using 4K labeled and 12K unlabeled samples, TraPO even outperforms the fully supervised model trained on the full 45K labeled samples on all benchmarks, while using only 10% of the labeled data. The code is available via https://github.com/ShenzhiYang2000/TRAPO.
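The abstract's core recipe (verifiable rewards on a small labeled set, consistency rewards only on trajectory-matched unlabeled samples) could be wired into a policy-optimization loop roughly as below. This is a hedged sketch with assumed field names (gold, reliable, consensus), not TraPO's actual implementation.

```python
def verified_reward(answer, gold):
    """Supervised reward: 1 if the answer matches the verifiable gold answer."""
    return 1.0 if answer == gold else 0.0

def assign_rewards(batch):
    """Per-sample rewards in a mixed batch: verifiable rewards for the small
    labeled set, consensus-based rewards only for unlabeled samples that passed
    the trajectory-similarity filter, and zero weight otherwise."""
    rewards = []
    for sample in batch:
        if sample["gold"] is not None:                # labeled sample
            r = verified_reward(sample["answer"], sample["gold"])
        elif sample["reliable"]:                      # trajectory-matched unlabeled sample
            r = 1.0 if sample["answer"] == sample["consensus"] else 0.0
        else:                                         # filtered out: contributes nothing
            r = 0.0
        rewards.append(r)
    return rewards

# Example mixed batch: one labeled and two unlabeled samples.
batch = [
    {"answer": "42", "gold": "42", "reliable": True,  "consensus": "42"},
    {"answer": "17", "gold": None, "reliable": True,  "consensus": "17"},
    {"answer": "3",  "gold": None, "reliable": False, "consensus": "5"},
]
print(assign_rewards(batch))  # -> [1.0, 1.0, 0.0]
```

In this reading, the labeled set anchors the reward signal while the filtered unlabeled set supplies the bulk of the training data, which is what lets the method match or beat full supervision with roughly a tenth of the labels.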