Process Reinforcement through Implicit Rewards
Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou
2025-02-04
Summary
This paper introduces PRIME, a new method that helps AI models learn to solve complicated problems, such as math and coding, more effectively. It improves how these models are rewarded during training by scoring the steps they take to solve a problem, not just whether they get the final answer right.
What's the problem?
When teaching AI to solve multi-step problems, the usual approach is to reward it only for getting the correct final answer. This makes it difficult for the AI to learn from its mistakes or figure out which steps in its process were helpful. A better idea is to reward the AI for each step it takes (called dense rewards), but this is hard to do because it requires a lot of expensive and detailed feedback. It also risks the AI finding shortcuts or 'hacks' to get rewards without actually solving the problem correctly.
What's the solution?
The researchers created PRIME, a method that uses implicit rewards to train AI without needing detailed feedback for every step. Instead of manually scoring each step, PRIME looks at the final result and uses that to figure out which steps were likely good or bad. This makes training faster and less expensive while still helping the AI learn how to solve problems step by step. PRIME was tested on tasks like math and coding, and it showed big improvements compared to older methods, even with much less training data.
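The step-by-step scoring described above can be sketched in a few lines. This is an illustrative reading of the paper's implicit-reward idea, not its actual code: each token (step) gets a reward proportional to how much more likely the outcome-trained model finds it than a fixed reference model does. The function name, the log-probability lists, and `beta` are all hypothetical.

```python
def implicit_process_rewards(prm_logprobs, ref_logprobs, beta=1.0):
    """Token-level implicit process rewards as scaled log-ratios.

    Illustrative sketch: step t receives
        r_t = beta * (log pi_phi(y_t | y_<t) - log pi_ref(y_t | y_<t)),
    where pi_phi is a model trained only on final-answer (outcome) labels
    and pi_ref is a frozen reference model. No per-step human labels are
    needed; steps the outcome-trained model prefers get positive reward.
    """
    return [beta * (lp - lr) for lp, lr in zip(prm_logprobs, ref_logprobs)]

# Toy rollout of 4 steps: step 2 looks worse to the outcome-trained
# model than to the reference, so it gets a negative reward.
prm_lp = [-0.2, -1.5, -0.1, -0.8]
ref_lp = [-0.3, -0.9, -0.4, -0.8]
rewards = implicit_process_rewards(prm_lp, ref_lp)  # [0.1, -0.6, 0.3, 0.0]
```

The appeal is that the dense, per-step signal falls out of two log-probabilities that training already computes, rather than from expensive human annotation of each step.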
Why it matters?
This research is important because it helps AI become better at solving complex problems that require logical thinking and multiple steps. By making training more efficient and effective, PRIME could lead to smarter AI systems that can handle tasks like advanced math, programming, or scientific research. This could make AI tools more useful in education, technology, and many other fields where problem-solving is essential.
Abstract
Dense process rewards have proven to be a more effective alternative to sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs, since their fine-grained feedback has the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment, this potential remains largely unrealized. This can be primarily attributed to the challenges of training process reward models (PRMs) online, where collecting high-quality process labels is prohibitively expensive, making them particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward model training phase that existing approaches require, substantially reducing the development overhead. We demonstrate PRIME's effectiveness on competition-level math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, our resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data.
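The abstract's claim that the PRM can be updated online "using only policy rollouts and outcome labels" can be sketched as a binary cross-entropy objective on a sequence-level implicit reward. This is a simplified illustration under assumed conventions (the function names, `beta`, and the use of a plain sigmoid are ours, not the paper's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def outcome_ce_loss(logratio_sum, outcome, beta=1.0):
    """Cross-entropy loss for updating an implicit PRM from outcome labels.

    Illustrative sketch: the sequence-level implicit reward is
    beta * sum_t [log pi_phi(y_t|y_<t) - log pi_ref(y_t|y_<t)], squashed
    through a sigmoid and scored against the binary outcome label
    (1 = final answer correct). Only the rollout and its outcome label
    are needed; per-step process labels are never collected.
    """
    p = sigmoid(beta * logratio_sum)
    return -(outcome * math.log(p) + (1 - outcome) * math.log(1 - p))

# A rollout the PRM already prefers (positive log-ratio sum) incurs a
# small loss when its final answer is correct, a large one when it is not.
good = outcome_ce_loss(logratio_sum=2.0, outcome=1)
bad = outcome_ce_loss(logratio_sum=2.0, outcome=0)
```

Minimizing this loss over fresh policy rollouts is what keeps the PRM current with the policy, which is the property the abstract credits for reducing reward hacking.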