Free Process Rewards without Process Labels

Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, Hao Peng

2024-12-04

Summary

This paper introduces an approach to training process reward models (PRMs), which evaluate each step of an AI system's reasoning, without requiring correctness labels for every intermediate step, making such models much cheaper to build.

What's the problem?

Training a process reward model, which gives feedback on each step of reasoning, requires a lot of labeled data indicating the correctness of every intermediate step. Collecting this data is difficult and time-consuming, which limits the development of effective PRMs. As a result, many AI systems rely on outcome reward models (ORMs) that only evaluate the final answer, which doesn’t provide detailed guidance throughout the reasoning process.

What's the solution?

The researchers propose a method to obtain an implicit PRM by training an ORM on cheaper response-level labels instead of detailed step-by-step labels. The key idea is to parameterize the outcome reward as the log-likelihood ratio between the policy model and a reference model; under this parameterization, rewards for intermediate steps can be recovered at no extra cost. Their experiments show that this implicit PRM outperforms traditional methods while using significantly less training data, making the training process far more efficient.

Why it matters?

This research is important because it simplifies the training of AI systems that need to reason through complex problems. By allowing models to learn from less detailed information, it opens up new possibilities for improving reasoning capabilities in various applications, such as mathematics and logical problem-solving, making these advanced AI technologies more accessible and effective.

Abstract

Different from its counterpart, outcome reward models (ORMs), which evaluate entire responses, a process reward model (PRM) scores a reasoning trajectory step by step, providing denser and more fine-grained rewards. However, training a PRM requires labels annotated at every intermediate step, presenting significant challenges for both manual and automatic data collection. This paper aims to address this challenge. Both theoretically and empirically, we show that an implicit PRM can be obtained at no additional cost, by simply training an ORM on the cheaper response-level labels. The only assumption is to parameterize the outcome reward as the log-likelihood ratio of the policy and reference models, which can be optimized regardless of the specific choice of loss objectives. In experiments, we instantiate our implicit PRMs with various objectives and evaluate their performance on MATH. We show that our implicit PRM outperforms a strong MCTS-based baseline à la Math-Shepherd using less than 1/38 of the training data. Its performance can be further improved with majority voting. We further find that scaling up instructions and responses benefits our implicit PRM, and the latter brings a larger gain. In particular, our implicit PRM, when instantiated with the cross-entropy (CE) loss, is more data-efficient and can keep improving generation models even when trained with only one response per instruction, a setup that suffers from extreme data scarcity and imbalance. Further, instructions should be relevant to downstream tasks, while the diversity of responses does not bring gains. Surprisingly, training on extra Math-Shepherd step labels brings no further improvements to our implicit PRM trained on only outcome data. We hope that our work will encourage a rethinking of PRM training approaches and contribute to making the training of PRMs more accessible.
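The parameterization the abstract refers to can be written out as follows (notation is mine, not quoted from the paper): the outcome reward is a scaled log-likelihood ratio, and the process reward at step t telescopes into a per-step log ratio.

```latex
% Outcome reward as a scaled log-likelihood ratio
% (\beta: scaling coefficient, \pi_\theta: policy, \pi_{\mathrm{ref}}: reference)
r_\theta(\mathbf{y}) = \beta \log
  \frac{\pi_\theta(\mathbf{y} \mid \mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{y} \mid \mathbf{x})}

% The implied process reward for step t is the difference of cumulative
% rewards, which reduces to the log ratio of step t alone:
r_\theta^{(t)}
= \beta \log \frac{\pi_\theta(\mathbf{y}_{\le t} \mid \mathbf{x})}
               {\pi_{\mathrm{ref}}(\mathbf{y}_{\le t} \mid \mathbf{x})}
- \beta \log \frac{\pi_\theta(\mathbf{y}_{< t} \mid \mathbf{x})}
               {\pi_{\mathrm{ref}}(\mathbf{y}_{< t} \mid \mathbf{x})}
= \beta \log \frac{\pi_\theta(y_t \mid \mathbf{x}, \mathbf{y}_{< t})}
               {\pi_{\mathrm{ref}}(y_t \mid \mathbf{x}, \mathbf{y}_{< t})}
```

Because the ORM is trained only on response-level labels, the step rewards on the right-hand side come for free once training is done.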