
GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning

Jian Zhao, Runze Liu, Kaiyan Zhang, Zhimu Zhou, Junqi Gao, Dong Li, Jiafei Lyu, Zhouyi Qian, Biqing Qi, Xiu Li, Bowen Zhou

2025-04-04


Summary

This paper introduces GenPRM, a process reward model that improves how AI language models reason by having the reward model itself explain and verify each step of a solution before judging it.

What's the problem?

Current process reward models (PRMs) face three limitations: they provide limited process supervision and generalize poorly, they predict only a single numeric score per step rather than using the model's ability to generate text, and they cannot improve by spending more compute at test time.

What's the solution?

The researchers created GenPRM, a generative process reward model that reasons out loud: before judging each step of a solution, it writes an explicit chain-of-thought critique and runs code to verify the step. To train it, they also propose Relative Progress Estimation (RPE) and a rationale synthesis pipeline that produce high-quality step labels and explanations.
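The step-verification idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: names like `judge_step` are hypothetical, and the generative judge is stubbed by directly executing each step's arithmetic claim instead of prompting an LLM to write the rationale and check.

```python
# Hedged sketch of per-step code verification in a generative PRM.
# A real GenPRM would prompt an LLM to produce a chain-of-thought
# critique plus a verification snippet; here we stub that generation.

def code_check(expression: str, claimed_value: float) -> bool:
    # Execute the verification expression in a bare namespace and
    # compare it against the value the solution step claims.
    try:
        return abs(eval(expression, {"__builtins__": {}}) - claimed_value) < 1e-9
    except Exception:
        return False

def judge_step(step: dict) -> bool:
    # Stand-in for one generative-PRM judgment of a single step.
    return code_check(step["expression"], step["claimed_value"])

def score_solution(steps: list) -> list:
    # Process supervision: record a correctness label for every step,
    # not just a single score for the final answer.
    return [judge_step(s) for s in steps]

solution = [
    {"expression": "3 * 4", "claimed_value": 12},   # correct step
    {"expression": "12 + 5", "claimed_value": 18},  # arithmetic slip
]
labels = score_solution(solution)  # one boolean per step
```

The per-step labels are what distinguish process supervision from outcome-only reward: the second step is flagged even though a later step might still stumble onto the right final answer.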

Why does it matter?

This work matters because it yields verifiers that are both smaller and stronger: trained on only 23K examples, a 1.5B-parameter GenPRM outperforms GPT-4o on ProcessBench, and a 7B version surpasses the much larger Qwen2.5-Math-PRM-72B.

Abstract

Recent advancements in Large Language Models (LLMs) have shown that it is promising to utilize Process Reward Models (PRMs) as verifiers to enhance the performance of LLMs. However, current PRMs face three key challenges: (1) limited process supervision and generalization capabilities, (2) dependence on scalar value prediction without leveraging the generative abilities of LLMs, and (3) inability to scale the test-time compute of PRMs. In this work, we introduce GenPRM, a generative process reward model that performs explicit Chain-of-Thought (CoT) reasoning with code verification before providing judgment for each reasoning step. To obtain high-quality process supervision labels and rationale data, we propose Relative Progress Estimation (RPE) and a rationale synthesis framework that incorporates code verification. Experimental results on ProcessBench and several mathematical reasoning tasks show that GenPRM significantly outperforms prior PRMs with only 23K training data from the MATH dataset. Through test-time scaling, a 1.5B GenPRM outperforms GPT-4o, and a 7B GenPRM surpasses Qwen2.5-Math-PRM-72B on ProcessBench. Additionally, GenPRM demonstrates strong abilities to serve as a critic model for policy model refinement. This work establishes a new paradigm for process supervision that bridges the gap between PRMs and critic models in LLMs. Our code, model, and data will be available at https://ryanliu112.github.io/GenPRM.
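The "test-time scaling" in the abstract refers to spending more compute on the verifier itself, e.g. by sampling several independent judgments per step and aggregating them. The sketch below illustrates that idea under stated assumptions: `sample_judgment` is a hypothetical stand-in for one stochastic generative-PRM pass, and majority voting is used as one plausible aggregation rule.

```python
import random

# Hedged sketch of test-time scaling for a verifier: sample multiple
# independent judgments of the same step and take a majority vote.

def sample_judgment(p_correct: float, rng: random.Random) -> bool:
    # Stand-in for one stochastic generative-PRM judgment; p_correct is
    # the (unknown in practice) chance a single pass labels the step right.
    return rng.random() < p_correct

def majority_vote(p_correct: float, n_samples: int, seed: int = 0) -> bool:
    # More samples -> more verifier compute -> a more reliable label.
    rng = random.Random(seed)
    votes = [sample_judgment(p_correct, rng) for _ in range(n_samples)]
    return 2 * sum(votes) > len(votes)
```

The design intuition is standard: if each sampled judgment is right more often than not, aggregating many of them drives the error rate down, which is why a small PRM with extra test-time samples can rival a much larger single-pass verifier.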