SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning

Salman Rahman, Sruthi Gorantla, Arpit Gupta, Swastik Roy, Nanyun Peng, Yang Liu

2025-12-09

Summary

This paper introduces a new method, SPARK, for training reward models used in reinforcement learning, specifically for tasks requiring step-by-step reasoning like solving math problems. It aims to reduce the need for humans to provide detailed feedback at every step of the process.

What's the problem?

Training reward models usually requires either extensive human effort to label each step of a solution as good or bad, or a 'gold standard' reference answer to compare against. This step-by-step feedback is expensive to collect and sometimes impossible to obtain, especially when there is no single right way to solve a problem or no readily available correct answer to check against.

What's the solution?

SPARK works in three stages. First, it uses two AI models: a generator that produces many different solutions to a problem, and a verifier that checks those solutions for correctness, using both self-consistency (running several independent checks and seeing whether they agree) and meta-critique (having the verifier critique its own reasoning). Second, the verifier's feedback is used as synthetic training data to fine-tune a process reward model. Finally, this reward model guides another AI as it learns to solve problems via reinforcement learning, with format constraints added to prevent it from 'gaming' the system for a higher reward. Tested on math problems, the approach outperformed reference-guided training (which relies on ground-truth answers) and even a very powerful model like GPT-4o as a judge.
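The core of the first stage is aggregating several independent verifier passes into one step-level label per reasoning step. A minimal sketch of that majority-vote aggregation, assuming each pass returns a True/False judgment per step (the function name and data layout here are illustrative, not from the paper):

```python
from collections import Counter

def aggregate_step_labels(verifications):
    """Majority-vote each step's correct/incorrect label across
    independent verifier passes (the 'parallel scaling' idea).

    verifications: list of passes, each a list of per-step booleans.
    Returns one consensus boolean per step.
    """
    labels = []
    for step_votes in zip(*verifications):  # group votes by step
        counts = Counter(step_votes)
        labels.append(counts.most_common(1)[0][0])
    return labels

# Three independent verifier passes over a 4-step solution.
passes = [
    [True, True, False, False],
    [True, False, False, False],
    [True, True, False, True],
]
consensus = aggregate_step_labels(passes)
print(consensus)  # [True, True, False, False]
```

The consensus labels would then serve as synthetic supervision for fine-tuning the process reward model, replacing human step annotations.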

Why it matters?

This research is important because it allows AI systems to learn complex tasks, like mathematical reasoning, without relying heavily on expensive and scarce human feedback. This opens up possibilities for applying reinforcement learning to a wider range of problems where getting perfect answers or detailed step-by-step guidance is difficult or impossible, ultimately making AI more adaptable and useful in real-world scenarios.

Abstract

Process reward models (PRMs) that provide dense, step-level feedback have shown promise for reinforcement learning, yet their adoption remains limited by the need for expensive step-level annotations or ground truth references. We propose SPARK: a three-stage framework where in the first stage a generator model produces diverse solutions and a verifier model evaluates them using parallel scaling (self-consistency) and sequential scaling (meta-critique). In the second stage, we use these verification outputs as synthetic training data to fine-tune generative process reward models, which subsequently serve as reward signals during training. We show that aggregating multiple independent verifications at the step level produces training data for process reward models that surpass ground-truth outcome supervision, achieving 67.5 F1 on ProcessBench (a benchmark for identifying erroneous steps in mathematical reasoning) compared to 66.4 for reference-guided training and 61.9 for GPT-4o. In the final stage, we apply our generative PRM with chain-of-thought verification (PRM-CoT) as the reward model in RL experiments on mathematical reasoning, and introduce format constraints to prevent reward hacking. Using Qwen2.5-Math-7B, we achieve 47.4% average accuracy across six mathematical reasoning benchmarks, outperforming ground-truth-based RLVR (43.9%). Our work enables reference-free RL training that exceeds ground-truth methods, opening new possibilities for domains lacking verifiable answers or accessible ground truth.
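The abstract mentions format constraints that prevent the policy from reward hacking during RL. One common way to implement such a gate (a sketch under assumptions; the paper's exact constraint and penalty values are not specified here) is to withhold the PRM reward unless the response matches a required answer format, such as a final `\boxed{...}` expression:

```python
import re

def shaped_reward(response: str, prm_score: float) -> float:
    """Gate the PRM reward behind a format check.

    Hypothetical constraint: the response must contain a final
    \\boxed{...} answer. A malformed response gets a fixed penalty,
    so the policy cannot collect reward by emitting degenerate text.
    """
    if re.search(r"\\boxed\{[^{}]+\}", response) is None:
        return -1.0  # format violation: penalty, PRM score ignored
    return prm_score

print(shaped_reward("... so the answer is \\boxed{42}", 0.87))  # 0.87
print(shaped_reward("the answer is 42", 0.87))                  # -1.0
```

Gating rather than blending keeps the two signals separable: the policy first learns to produce well-formed answers, after which the PRM's step-aware score shapes the reasoning itself.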