Inference-Time Scaling for Generalist Reward Modeling

Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, Yu Wu

2025-04-04

Summary

This paper is about making the reward signals used to train AI language models more accurate, and showing that spending extra compute when the reward is computed, not just during training, makes those signals better.

What's the problem?

It's hard to give AI language models accurate reward signals for general tasks, where there are no verifiable answers or fixed rules to check against. Without good rewards, reinforcement learning struggles to teach models to reason effectively.

What's the solution?

The researchers propose Self-Principled Critique Tuning (SPCT), which trains a generative reward model (DeepSeek-GRM) to write its own evaluation principles and critiques for each query, and then score responses based on them. They also scale up inference compute by sampling many reward judgments in parallel, voting over them, and using a separate "meta" reward model to filter out low-quality judgments before the vote.
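The sampling-and-voting idea can be illustrated with a toy sketch. Everything below is a hypothetical stand-in (in the paper, both the GRM and the meta RM are learned language models, not the stubs used here); it only shows the shape of the recipe: sample several reward generations in parallel, filter with a meta RM, and vote.

```python
import random

def grm_sample(query: str, response: str, seed: int) -> int:
    """Stand-in for one sampled GRM generation (principles + critique),
    reduced to the pointwise score it would end with (here, 1-10)."""
    rng = random.Random(seed)  # placeholder for sampling a real LLM
    return rng.randint(6, 9)

def meta_rm_quality(query: str, response: str, score: int) -> float:
    """Stand-in meta RM: rates how trustworthy one sampled critique is."""
    return 1.0  # a real meta RM would be a learned scoring model

def scaled_reward(query: str, response: str, k: int = 8, keep: int = 4) -> float:
    # 1) Parallel sampling: k independent reward generations.
    samples = [grm_sample(query, response, seed=i) for i in range(k)]
    # 2) Meta-RM-guided filtering: keep only the most trustworthy samples.
    kept = sorted(samples,
                  key=lambda s: meta_rm_quality(query, response, s),
                  reverse=True)[:keep]
    # 3) Voting: aggregate the surviving scores into the final reward.
    return sum(kept) / len(kept)

print(scaled_reward("What is 2+2?", "4"))
```

Raising `k` is the "inference-time scaling" knob: more sampled judgments per query cost more compute but, per the paper's claim, yield a more reliable aggregate reward than a single generation.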

Why it matters?

This work matters because better, more scalable reward signals make reinforcement learning effective beyond narrow, verifiable tasks, which can lead to language models that reason better and solve more complex, open-ended problems.

Abstract

Reinforcement learning (RL) has been widely adopted in post-training for large language models (LLMs) at scale. Recently, the incentivization of reasoning capabilities in LLMs from RL indicates that proper learning methods could enable effective inference-time scalability. A key challenge of RL is to obtain accurate reward signals for LLMs in various domains beyond verifiable questions or artificial rules. In this work, we investigate how to improve reward modeling (RM) with more inference compute for general queries, i.e., the inference-time scalability of generalist RM, and further, how to improve the effectiveness of performance-compute scaling with proper learning methods. For the RM approach, we adopt pointwise generative reward modeling (GRM) to enable flexibility for different input types and potential for inference-time scaling. For the learning method, we propose Self-Principled Critique Tuning (SPCT) to foster scalable reward generation behaviors in GRMs through online RL, to generate principles adaptively and critiques accurately, resulting in DeepSeek-GRM models. Furthermore, for effective inference-time scaling, we use parallel sampling to expand compute usage, and introduce a meta RM to guide the voting process for better scaling performance. Empirically, we show that SPCT significantly improves the quality and scalability of GRMs, outperforming existing methods and models in various RM benchmarks without severe biases, and could achieve better performance compared to training-time scaling. DeepSeek-GRM still faces challenges in some tasks, which we believe can be addressed by future efforts in generalist reward systems. The models will be released and open-sourced.