Expanding RL with Verifiable Rewards Across Diverse Domains
Yi Su, Dian Yu, Linfeng Song, Juntao Li, Haitao Mi, Zhaopeng Tu, Min Zhang, Dong Yu
2025-04-01
Summary
This paper explores using reinforcement learning to train AI models in diverse fields like medicine, chemistry, psychology, and economics, with other AI models checking answers and handing out rewards for correct ones.
What's the problem?
It's hard to train AI in these fields because reference answers are often unstructured or scarce, which makes it difficult to automatically judge what's right and wrong.
What's the solution?
The researchers found that different AI models largely agree when judging answers against objective references, so they could use an AI model to automatically check answers and assign rewards, even when the reference answers weren't perfectly structured.
Why does it matter?
This work matters because it can make it easier to train AI in many different areas, even when there isn't much perfectly labeled data available.
Abstract
Reinforcement learning (RL) with verifiable rewards (RLVR) has shown promising results in mathematical reasoning and coding tasks where well-structured reference answers are available. However, its applicability to broader domains remains underexplored. In this work, we study the extension of RLVR to more diverse domains such as medicine, chemistry, psychology, and economics. We observe high agreement in binary judgments across different large language models (LLMs) when objective reference answers exist, which challenges the necessity of large-scale annotation for training domain-specific reward models. To address the limitations of binary rewards when handling unstructured reference answers, we further incorporate model-based soft scoring into RLVR to improve its flexibility. Our experiments show that a distilled generative reward model can serve as an effective cross-domain verifier, providing reliable reward signals for RL without requiring domain-specific annotations. By fine-tuning a base 7B model using various RL algorithms against our reward model, we obtain policies that outperform state-of-the-art open-source aligned LLMs such as Qwen2.5-72B-Instruct and DeepSeek-R1-Distill-Qwen-32B by a large margin, across domains in free-form answer settings. This also strengthens RLVR's robustness and scalability, highlighting its potential for real-world applications with noisy or weak labels.
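The reward setup the abstract describes, a hard binary judgment when an objective reference answer exists, plus a model-based soft score for unstructured references, can be sketched as follows. All function names here are illustrative assumptions; the paper's actual verifier is a distilled generative reward model, not these toy functions.

```python
from typing import Optional

# Hypothetical sketch of the two reward signals described in the abstract.

def binary_reward(judge_verdict: str) -> float:
    """Hard reward: 1.0 if the judge model says the answer matches the reference."""
    return 1.0 if judge_verdict.strip().lower() == "yes" else 0.0

def soft_reward(judge_score: float) -> float:
    """Soft reward: clamp a judge-assigned score into [0, 1] for unstructured references."""
    return min(max(judge_score, 0.0), 1.0)

def verifier_reward(judge_verdict: str, judge_score: Optional[float] = None) -> float:
    """Prefer the model-based soft score when the judge provides one;
    otherwise fall back to the binary verdict."""
    if judge_score is not None:
        return soft_reward(judge_score)
    return binary_reward(judge_verdict)

print(verifier_reward("Yes"))       # 1.0
print(verifier_reward("No", 0.72))  # 0.72
```

In RL fine-tuning, a scalar like this would be computed per sampled response and fed to the policy-gradient update; the key point from the paper is that the soft score keeps the signal informative when answers are free-form.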