Beyond Length Scaling: Synergizing Breadth and Depth for Generative Reward Models

Qiyuan Zhang, Yufei Wang, Tianhe Wu, Can Xu, Qingfeng Sun, Kai Zheng, Xue Liu, Chen Ma

2026-03-04

Summary

This paper focuses on improving how we evaluate large language models, specifically by making the 'reasoning' process that reward models use more reliable. It introduces a new framework called Mix-GRM that trains these evaluator models to explain their judgments more effectively and rewards them when that reasoning is sound.

What's the problem?

Currently, when trying to get language models to explain their thinking (Chain-of-Thought reasoning), simply making those explanations longer doesn't always work well. The problem is that there are different *types* of reasoning – some models are good at exploring many different ideas (Breadth-CoT), while others are good at carefully analyzing a single line of thought (Depth-CoT). Existing methods treat all reasoning the same, ignoring these differences and limiting performance.

What's the solution?

The researchers developed Mix-GRM, which breaks down the reasoning process into these two types – Breadth-CoT and Depth-CoT – and then trains the model to use the *right* type of reasoning for the task at hand. They use a combination of techniques, including fine-tuning the model with examples and then using reinforcement learning with rewards based on how well the reasoning checks out. This helps the model learn to choose between exploring broadly or digging deeply depending on what the question needs.
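The two ideas in this paragraph can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual code: the function names, the string labels, and the routing rule are all illustrative. It shows (1) a verifiable reward that scores 1 only when the model's final verdict matches the ground truth, and (2) a router that picks Breadth-CoT for subjective preference tasks and Depth-CoT for objective correctness tasks, mirroring the divergence the paper reports.

```python
# Hypothetical sketch of the ideas above; names and labels are
# illustrative, not taken from the Mix-GRM codebase.

def verifiable_reward(predicted_verdict: str, gold_verdict: str) -> float:
    """RLVR-style reward: 1.0 when the model's final judgment
    matches the verifiable ground truth, else 0.0."""
    return 1.0 if predicted_verdict == gold_verdict else 0.0


def choose_cot_style(task_type: str) -> str:
    """Toy router reflecting the paper's finding: breadth suits
    subjective preference tasks, depth suits objective ones."""
    return "B-CoT" if task_type == "subjective" else "D-CoT"
```

In the actual framework the model is not hand-routed like this; the point of the RLVR stage is that this allocation emerges on its own, with the reward signal acting as the "switching amplifier" described in the abstract.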

Why it matters?

This work is important because it significantly improves the accuracy of evaluating language models. By understanding and utilizing different reasoning styles, Mix-GRM outperforms previous methods by a substantial margin. It also shows that models can learn to *automatically* choose the best reasoning approach for a given task, which is a step towards more intelligent and adaptable AI systems.

Abstract

Recent advancements in Generative Reward Models (GRMs) have demonstrated that scaling the length of Chain-of-Thought (CoT) reasoning considerably enhances the reliability of evaluation. However, current works predominantly rely on unstructured length scaling, ignoring the divergent efficacy of different reasoning mechanisms: Breadth-CoT (B-CoT, i.e., multi-dimensional principle coverage) and Depth-CoT (D-CoT, i.e., substantive judgment soundness). To address this, we introduce Mix-GRM, a framework that reconfigures raw rationales into structured B-CoT and D-CoT through a modular synthesis pipeline, subsequently employing Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Rewards (RLVR) to internalize and optimize these mechanisms. Comprehensive experiments demonstrate that Mix-GRM establishes a new state-of-the-art across five benchmarks, surpassing leading open-source RMs by an average of 8.2%. Our results reveal a clear divergence in reasoning: B-CoT benefits subjective preference tasks, whereas D-CoT excels in objective correctness tasks. Consequently, misaligning the reasoning mechanism with the task directly degrades performance. Furthermore, we demonstrate that RLVR acts as a switching amplifier, inducing an emergent polarization where the model spontaneously allocates its reasoning style to match task demands. The synthesized data and models are released at https://huggingface.co/collections/DonJoey/mix-grm, and the code is released at https://github.com/Don-Joey/Mix-GRM.