CoBA-RL: Capability-Oriented Budget Allocation for Reinforcement Learning in LLMs

Zhiyuan Yao, Yi-Kai Zhang, Yuxin Chen, Yueqing Sun, Zishan Xu, Yu Yang, Tianhao Hu, Qi Gu, Hui Su, Xunliang Cai

2026-02-04

Summary

This paper introduces a new method, CoBA-RL, for improving how large language models (LLMs) learn through a process called reinforcement learning. It focuses on making the learning process more efficient by smartly deciding how much computing power to dedicate to different training examples.

What's the problem?

Currently, when LLMs are trained with reinforcement learning, every training example usually gets the same rollout budget, meaning the same number of practice attempts, even though some examples teach the model much more than others. Existing methods that try to adjust this budget usually look only at instance-level signals, such as whether the model passes a specific task, and don't consider *how much* the model is actually learning at its current stage. As a result, resources get wasted on examples that barely help the model improve, while examples that could lead to big gains don't receive enough attention.

What's the solution?

CoBA-RL solves this by estimating how much each training example is likely to improve the model's reasoning ability. It uses a 'Capability-Oriented Value function' to predict the potential learning gain from each example, and then applies a heap-based greedy strategy, essentially a priority queue that always serves the most valuable task next, to allocate more rollouts to the examples expected to yield the biggest improvements. This way, the model spends its computing budget on the examples that matter most.
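To make the heap-based greedy idea concrete, here is a minimal sketch of how a fixed rollout budget could be spread across tasks using a priority queue. This is not the paper's implementation: `estimate_training_value` is a hypothetical stand-in for CoBA-RL's Capability-Oriented Value function, and the diminishing-returns formula inside it is an assumption made purely for illustration.

```python
import heapq

def estimate_training_value(task, rollouts_assigned):
    """Hypothetical placeholder for a capability-oriented value estimate.
    Assumes the marginal gain from one more rollout shrinks as a task
    accumulates rollouts (diminishing returns)."""
    return task["base_value"] / (1 + rollouts_assigned)

def allocate_rollouts(tasks, total_budget, min_rollouts=1):
    """Heap-based greedy allocation: every task gets a minimum number of
    rollouts, then each remaining rollout goes to the task whose next
    rollout currently has the highest estimated training value."""
    allocation = {t["id"]: min_rollouts for t in tasks}
    remaining = total_budget - min_rollouts * len(tasks)

    # Max-heap via negated values: (-value, task_id, task).
    heap = [(-estimate_training_value(t, allocation[t["id"]]), t["id"], t)
            for t in tasks]
    heapq.heapify(heap)

    while remaining > 0 and heap:
        neg_value, tid, task = heapq.heappop(heap)
        allocation[tid] += 1
        remaining -= 1
        # Re-insert with the updated (typically lower) marginal value.
        heapq.heappush(
            heap, (-estimate_training_value(task, allocation[tid]), tid, task)
        )

    return allocation

# Toy usage: three tasks with assumed base values and 12 rollouts in total.
tasks = [{"id": "easy", "base_value": 0.1},
         {"id": "medium", "base_value": 0.9},
         {"id": "hard", "base_value": 0.5}]
print(allocate_rollouts(tasks, total_budget=12))
```

In this toy run, the 'medium' task with the highest assumed value absorbs most of the extra rollouts, which mirrors the paper's goal of concentrating compute on samples with high training value rather than spreading it uniformly.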

Why it matters?

This research is important because it shows that carefully managing computing resources during LLM training can significantly improve performance and efficiency. By focusing on the 'value' of each training example, CoBA-RL helps LLMs learn better and faster, which is crucial for making these powerful models more practical and accessible.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a key approach for enhancing LLM reasoning. However, standard frameworks like Group Relative Policy Optimization (GRPO) typically employ a uniform rollout budget, leading to resource inefficiency. Moreover, existing adaptive methods often rely on instance-level metrics, such as task pass rates, failing to capture the model's dynamic learning state. To address these limitations, we propose CoBA-RL, a reinforcement learning algorithm designed to adaptively allocate rollout budgets based on the model's evolving capability. Specifically, CoBA-RL utilizes a Capability-Oriented Value function to map tasks to their potential training gains and employs a heap-based greedy strategy to efficiently self-calibrate the distribution of computational resources to samples with high training value. Extensive experiments demonstrate that our approach effectively orchestrates the trade-off between exploration and exploitation, delivering consistent generalization improvements across multiple challenging benchmarks. These findings underscore that quantifying sample training value and optimizing budget allocation are pivotal for advancing LLM post-training efficiency.