
F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare

Daniil Plyusov, Alexey Gorbatovski, Boris Shaposhnikov, Viacheslav Sinii, Alexey Malakhov, Daniil Gavrilov

2026-02-09


Summary

This paper focuses on improving the reinforcement learning algorithms used to train large language models with verifiable rewards, where an automatic checker tells the model whether each answer is correct and that signal guides its learning.

What's the problem?

A common technique in this type of learning is to sample a group of candidate solutions for each prompt and compare them to decide which ones to reinforce. Checking large groups takes a lot of computing power, so in practice researchers use fairly small groups. The issue is that small groups often miss rare but correct solutions and instead concentrate the model's probability on the more typical answers it already produces. The paper analyzes how the chance of missing these rare-correct solutions depends on group size, finding non-monotonic behavior, and shows that the probability assigned to correct solutions that were never sampled can actually shrink even while the model's total probability of answering correctly grows.
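To make the group-size effect concrete, here is a minimal Python sketch under a simplifying assumption: each rollout in a group independently hits a given rare-correct solution with some small probability. The function name and numbers are illustrative, not the paper's derivation, and this simplified picture is monotonic in group size, whereas the paper's full analysis reports non-monotonic behavior.

```python
# A minimal sketch, not the paper's derivation: assume each of the G rollouts
# in a group independently hits a given rare-correct solution with probability
# p_rare. Then the whole group misses that solution with probability (1 - p_rare)**G.

def miss_probability(p_rare: float, group_size: int) -> float:
    """Chance that none of `group_size` sampled rollouts lands on the rare-correct solution."""
    return (1.0 - p_rare) ** group_size

# Example: a solution the policy samples 2% of the time is still absent from
# roughly 85% of groups of size 8 and 72% of groups of size 16.
for G in (8, 16, 64):
    print(f"G={G}: miss probability = {miss_probability(0.02, G):.3f}")
```

The point of the toy calculation is that, at realistic group sizes, a rare-correct solution is usually absent from the group and therefore receives no positive update.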

What's the solution?

The researchers adjust how much weight each update gets based on how often the model already solves the prompt. Inspired by a technique called 'Focal loss', they scale down updates on prompts with a high success rate, so the learning signal is spent on harder prompts and on the rarer correct solutions rather than on answers the model has already mastered. The change is a lightweight scaling coefficient that plugs directly into group-relative algorithms such as GRPO, DAPO, and CISPO, without increasing group size or adding computational cost.
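A rough sketch of how such a difficulty-aware scaling could sit on top of GRPO-style group-relative advantages is shown below. The coefficient (1 - success rate)^gamma and the function name are illustrative assumptions modeled on Focal loss, not the paper's exact formulation.

```python
import numpy as np

def focal_scaled_advantages(rewards: np.ndarray, gamma: float = 2.0, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages for one prompt, down-weighted by a focal-style coefficient.

    rewards: binary verifiable rewards (0/1) for the group of rollouts sampled for this prompt.
    """
    p_success = rewards.mean()                                  # empirical success rate of the prompt in this group
    advantages = (rewards - p_success) / (rewards.std() + eps)  # GRPO-style normalized advantage
    focal_weight = (1.0 - p_success) ** gamma                   # near 0 for prompts the policy already solves
    return focal_weight * advantages

# An almost-solved prompt (7/8 correct) now contributes a much weaker update
# than a hard prompt (1/8 correct), shifting learning toward rare-correct cases.
easy = np.array([1, 1, 1, 1, 1, 1, 1, 0], dtype=float)
hard = np.array([1, 0, 0, 0, 0, 0, 0, 0], dtype=float)
print(focal_scaled_advantages(easy))
print(focal_scaled_advantages(hard))
```

Because the coefficient depends only on the per-prompt success rate already computed for the group baseline, this kind of scaling adds essentially no overhead to existing group-relative training loops.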

Why it matters?

This method significantly improves the performance of large language models such as Qwen2.5-7B on both in-domain and out-of-domain benchmarks. It raises the chance of finding a correct answer within many attempts (pass@256) while preserving or improving the chance of getting it right on the first try (pass@1), all without requiring more computing resources. This means we can get more reliable and accurate results from these models at the same cost.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) is commonly based on group sampling to estimate advantages and stabilize policy updates. In practice, large group sizes are not feasible due to computational limits, which biases learning toward trajectories that are already likely. Smaller groups often miss rare-correct trajectories while still containing mixed rewards, concentrating probability on common solutions. We derive the probability that updates miss rare-correct modes as a function of group size, showing non-monotonic behavior, and characterize how updates redistribute mass within the correct set, revealing that unsampled-correct mass can shrink even as total correct mass grows. Motivated by this analysis, we propose a difficulty-aware advantage scaling coefficient, inspired by Focal loss, that down-weights updates on high-success prompts. The lightweight modification can be directly integrated into any group-relative RLVR algorithm such as GRPO, DAPO, and CISPO. On Qwen2.5-7B across in-domain and out-of-domain benchmarks, our method improves pass@256 from 64.1 → 70.3 (GRPO), 69.3 → 72.5 (DAPO), and 73.2 → 76.8 (CISPO), while preserving or improving pass@1, without increasing group size or computational cost.
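For reference, the two quantities the abstract points to can be written compactly as follows. This notation is an illustrative reconstruction under the same simplifying assumptions as the sketches above, not the paper's own equations.

```latex
% Probability that a group of G rollouts misses a rare-correct mode that is
% sampled with probability p (assuming independent rollouts):
P_{\mathrm{miss}}(G) = (1 - p)^{G}

% Focal-style, difficulty-aware scaling of the group-relative advantage
% \hat{A}_i for a prompt whose empirical success rate in the group is \hat{p}:
\tilde{A}_i = (1 - \hat{p})^{\gamma}\,\hat{A}_i
```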