The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward
Long Li, Jiaran Hao, Jason Klein Liu, Zhijian Zhou, Xiaoyu Tan, Wei Chu, Zhe Wang, Shirui Pan, Chao Qu, Yuan Qi
2025-09-12
Summary
This paper investigates a tricky problem that arises when you try to make powerful AI language models even better using a technique called Reinforcement Learning with Verifiable Reward (RLVR). Specifically, it looks at why improving the model's ability to get the *right* answer on a single try can make it worse at producing a correct answer at least once when it is given multiple attempts.
What's the problem?
When fine-tuning these large language models with reinforcement learning, researchers noticed a paradox: while the model gets better at giving a correct answer on the first try (a metric called Pass@1), its chance of getting at least one correct answer across many tries (Pass@k) actually *decreases*. It's like drilling a student on a single solution method until they can no longer think of any other way to attack a problem. This often happens because the model 'forgets' things it already knew – a phenomenon called catastrophic forgetting – and loses its ability to explore different solution paths. Existing methods have paid little attention to the divergence term – the penalty that keeps the model close to its original behavior – and to how the choice of that term affects this problem.
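For reference, the multi-attempt metric in question is usually computed with the standard unbiased Pass@k estimator from the code-generation literature; the sketch below shows that common form (the paper's exact evaluation protocol is an assumption here). For each problem, n answers are sampled and c of them are correct.

```latex
% Standard unbiased Pass@k estimator (n samples per problem, c correct, k <= n),
% averaged over problems. Shown for context; the paper's setup may differ in detail.
\[
\mathrm{Pass@}k \;=\; \mathbb{E}_{\text{problems}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
\]
```

Under this metric, concentrating probability on one already-reliable solution path can raise Pass@1 while lowering Pass@k, because problems the base model solved only occasionally through alternative paths stop being solved at all.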
What's the solution?
The researchers propose a new approach called Diversity-Preserving Hybrid RL (DPH-RL). The key idea is to use the divergence term itself – the penalty mentioned above – as the solution, helping the model retain its past knowledge. Instead of pushing the model to collapse onto a single 'best' answer, DPH-RL encourages it to keep covering a wide range of plausible solutions by continually referencing its original behavior. It uses mass-covering divergences, such as forward-KL and JS-divergence, which act like a 'rehearsal' mechanism that keeps reminding the model of what it already knows. Importantly, the method is training-efficient: the divergence is computed via generator functions from samples drawn from the initial policy ahead of time, so no separate reference model needs to be kept running during training.
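To make the contrast concrete, here is a minimal PyTorch-style sketch – an illustration, not the paper's implementation; the function names, sequence-level log-probabilities, and the penalty weight `beta` are assumptions – contrasting a mass-covering forward-KL penalty estimated from cached initial-policy samples with the usual mode-seeking reverse-KL penalty that needs an online reference model.

```python
# Illustrative sketch, not the paper's code: contrasting a mass-covering
# forward-KL "rehearsal" penalty with the usual mode-seeking reverse-KL penalty.
# Tensors hold sequence-level log-probabilities; names and shapes are assumptions.
import torch


def forward_kl_penalty(policy_logp_on_ref_samples: torch.Tensor,
                       ref_logp_on_ref_samples: torch.Tensor) -> torch.Tensor:
    """Monte-Carlo estimate of KL(pi_0 || pi_theta) using samples drawn from the
    initial policy pi_0. The samples and their reference log-probs can be
    generated once, offline, so no reference model is needed during training."""
    return (ref_logp_on_ref_samples - policy_logp_on_ref_samples).mean()


def reverse_kl_penalty(policy_logp_on_policy_samples: torch.Tensor,
                       ref_logp_on_policy_samples: torch.Tensor) -> torch.Tensor:
    """Monte-Carlo estimate of KL(pi_theta || pi_0) using fresh policy rollouts.
    This needs the reference model online to score every rollout, and it is
    mode-seeking: pi_theta can drop modes of pi_0 without being penalized."""
    return (policy_logp_on_policy_samples - ref_logp_on_policy_samples).mean()


def dph_style_loss(policy_loss: torch.Tensor,
                   fkl_penalty: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """Hybrid-style objective: the usual RL policy loss plus a diversity-preserving
    forward-KL rehearsal term (the weight beta is an assumption)."""
    return policy_loss + beta * fkl_penalty
```

The practical difference is where the samples come from: the forward-KL term only needs the current policy's log-probabilities on a fixed batch of initial-policy outputs, which is what makes both the 'rehearsal' reading and the efficiency claim possible.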
Why does it matter?
This work is important because it identifies a previously overlooked design choice in improving AI language models: which divergence measure is used to balance learning new skills against retaining old ones. By showing that a mass-covering divergence term can prevent forgetting while still improving single-attempt accuracy, the researchers provide a practical tool for building more reliable and versatile AI systems that can reason and solve problems more effectively.
Abstract
A central paradox in fine-tuning Large Language Models (LLMs) with Reinforcement Learning with Verifiable Reward (RLVR) is the frequent degradation of multi-attempt performance (Pass@k) despite improvements in single-attempt accuracy (Pass@1). This is often accompanied by catastrophic forgetting, where models lose previously acquired skills. While various methods have been proposed, the choice and function of the divergence term have been surprisingly unexamined as a proactive solution. We argue that standard RLVR objectives -- both those using the mode-seeking reverse KL-divergence and those forgoing a divergence term entirely -- lack a crucial mechanism for knowledge retention. The reverse-KL actively accelerates this decay by narrowing the policy, while its absence provides no safeguard against the model drifting from its diverse knowledge base. We propose a fundamental shift in perspective: using the divergence term itself as the solution. Our framework, Diversity-Preserving Hybrid RL (DPH-RL), leverages mass-covering f-divergences (like forward-KL and JS-divergence) to function as a rehearsal mechanism. By continuously referencing the initial policy, this approach forces the model to maintain broad solution coverage. Extensive experiments on math and SQL generation demonstrate that DPH-RL not only resolves the Pass@k degradation but improves both Pass@1 and Pass@k in- and out-of-domain. Additionally, DPH-RL is more training-efficient because it computes f-divergence using generator functions, requiring only sampling from the initial policy and no online reference model. Our work highlights a crucial, overlooked axis for improving RLVR, demonstrating that the proper selection of a divergence measure is a powerful tool for building more general and diverse reasoning models.
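For readers who want the math behind 'mass-covering f-divergences' and 'generator functions', the following is a standard sketch of the definitions the abstract alludes to (the notation π₀ for the initial policy and the exact form of the regularizer are assumptions; the paper's formulation may differ in detail).

```latex
% Standard f-divergence with generator f (convex, f(1) = 0), written with the
% expectation under the initial policy \pi_0 so it can be estimated from
% initial-policy samples alone; \pi_\theta is the current policy.
\[
D_f(\pi_\theta \,\|\, \pi_0)
  \;=\; \mathbb{E}_{y \sim \pi_0}\!\left[ f\!\left(\frac{\pi_\theta(y)}{\pi_0(y)}\right) \right],
  \qquad f \text{ convex},\; f(1) = 0 .
\]
% Common generators, with t = \pi_\theta(y)/\pi_0(y):
\[
\text{reverse KL (mode-seeking): } f(t) = t\log t, \qquad
\text{forward KL (mass-covering): } f(t) = -\log t,
\]
\[
\text{Jensen--Shannon (symmetric): } f(t) = \tfrac{1}{2}\!\left[t\log t - (t+1)\log\tfrac{t+1}{2}\right].
\]
```

Because the expectation is taken under π₀, the mass-covering choices can be optimized from a fixed, pre-generated batch of initial-policy samples with cached log-probabilities, which is the source of the training-efficiency claim in the abstract.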