Evolving Language Models without Labels: Majority Drives Selection, Novelty Promotes Variation

Yujun Zhou, Zhenwen Liang, Haolin Liu, Wenhao Yu, Kishan Panaganti, Linfeng Song, Dian Yu, Xiangliang Zhang, Haitao Mi, Dong Yu

2025-09-19

Summary

This paper introduces a new method, EVOL-RL, for improving large language models without needing human feedback or labeled data. It addresses a common problem where models get stuck generating repetitive and uncreative responses when trying to learn on their own.

What's the problem?

Large language models are often improved with reinforcement learning, but this usually requires human judges or verifiable labels to score the model's responses. When models try to improve *without* that feedback, they tend to become overly cautious and predictable. This leads to a 'collapse' where the model generates shorter, less diverse, and ultimately less useful answers. Existing label-free methods, such as confidence minimization or majority-vote objectives, stabilize training at first but steadily shrink exploration, limiting the model's ability to explore new possibilities and generalize to different tasks.

What's the solution?

The researchers developed EVOL-RL, which stands for EVolution-Oriented and Label-free Reinforcement Learning. It balances two key ideas: stability and variation. The model still favors answers that the group of sampled responses agrees on via majority vote (stability), but it *also* gets rewarded for generating responses whose reasoning differs from what it has already produced (variation). This 'novelty' is measured in semantic space by how different the reasoning behind an answer is, not just the final answer itself. Implemented on top of GRPO, the method also uses asymmetric clipping, which preserves strong learning signals, and an entropy regularizer, which sustains search. Essentially, EVOL-RL encourages the model to explore new ideas while staying anchored to reliable responses.
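The selection-plus-variation reward described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the `bow_vector` bag-of-words embedding stands in for the semantic embeddings the authors use, and the `novelty_weight` parameter and the way the two signals are combined are assumptions for clarity.

```python
# Hedged sketch: majority vote for selection + semantic novelty for variation.
# The embedding and the reward combination are illustrative stand-ins,
# not the paper's actual design.
from collections import Counter
import math

def bow_vector(text):
    """Toy 'semantic embedding': bag-of-words counts of the reasoning text."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def evol_rl_rewards(samples, novelty_weight=0.5):
    """samples: list of (final_answer, reasoning_text) sampled for one prompt.
    Returns one scalar reward per sample."""
    answers = [answer for answer, _ in samples]
    majority, _ = Counter(answers).most_common(1)[0]
    vecs = [bow_vector(reasoning) for _, reasoning in samples]
    rewards = []
    for i, (answer, _) in enumerate(samples):
        # Selection: anchor on the majority-voted answer.
        selection = 1.0 if answer == majority else 0.0
        # Variation: bonus for reasoning that is dissimilar from the
        # other responses in the group (1 - mean pairwise similarity).
        sims = [cosine(vecs[i], vecs[j]) for j in range(len(vecs)) if j != i]
        novelty = 1.0 - sum(sims) / len(sims) if sims else 0.0
        rewards.append(selection + novelty_weight * novelty)
    return rewards
```

The key point the sketch captures is that two responses can reach the same majority answer yet receive different rewards if their reasoning traces differ, which is what keeps the sampled group from collapsing onto one template.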

Why it matters?

This research is important because it allows large language models to continuously improve themselves without constant human intervention. The results show significant improvements in the model’s ability to answer questions correctly and generate more detailed, thoughtful responses. It also makes the model more adaptable to new situations and tasks, meaning it can generalize its knowledge better. This is a step towards creating AI systems that can learn and evolve independently, making them more powerful and versatile.

Abstract

Large language models (LLMs) are increasingly trained with reinforcement learning from verifiable rewards (RLVR), yet real-world deployment demands models that can self-improve without labels or external judges. Existing label-free methods (confidence minimization, self-consistency, or majority-vote objectives) stabilize learning but steadily shrink exploration, causing an entropy collapse: generations become shorter, less diverse, and brittle. Unlike prior approaches such as Test-Time Reinforcement Learning (TTRL), which primarily adapt models to the immediate unlabeled dataset at hand, our goal is broader: to enable general improvements without sacrificing the model's inherent exploration capacity and generalization ability, i.e., evolving. We formalize this issue and propose EVolution-Oriented and Label-free Reinforcement Learning (EVOL-RL), a simple rule that couples stability with variation under a label-free setting. EVOL-RL keeps the majority-voted answer as a stable anchor (selection) while adding a novelty-aware reward that favors responses whose reasoning differs from what has already been produced (variation), measured in semantic space. Implemented with GRPO, EVOL-RL also uses asymmetric clipping to preserve strong signals and an entropy regularizer to sustain search. This majority-for-selection + novelty-for-variation design prevents collapse, maintains longer and more informative chains of thought, and improves both pass@1 and pass@n. EVOL-RL consistently outperforms the majority-only TTRL baseline; e.g., training on label-free AIME24 lifts Qwen3-4B-Base AIME25 pass@1 from TTRL's 4.6% to 16.4%, and pass@16 from 18.5% to 37.9%. EVOL-RL not only prevents diversity collapse but also unlocks stronger generalization across domains (e.g., GPQA). Furthermore, we demonstrate that EVOL-RL also boosts performance in the RLVR setting, highlighting its broad applicability.
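The asymmetric clipping mentioned in the abstract can be illustrated with a per-token PPO/GRPO-style surrogate where the upper clip bound is wider than the lower one, so large positive-advantage updates are not truncated as aggressively. This is a hedged sketch of the general idea; the `eps_low` and `eps_high` values are illustrative, not the paper's settings.

```python
# Hedged sketch of asymmetric clipping in a PPO/GRPO-style surrogate.
# A wider upper bound (eps_high > eps_low) preserves strong positive
# learning signals; the values here are illustrative assumptions.
def asymmetric_clipped_objective(ratio, advantage, eps_low=0.2, eps_high=0.4):
    """Per-token surrogate objective (to be maximized).

    ratio: pi_theta(token) / pi_old(token), the importance ratio
    advantage: group-normalized advantage, as in GRPO
    """
    clipped_ratio = min(max(ratio, 1.0 - eps_low), 1.0 + eps_high)
    # Pessimistic (min) of the unclipped and clipped surrogates, as in PPO.
    return min(ratio * advantage, clipped_ratio * advantage)
```

With symmetric clipping (`eps_high = eps_low = 0.2`), a token whose ratio has grown to 2.0 under a positive advantage would be capped at a 1.2 surrogate; widening only the upper bound lets more of that signal through without loosening the penalty side.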