IIB-LPO: Latent Policy Optimization via Iterative Information Bottleneck

Huilin Deng, Hongchen Luo, Yue Zhu, Long Li, Zhuoyue Chen, Xinghao Zhao, Ming Li, Jihai Zhang, Mengchang Wang, Yang Cao, Yu Kang

2026-01-12

Summary

This paper focuses on improving how Large Language Models (LLMs) learn to reason using a technique called Reinforcement Learning with Verifiable Rewards, but tackles a specific issue that's been holding it back.

What's the problem?

When LLMs are trained to reason through trial and error, they often get stuck exploring only very similar solutions. Imagine trying to find the best route on a map, but only ever trying slightly different versions of the same route – you might miss a much better, but different, path. This happens because the LLM's sampled reasoning attempts are too alike, leading to a 'collapse' in exploration that prevents it from finding truly optimal solutions. Previous fixes, such as encouraging randomness via entropy bonuses, either make the model verbose without being informative, or fail to overcome the strong habits the model picked up during pre-training.

What's the solution?

The researchers developed a new method called Latent Policy Optimization via Iterative Information Bottleneck (IIB-LPO). Instead of just randomly changing the words the model uses, IIB-LPO focuses on making the *way* the model thinks more diverse. It identifies points in the reasoning process where the model is uncertain and then deliberately creates different branches in its thought process from there. It also uses a principle called the 'Information Bottleneck' to make sure these explorations are both concise and actually provide new information, preventing the model from rambling. Essentially, it guides the model to explore different reasoning *strategies* rather than just different word choices.
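The two core ideas described above – branching where the model is uncertain, and scoring explorations for conciseness and informativeness – can be illustrated with a small toy sketch. This is not the paper's implementation; the entropy threshold, the `beta` trade-off weight, and the scalar `informativeness` input are all illustrative placeholders:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def find_branch_points(step_distributions, threshold=1.0):
    """Return indices of reasoning steps whose next-token distribution
    is high-entropy (uncertain) -- candidate points for branching the
    reasoning trajectory, as IIB-LPO does with latent branching."""
    return [i for i, probs in enumerate(step_distributions)
            if token_entropy(probs) > threshold]

def ib_style_score(informativeness, length, beta=0.1):
    """Information-Bottleneck-flavored trajectory score: reward new
    information, penalize verbosity. A stand-in for the paper's
    trajectory filter / self-reward."""
    return informativeness - beta * length

# Toy example: three reasoning steps; only the second is uncertain.
dists = [
    [0.95, 0.03, 0.02],  # confident step
    [0.40, 0.35, 0.25],  # high entropy -> branch here
    [0.90, 0.05, 0.05],  # confident step
]
print(find_branch_points(dists, threshold=1.0))  # -> [1]
```

A shorter trajectory that conveys the same information scores higher under `ib_style_score`, which is the intuition behind using the Information Bottleneck to keep exploration concise.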

Why does it matter?

This research is important because it significantly improves the reasoning abilities of LLMs. By overcoming exploration collapse, the model can find more accurate and more varied solutions to complex problems, particularly in mathematical reasoning. The new method outperforms previous techniques in both accuracy and solution diversity, suggesting LLMs can become more reliable and creative problem-solvers.

Abstract

Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) for Large Language Model (LLM) reasoning have been hindered by a persistent challenge: exploration collapse. The semantic homogeneity of random rollouts often traps models in narrow, over-optimized behaviors. While existing methods leverage policy entropy to encourage exploration, they face inherent limitations. Global entropy regularization is susceptible to reward hacking, which can induce meaningless verbosity, whereas local token-selective updates struggle with the strong inductive bias of pre-trained models. To address this, we propose Latent Policy Optimization via Iterative Information Bottleneck (IIB-LPO), a novel approach that shifts exploration from statistical perturbation of token distributions to topological branching of reasoning trajectories. IIB-LPO triggers latent branching at high-entropy states to diversify reasoning paths and employs the Information Bottleneck principle both as a trajectory filter and a self-reward mechanism, ensuring concise and informative exploration. Empirical results across four mathematical reasoning benchmarks demonstrate that IIB-LPO achieves state-of-the-art performance, surpassing prior methods by margins of up to 5.3% in accuracy and 7.4% in diversity metrics.