Agentic Entropy-Balanced Policy Optimization
Guanting Dong, Licheng Bao, Zhongyuan Wang, Kangzhi Zhao, Xiaoxi Li, Jiajie Jin, Jinghan Yang, Hangyu Mao, Fuzheng Zhang, Kun Gai, Guorui Zhou, Yutao Zhu, Ji-Rong Wen, Zhicheng Dou
2025-10-17
Summary
This paper focuses on improving how computer programs, specifically 'web agents,' learn to use tools online to complete complex tasks. These agents use a technique called 'Reinforcement Learning,' where they learn by trial and error, and the paper addresses a key issue that can hinder their learning process.
What's the problem?
When teaching these web agents, a common method encourages them to explore uncertain options. However, relying too heavily on this uncertainty signal, known as entropy, can actually cause the agent to get stuck and fail to learn effectively, eventually causing training to collapse. Essentially, the agent gets lost exploring too many possibilities and can't settle on a good strategy.
What's the solution?
The researchers developed a new learning algorithm called Agentic Entropy-Balanced Policy Optimization, or AEPO. This algorithm tackles the problem in two main ways: first, it carefully manages how the agent explores different options, making sure it doesn't get stuck branching out into too many uncertain paths. Second, it adjusts how the agent updates its strategy based on those uncertain choices, ensuring that valuable learning isn't lost and that the agent focuses on the most important areas for improvement.
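The first idea, managing how much the agent branches at uncertain steps, can be illustrated with a small sketch. This is not the paper's actual formulation; the function name, the entropy threshold, and the penalty factor are all assumptions chosen for illustration. The sketch distributes a fixed branching budget across rollout steps in proportion to each step's entropy, while damping runs of consecutive high-entropy steps so exploration does not over-branch:

```python
def allocate_branches(step_entropies, budget, threshold=1.0, penalty=0.5):
    """Assign branch samples to high-entropy tool-call steps, penalizing
    consecutive high-entropy steps to curb over-branching.
    Illustrative sketch only; not AEPO's exact mechanism."""
    scores = []
    prev_high = False
    for h in step_entropies:
        score = h
        # Damp a high-entropy step that immediately follows another one,
        # so the rollout does not keep splitting along one uncertain run.
        if h > threshold and prev_high:
            score *= penalty
        scores.append(score)
        prev_high = h > threshold
    total = sum(scores)
    if total == 0:
        return [0] * len(scores)
    # Proportional allocation of the global branch budget
    # (rounding means the allocations need not sum exactly to the budget).
    return [round(budget * s / total) for s in scores]
```

For example, with two consecutive high-entropy steps followed by a low-entropy one, `allocate_branches([2.0, 2.0, 0.1], budget=10)` gives the second step only half the score of the first, so most of the budget goes to the first branching point.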
Why it matters?
This work is important because it significantly improves the ability of web agents to learn and perform complex tasks online. By addressing the issue of over-reliance on uncertainty, AEPO allows agents to learn more efficiently and achieve better results on challenging tasks, as demonstrated by strong performance on several benchmark datasets. This means we can build more capable and reliable AI systems that can help us with a wider range of online activities.
Abstract
Recently, Agentic Reinforcement Learning (Agentic RL) has made significant progress in incentivizing the multi-turn, long-horizon tool-use capabilities of web agents. While mainstream agentic RL algorithms autonomously explore high-uncertainty tool-call steps under the guidance of entropy, excessive reliance on entropy signals can impose further constraints, leading to training collapse. In this paper, we delve into the challenges caused by entropy and propose Agentic Entropy-Balanced Policy Optimization (AEPO), an agentic RL algorithm designed to balance entropy in both the rollout and policy update phases. AEPO comprises two core components: (1) a dynamic entropy-balanced rollout mechanism that adaptively allocates the global and branch sampling budget through entropy pre-monitoring, while imposing a branch penalty on consecutive high-entropy tool-call steps to prevent over-branching; and (2) Entropy-Balanced Policy Optimization, which inserts a stop-gradient operation into the high-entropy clipping term to preserve and properly rescale gradients on high-entropy tokens, while incorporating entropy-aware advantage estimation to prioritize learning on high-uncertainty tokens. Results across 14 challenging datasets show that AEPO consistently outperforms 7 mainstream RL algorithms. With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity's Last Exam, and 43.0% on WebWalker for Pass@1; 65.0% on GAIA, 26.0% on Humanity's Last Exam, and 70.0% on WebWalker for Pass@5. Further analysis reveals that AEPO improves rollout sampling diversity while maintaining stable policy entropy, facilitating scalable web agent training.
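The second component's entropy-aware advantage estimation can be sketched as follows. This is an assumption-laden illustration, not AEPO's published rule: the function name and the specific weighting form, 1 + alpha times the normalized token entropy, are invented here to show the general idea of upweighting high-uncertainty tokens during the policy update:

```python
def entropy_aware_advantage(advantages, entropies, alpha=0.5):
    """Rescale per-token advantages so that high-entropy (high-uncertainty)
    tokens receive larger weight in the policy update.
    Illustrative sketch only; the weighting form is an assumption."""
    max_h = max(entropies)
    if max_h == 0:
        # No uncertainty signal: leave advantages unchanged.
        return list(advantages)
    # Scale each advantage by 1 + alpha * (entropy normalized to [0, 1]),
    # so the most uncertain token gets up to (1 + alpha) times its advantage.
    return [a * (1.0 + alpha * h / max_h) for a, h in zip(advantages, entropies)]
```

Under this sketch, a token whose entropy equals the batch maximum has its advantage scaled by 1 + alpha, while a zero-entropy token keeps its original advantage, steering learning toward the uncertain tool-call decisions the abstract highlights.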