ProAct: Agentic Lookahead in Interactive Environments
Yangbin Yu, Mingyu Yang, Junyou Li, Yiming Gao, Feiyu Liu, Yijun Yang, Zichuan Lin, Jiafei Lyu, Yicheng Liu, Zhicong Lu, Deheng Ye, Jie Jiang
2026-02-06
Summary
This paper introduces ProAct, a two-stage training framework that teaches AI agents powered by Large Language Models (LLMs) to plan further ahead, and more reliably, in complex interactive environments.
What's the problem?
Current LLM-based AI agents often struggle when they need to think several steps ahead, because even small errors in predicting what will happen next compound over time and lead to bad decisions. Imagine planning a route across town while repeatedly misjudging where each turn leads: you'll likely end up lost. The deeper issue is that accurately simulating future possibilities is computationally expensive and prone to error.
What's the solution?
ProAct tackles this problem in two main ways. First, a technique called Grounded LookAhead Distillation (GLAD) teaches the agent to reason about the future by fine-tuning it on planning traces derived from actual search in the environment; it's like a student learning from a teacher's worked-out solutions. Second, a Monte-Carlo Critic (MC-Critic) helps the agent evaluate its decisions more accurately by quickly testing them with short rollouts in the environment. This gives the agent a more reliable learning signal without requiring it to build a complex internal model of the world.
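To make the first idea concrete, here is a rough sketch of how GLAD-style training data could be produced: a depth-limited search in the real environment finds a good action sequence, and that branch is compressed into a short reasoning chain used as a supervised fine-tuning target. The environment interface (legal_actions, step, score, describe) and the exhaustive depth-limited search are assumptions made for this example, not the paper's actual implementation.

```python
# Hypothetical sketch of GLAD-style data generation (not the paper's code).
# Assumed environment interface: legal_actions(), step(a) -> (reward, done),
# score(), describe(); all names here are illustrative.
import copy
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SFTExample:
    prompt: str    # the observation shown to the agent
    response: str  # distilled lookahead reasoning ending in the chosen action


def search_best_branch(env, depth: int) -> Tuple[List[object], float]:
    """Depth-limited lookahead in the real environment; returns the best
    action sequence and its cumulative reward plus a terminal score estimate."""
    if depth == 0:
        return [], env.score()
    best_actions, best_value = [], float("-inf")
    for action in env.legal_actions():
        child = copy.deepcopy(env)          # simulate with the true dynamics
        reward, done = child.step(action)
        tail, value = ([], 0.0) if done else search_best_branch(child, depth - 1)
        if reward + value > best_value:
            best_actions, best_value = [action] + tail, reward + value
    if not best_actions:                    # no legal action: fall back to the state score
        return [], env.score()
    return best_actions, best_value


def make_glad_example(env, depth: int = 3) -> SFTExample:
    """Compress the best search branch into a concise, causal reasoning chain.
    Assumes the current state has at least one legal action."""
    actions, value = search_best_branch(copy.deepcopy(env), depth)
    sim, steps = copy.deepcopy(env), []
    for i, action in enumerate(actions):
        sim.step(action)
        steps.append(f"Step {i + 1}: take {action}, reaching {sim.describe()}.")
    reasoning = " ".join(steps) + f" This branch is worth {value:.1f}, so I choose {actions[0]}."
    return SFTExample(prompt=env.describe(), response=reasoning)
```

Because the search runs offline at data-generation time, the agent pays for lookahead only once, during training, rather than at every inference step.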
Why it matters?
This research is important because it allows LLM agents to perform much better on tasks that require complex planning, like solving puzzles or navigating challenging environments. A relatively small 4-billion-parameter model trained with ProAct can now compete with much larger, closed-source AI systems, and it adapts well to new situations. This means we're getting closer to AI agents that can reliably handle real-world problems requiring foresight and strategic thinking.
Abstract
Existing Large Language Model (LLM) agents struggle in interactive environments requiring long-horizon planning, primarily due to compounding errors when simulating future states. To address this, we propose ProAct, a framework that enables agents to internalize accurate lookahead reasoning through a two-stage training paradigm. First, we introduce Grounded LookAhead Distillation (GLAD), where the agent undergoes supervised fine-tuning on trajectories derived from environment-based search. These trajectories compress complex search trees into concise, causal reasoning chains, so the agent learns the logic of foresight without the computational overhead of inference-time search. Second, to further refine decision accuracy, we propose the Monte-Carlo Critic (MC-Critic), a plug-and-play auxiliary value estimator designed to enhance policy-gradient algorithms like PPO and GRPO. By leveraging lightweight environment rollouts to calibrate value estimates, MC-Critic provides a low-variance signal that facilitates stable policy optimization without relying on expensive model-based value approximation. Experiments on both stochastic (e.g., 2048) and deterministic (e.g., Sokoban) environments demonstrate that ProAct significantly improves planning accuracy. Notably, a 4B-parameter model trained with ProAct outperforms all open-source baselines and rivals state-of-the-art closed-source models, while demonstrating robust generalization to unseen environments. The code and models are available at https://github.com/GreatX3/ProAct.
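For intuition, here is a minimal sketch of the kind of Monte-Carlo value estimate the MC-Critic relies on: the value of a state is approximated by averaging discounted returns from a few short rollouts in the environment, and that estimate can then stand in for a learned critic when computing advantages. The interface (step returning a reward and a done flag), the rollout policy, and the one-step advantage below are illustrative assumptions; the paper's estimator and its integration with PPO or GRPO may differ.

```python
# Minimal sketch of an MC-Critic-style value estimate (illustrative only).
# Assumed environment interface: deepcopy-able, step(a) -> (reward, done);
# rollout_policy maps a simulated env to an action (e.g., the current LLM
# policy or a cheap heuristic). n_rollouts, horizon, and gamma are guesses.
import copy
import statistics


def mc_value(env, rollout_policy, n_rollouts: int = 8,
             horizon: int = 20, gamma: float = 0.99) -> float:
    """Estimate V(s) as the mean discounted return of short rollouts from s."""
    returns = []
    for _ in range(n_rollouts):
        sim = copy.deepcopy(env)            # lightweight rollout in the real environment
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            reward, done = sim.step(rollout_policy(sim))
            total += discount * reward
            discount *= gamma
            if done:
                break
        returns.append(total)
    return statistics.mean(returns)


def calibrated_advantage(env, action, rollout_policy, gamma: float = 0.99) -> float:
    """One-step advantage A(s, a) using Monte-Carlo values in place of a learned critic."""
    v_s = mc_value(env, rollout_policy)
    after = copy.deepcopy(env)
    reward, done = after.step(action)
    v_next = 0.0 if done else mc_value(after, rollout_policy)
    return reward + gamma * v_next - v_s
```

The appeal of this style of estimate is that rollouts in the true environment avoid the compounding error of a learned world model, at the cost of a few extra environment interactions per update.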