
Agentic Policy Optimization via Instruction-Policy Co-Evolution

Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen

2025-12-02

Summary

This paper introduces a new way to train AI agents, specifically large language models, to handle complex tasks that require multiple steps and the use of different tools. It focuses on how the instructions given to these agents can be improved automatically during the learning process.

What's the problem?

Currently, AI agents are given fixed instructions that humans design. These instructions might not be the best possible for the AI, and they certainly won't stay optimal as the AI learns and gets better at the task. It's like handing someone a set of directions and expecting them to follow it unchanged even as they gain experience and discover better routes on their own. The instructions need to adapt alongside the AI's growing abilities.

What's the solution?

The researchers created a system called INSPO that continually evolves both the AI's strategy (its 'policy') and the instructions it receives. INSPO maintains a pool of candidate instructions and tests them during training. It tracks which instructions lead to good results and keeps those, while periodically discarding the ones that perform poorly. It then uses the AI itself to propose new instructions based on what it has learned from past experiences, essentially having the AI teach itself how to be instructed more effectively.
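The loop described above — sample an instruction with each question, attribute the rollout's reward back to that instruction, periodically prune the worst performer, and let a reflection step propose a replacement — can be sketched as a toy simulation. Everything here is illustrative: the instruction strings, `rollout_reward`, and `reflect` are invented stand-ins for the agent rollout and the LLM-based optimizer, not the paper's actual components.

```python
import random

def rollout_reward(instruction, rng):
    """Toy stand-in for a multi-turn agent rollout: reward depends on the
    instruction's (hidden) quality plus a little noise."""
    base = {"terse": 0.3, "step-by-step": 0.7, "cite-sources": 0.5}
    quality = base.get(instruction.split("+")[0], 0.4)
    quality += 0.1 * instruction.count("+refined")  # refined variants do better
    return min(1.0, quality + rng.uniform(-0.05, 0.05))

def reflect(replay_buffer):
    """Toy stand-in for the LLM-based optimizer: evolve a new candidate
    from the highest-reward experience in the replay buffer."""
    best_instruction, _ = max(replay_buffer, key=lambda e: e[1])
    return best_instruction + "+refined"

def inspo_loop(steps=30, prune_every=10, seed=0):
    rng = random.Random(seed)
    # Population maps each candidate instruction to its observed rewards.
    population = {"terse": [], "step-by-step": [], "cite-sources": []}
    replay_buffer = []
    for step in range(1, steps + 1):
        # Sample an instruction with the question; attribute the reward to it.
        instruction = rng.choice(list(population))
        r = rollout_reward(instruction, rng)
        population[instruction].append(r)
        replay_buffer.append((instruction, r))
        if step % prune_every == 0:
            # Prune the lowest-scoring instruction that has been tried...
            scored = {i: sum(rs) / len(rs) for i, rs in population.items() if rs}
            del population[min(scored, key=scored.get)]
            # ...and add a fresh candidate via on-policy reflection.
            population.setdefault(reflect(replay_buffer), [])
    scored = {i: sum(rs) / len(rs) for i, rs in population.items() if rs}
    return max(scored, key=scored.get)  # best surviving instruction
```

Because reward attribution is per-instruction, the pool drifts toward instructions the current policy can exploit, which is the co-evolution the paper describes; the real system replaces these stand-ins with RL rollouts and an LLM optimizer reflecting on the replay buffer.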

Why it matters?

This work is important because it allows AI agents to become much more capable and efficient. By automatically improving the instructions, the AI can discover better ways to solve problems and achieve higher performance. It means we don't have to rely solely on humans to craft the perfect instructions, and the AI can continuously refine its approach to reasoning and problem-solving.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has advanced the reasoning capability of large language models (LLMs), enabling autonomous agents that can conduct effective multi-turn and tool-integrated reasoning. While instructions serve as the primary protocol for defining agents, RLVR typically relies on static and manually designed instructions. However, those instructions may be suboptimal for the base model, and the optimal instruction may change as the agent's policy improves and explores the interaction with the environment. To bridge the gap, we introduce INSPO, a novel Instruction-Policy co-evolution framework that integrates instruction optimization as a dynamic component of the reinforcement learning (RL) loop. INSPO maintains a dynamic population of instruction candidates that are sampled with questions, where reward signals in RL loops are automatically attributed to each instruction, and low performers are periodically pruned. New instructions are generated and verified through an on-policy reflection mechanism, where an LLM-based optimizer analyzes past experience from a replay buffer and evolves more effective strategies given the current policy. We conduct extensive experiments on multi-turn retrieval and reasoning tasks, demonstrating that INSPO substantially outperforms strong baselines relying on static instructions. INSPO discovers innovative instructions that guide the agent toward more strategic reasoning paths, achieving substantial performance gains with only a marginal increase in computational overhead.