
SCOPE: Prompt Evolution for Enhancing Agent Effectiveness

Zehua Pei, Hui-Ling Zhen, Shixiong Kai, Sinno Jialin Pan, Yunhe Wang, Mingxuan Yuan, Bei Yu

2025-12-18


Summary

This paper introduces a new method called SCOPE that helps AI agents, specifically those powered by large language models, better understand and use the information they're given to complete tasks.

What's the problem?

AI agents are getting better at handling lots of information, but they struggle when that information is constantly changing or very complex. The instructions given to these agents, called prompts, are usually fixed and can't adapt to the situation, leading to errors and requiring constant manual adjustment. Essentially, the agents have the data but can't effectively *use* it because their instructions aren't flexible enough.

What's the solution?

SCOPE tackles this by treating context management like a continuous improvement process. It analyzes how the agent performs, learns from its mistakes, and automatically rewrites the agent's instructions to be more effective. It does this in two ways: making quick fixes for immediate problems and developing broader strategies for long-term success. It also explores different approaches to ensure it finds the best strategy for any given task, kind of like trying out different study methods to see what works best for a test.
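The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`run_agent`, `tactical_fix`, `strategic_update`) are hypothetical stand-ins for LLM calls that SCOPE would make to analyze execution traces and synthesize guidelines.

```python
# Hedged sketch of a Dual-Stream prompt-evolution loop (names illustrative;
# in SCOPE these steps would be performed by an LLM over execution traces).

def run_agent(prompt: str, task: str) -> dict:
    """Stand-in for an agent run; returns a trace with a success flag."""
    return {"task": task, "success": "deadline" in prompt,
            "error": "missed deadline"}

def tactical_fix(trace: dict) -> str:
    """Tactical stream: a specific guideline resolving the observed error."""
    return f"Guideline: avoid '{trace['error']}' by checking the deadline first."

def strategic_update(fixes: list) -> str:
    """Strategic stream: distill accumulated fixes into a general principle."""
    return f"Principle: verify constraints before acting ({len(fixes)} cases seen)."

def evolve_prompt(prompt: str, tasks: list) -> str:
    fixes = []
    for task in tasks:
        trace = run_agent(prompt, task)
        if not trace["success"]:
            fix = tactical_fix(trace)          # quick fix for this error
            fixes.append(fix)
            prompt += "\n" + fix
    if fixes:
        prompt += "\n" + strategic_update(fixes)  # long-term generalization
    return prompt

evolved = evolve_prompt("You are a scheduling agent.", ["book a meeting"])
```

The key design idea is that the prompt itself is the thing being optimized: specific corrections accumulate quickly, and a slower stream compresses them into durable principles.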

Why does it matter?

This research is important because it significantly improves the reliability of AI agents without needing humans to constantly intervene. The experiments showed a big jump in success rates, meaning these agents can handle more complex tasks and be more helpful in real-world situations where information is always changing. This moves us closer to AI systems that can truly adapt and learn on their own.

Abstract

Large Language Model (LLM) agents are increasingly deployed in environments that generate massive, dynamic contexts. However, a critical bottleneck remains: while agents have access to this context, their static prompts lack the mechanisms to manage it effectively, leading to recurring Corrective and Enhancement failures. To address this capability gap, we introduce SCOPE (Self-evolving Context Optimization via Prompt Evolution). SCOPE frames context management as an online optimization problem, synthesizing guidelines from execution traces to automatically evolve the agent's prompt. We propose a Dual-Stream mechanism that balances tactical specificity (resolving immediate errors) with strategic generality (evolving long-term principles). Furthermore, we introduce Perspective-Driven Exploration to maximize strategy coverage, increasing the likelihood that the agent has the correct strategy for any given task. Experiments on the HLE benchmark show that SCOPE improves task success rates from 14.23% to 38.64% without human intervention. We make our code publicly available at https://github.com/JarvisPei/SCOPE.
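Perspective-Driven Exploration can be pictured as drafting candidate guidelines from several distinct viewpoints and keeping whichever scores best. The sketch below is purely illustrative: the perspectives, `draft_guideline`, and the scoring rule are assumptions standing in for LLM-generated candidates and benchmark evaluation.

```python
# Hedged sketch of Perspective-Driven Exploration (illustrative only):
# candidates are drafted from different perspectives, then the one that
# scores best on sample tasks is kept, widening strategy coverage.

PERSPECTIVES = ["efficiency", "robustness", "verification"]

def draft_guideline(perspective: str) -> str:
    # Stand-in for an LLM call that writes a guideline from one viewpoint.
    return f"Prioritize {perspective} when managing task context."

def score(guideline: str, tasks: list) -> float:
    # Stand-in evaluation: fraction of task keywords the guideline covers.
    return sum(t in guideline for t in tasks) / len(tasks)

def explore(tasks: list) -> str:
    candidates = [draft_guideline(p) for p in PERSPECTIVES]
    return max(candidates, key=lambda g: score(g, tasks))

best = explore(["robustness", "speed"])
```

Exploring from multiple perspectives matters because a single evolution path can converge on one narrow strategy; sampling diverse candidates raises the chance that some evolved guideline fits the task at hand.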