Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

Qizheng Zhang, Changran Hu, Shubhangi Upasani, Boyuan Ma, Fenglu Hong, Vamsidhar Kamanuru, Jay Rainton, Chen Wu, Mengmeng Ji, Hanchen Li, Urmish Thakker, James Zou, Kunle Olukotun

2025-10-07

Summary

This paper introduces a new method called ACE (Agentic Context Engineering) for improving how large language models (LLMs) use information to solve problems. Instead of constantly retraining the model itself, ACE focuses on cleverly organizing and updating the information *given* to the model, like instructions or background knowledge.

What's the problem?

LLMs are getting better at tasks, but adapting them by repeatedly editing their context runs into two main issues: 'brevity bias', where the rewriting process favors short, polished summaries and drops important domain details, and 'context collapse', where iterative rewriting gradually erodes the accumulated information, actually making the model *less* accurate over time. Essentially, useful details get squeezed out or overwritten as the context changes.
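
To make the failure mode concrete, here is a minimal sketch (not the paper's code) of the kind of naive adaptation loop the authors critique, in which the entire context is rewritten after every task. All names here (`run_task`, `naive_adaptation`) are hypothetical placeholders:

```python
from typing import Callable, Tuple

# Any text-in, text-out model will do for this sketch.
LLM = Callable[[str], str]

def run_task(llm: LLM, context: str, task: str) -> Tuple[str, str]:
    """Placeholder: attempt one task and return (answer, execution feedback).
    In practice, feedback might be test results or API error traces."""
    answer = llm(f"{context}\n\nTask: {task}")
    feedback = f"Observed outcome for task: {task}"
    return answer, feedback

def naive_adaptation(llm: LLM, context: str, tasks: list[str]) -> str:
    for task in tasks:
        _, feedback = run_task(llm, context, task)
        # Monolithic rewrite: the old context is replaced wholesale each step.
        # Any detail the rewriter omits is lost for good (context collapse),
        # and a rewriter that favors concise summaries sheds domain insights
        # over time (brevity bias).
        context = llm(
            "Rewrite this context to incorporate the feedback:\n"
            f"{context}\n\nFeedback:\n{feedback}"
        )
    return context
```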

What's the solution?

ACE tackles these problems by treating the information given to the LLM as a constantly evolving 'playbook'. Rather than rewriting the context wholesale, it splits adaptation into three roles (generation, reflection, and curation) and applies structured, incremental updates: refining existing strategies and adding new ones without discarding what is already there. This prevents the loss of important details and lets the approach scale to much larger contexts. The system learns what works by observing how the model performs on real tasks, without needing someone to label correct answers; a rough sketch of the idea follows below.
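
As a rough illustration, here is a minimal sketch, assuming a simple bullet-point playbook and the three roles named in the abstract. Class and function names (`Playbook`, `ace_step`) are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import Callable

LLM = Callable[[str], str]  # any text-in, text-out model

@dataclass
class Playbook:
    """A context kept as discrete bullets so updates stay localized."""
    bullets: dict[int, str] = field(default_factory=dict)
    _next_id: int = 0

    def add(self, strategy: str) -> int:
        # Incremental update: append a new strategy; nothing else is touched.
        bullet_id = self._next_id
        self.bullets[bullet_id] = strategy
        self._next_id += 1
        return bullet_id

    def refine(self, bullet_id: int, revised: str) -> None:
        # Incremental update: revise one bullet; the rest are preserved.
        self.bullets[bullet_id] = revised

    def render(self) -> str:
        return "\n".join(f"- {s}" for s in self.bullets.values())

def ace_step(llm: LLM, playbook: Playbook, task: str) -> None:
    # Generation: attempt the task with the current playbook as context.
    attempt = llm(f"{playbook.render()}\n\nTask: {task}")
    # Reflection: distill the attempt into a reusable lesson. No labeled
    # answer is needed; natural execution feedback is enough.
    lesson = llm(f"From the attempt below, state one reusable strategy:\n{attempt}")
    # Curation: merge the lesson as a small delta, never a full rewrite,
    # so existing knowledge is never silently dropped.
    playbook.add(lesson)
```

Because each update is a localized delta rather than a rewrite of the whole context, the collapse shown in the earlier sketch cannot happen: old bullets survive unless they are explicitly refined.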

Why it matters?

This research is important because it shows we can make LLMs much more powerful and efficient without spending huge amounts of time and money retraining them. By focusing on how we *present* information to the model, ACE allows it to perform better on complex tasks, like acting as an agent or analyzing financial data, and even compete with top-performing, commercially developed AI systems, all while using a smaller, open-source model.

Abstract

Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE could adapt effectively without labeled supervision and instead by leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.