MemEvolve: Meta-Evolution of Agent Memory Systems
Guibin Zhang, Haotian Ren, Chong Zhan, Zhenhong Zhou, Junhao Wang, He Zhu, Wangchunshu Zhou, Shuicheng Yan
2025-12-24
Summary
This paper introduces a new way to build AI agents that can learn and improve not just *what* they know, but also *how* they remember things. It focuses on making the memory system itself adaptable, rather than just the information stored within it.
What's the problem?
Current AI agents that learn through memory typically rely on a memory system designed by hand. This is a limitation because no single memory design is ideal for every task. While the agent can accumulate and evolve its knowledge, the way it encodes, stores, and retrieves that knowledge stays fixed, capping its overall potential. It's like trying to learn everything with only one kind of notebook: sometimes you need a calendar, sometimes a whiteboard, and sometimes a detailed journal.
What's the solution?
The researchers developed a framework called MemEvolve that lets both the agent's knowledge *and* its memory architecture evolve together. Essentially, the system experiments with different ways of encoding, storing, retrieving, and managing information, searching for the memory setup that best fits the task at hand. They also built EvolveLab, a shared codebase that reimplements twelve existing memory systems within a common modular design space (encode, store, retrieve, manage), making it easier for other researchers to build on this work and compare results fairly.
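To make that modular design space concrete, here is a minimal sketch, assuming a simple Python interface; the class and function names (`MemoryModule`, `ListStore`, `keyword_retrieve`, and so on) are illustrative stand-ins, not the actual EvolveLab API.

```python
# Minimal sketch of a modular memory interface; all names here are
# illustrative assumptions, not the actual EvolveLab API.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class MemoryRecord:
    """One stored unit of experience, e.g. a trajectory or a distilled lesson."""
    key: str
    content: Any
    score: float = 0.0


class ListStore:
    """Simplest possible backing store: an in-memory list of records."""

    def __init__(self) -> None:
        self.records: List[MemoryRecord] = []

    def add(self, record: MemoryRecord) -> None:
        self.records.append(record)


def keyword_retrieve(store: ListStore, query: str, k: int) -> List[MemoryRecord]:
    """Toy retriever: rank records by keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        store.records,
        key=lambda r: len(terms & set(str(r.content).lower().split())),
        reverse=True,
    )
    return ranked[:k]


class MemoryModule:
    """A memory system assembled from four exchangeable operators."""

    def __init__(self, encoder: Callable, store: ListStore,
                 retriever: Callable, manager: Callable) -> None:
        self.encoder = encoder      # encode: raw trajectory -> MemoryRecord(s)
        self.store = store          # store: where records are persisted
        self.retriever = retriever  # retrieve: query -> relevant records
        self.manager = manager      # manage: prune / merge / re-score records

    def write(self, trajectory: Any) -> None:
        for record in self.encoder(trajectory):
            self.store.add(record)

    def read(self, query: str, k: int = 5) -> List[MemoryRecord]:
        return self.retriever(self.store, query, k)

    def maintain(self) -> None:
        self.manager(self.store)


# Example wiring: one concrete memory system out of many possible combinations.
memory = MemoryModule(
    encoder=lambda traj: [MemoryRecord(key=traj["task"], content=traj["lesson"])],
    store=ListStore(),
    retriever=keyword_retrieve,
    manager=lambda store: None,  # no-op management
)
memory.write({"task": "web_search", "lesson": "verify sources before citing"})
print(memory.read("how to verify web sources", k=1))
```

Because each of the four slots can be swapped independently, different choices yield different memory systems, which is exactly the kind of design space a meta-level search can explore.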
Why does it matter?
This research matters because it moves us closer to AI agents that are genuinely adaptable and can handle a wider range of challenges. By letting the memory system itself evolve, agents become more efficient and effective, and the memory designs they discover can transfer to new benchmarks and even to different backbone models. It's a step towards more intelligent and versatile AI.
Abstract
Self-evolving memory systems are unprecedentedly reshaping the evolutionary paradigm of large language model (LLM)-based agents. Prior work has predominantly relied on manually engineered memory architectures to store trajectories, distill experience, and synthesize reusable tools, enabling agents to evolve on the fly within environment interactions. However, this paradigm is fundamentally constrained by the staticity of the memory system itself: while memory facilitates agent-level evolution, the underlying memory architecture cannot be meta-adapted to diverse task contexts. To address this gap, we propose MemEvolve, a meta-evolutionary framework that jointly evolves agents' experiential knowledge and their memory architecture, allowing agent systems not only to accumulate experience but also to progressively refine how they learn from it. To ground MemEvolve in prior research and foster openness in future self-evolving systems, we introduce EvolveLab, a unified self-evolving memory codebase that distills twelve representative memory systems into a modular design space (encode, store, retrieve, manage), providing both a standardized implementation substrate and a fair experimental arena. Extensive evaluations on four challenging agentic benchmarks demonstrate that MemEvolve achieves (I) substantial performance gains, improving frameworks such as SmolAgent and Flash-Searcher by up to 17.06%; and (II) strong cross-task and cross-LLM generalization, designing memory architectures that transfer effectively across diverse benchmarks and backbone models.
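To picture the abstract's core loop, jointly evolving what the agent remembers (inner loop) and how it remembers (outer loop), here is a hedged sketch of a generic meta-evolution search over an assumed encode/store/retrieve/manage design space. `DESIGN_SPACE`, `propose_variants`, and `run_agent` are placeholders; the paper's actual meta-evolutionary operators and evaluation pipeline are not reproduced here.

```python
# Generic meta-evolution sketch under assumed names; NOT the MemEvolve algorithm.
import random
from typing import Dict, List, Tuple

# Candidate operators for each slot of the design space (illustrative placeholders).
DESIGN_SPACE: Dict[str, List[str]] = {
    "encode":   ["raw_trajectory", "distilled_lesson", "tool_synthesis"],
    "store":    ["flat_list", "vector_index", "hierarchical"],
    "retrieve": ["keyword", "embedding_topk", "recency_weighted"],
    "manage":   ["no_op", "prune_low_score", "merge_duplicates"],
}

Architecture = Dict[str, str]


def propose_variants(parent: Architecture, n: int = 4) -> List[Architecture]:
    """Mutate one slot of the parent architecture to get nearby candidates."""
    variants = []
    for _ in range(n):
        child = dict(parent)
        slot = random.choice(list(DESIGN_SPACE))
        child[slot] = random.choice(DESIGN_SPACE[slot])
        variants.append(child)
    return variants


def run_agent(arch: Architecture, tasks: List[str]) -> float:
    """Placeholder: build the memory system described by `arch`, let the agent
    solve `tasks` while reading and writing memory (the agent-level, inner-loop
    evolution), and return a task success rate."""
    return random.random()  # stand-in for a real benchmark evaluation


def meta_evolve(tasks: List[str], generations: int = 10) -> Tuple[Architecture, float]:
    """Outer loop: search the design space for the best-performing architecture."""
    best = {slot: options[0] for slot, options in DESIGN_SPACE.items()}
    best_score = run_agent(best, tasks)
    for _ in range(generations):
        for candidate in propose_variants(best):
            score = run_agent(candidate, tasks)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score


if __name__ == "__main__":
    arch, score = meta_evolve(tasks=["benchmark_task_1", "benchmark_task_2"])
    print(arch, score)
```

In the paper's setting, the inner loop is real environment interaction with memory writes and reads, and the outer loop is the meta-evolution that refines the memory architecture; the random scores above only stand in for those benchmark evaluations.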