General Agentic Memory Via Deep Research
B. Y. Yan, Chaofan Li, Hongjin Qian, Shuqi Lu, Zheng Liu
2025-11-25
Summary
This paper introduces General Agentic Memory (GAM), a new way for AI agents to use memory that aims to improve how they remember past experiences and draw on them to make better decisions.
What's the problem?
Current AI systems often rely on memories that are built in advance, but it's impossible to predict everything an AI might need to remember, so important information ends up lost or unavailable at the moment it's needed. In short, static memory isn't flexible enough for complex tasks.
What's the solution?
GAM takes a 'just-in-time' approach to memory, similar to how a computer compiles code only when it's needed. It has two main parts: a 'Memorizer' that distills past events into brief highlights while keeping the complete record in a page-store, and a 'Researcher' that digs into that complete record at request time to find the details relevant to the current situation (see the sketch below). This lets the AI focus on building the best context for a task *while* it's happening, rather than relying on a fixed, pre-defined memory.
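To make the two roles concrete, here is a minimal Python sketch of how the Memorizer/Researcher split could be organized. The class names, the truncation-based highlights, and the naive keyword search are illustrative assumptions, not the paper's actual implementation, which relies on LLMs for summarization and research.

```python
# Illustrative sketch only: names and logic are assumptions, not GAM's real API.
from dataclasses import dataclass, field


@dataclass
class PageStore:
    """Keeps the complete, unabridged history of past events ("pages")."""
    pages: list[str] = field(default_factory=list)

    def append(self, page: str) -> None:
        self.pages.append(page)

    def search(self, query: str, top_k: int = 3) -> list[str]:
        # Stand-in retrieval: rank pages by naive keyword overlap with the query.
        # A real system would use an embedding- or LLM-driven retriever.
        words = query.lower().split()
        ranked = sorted(self.pages, key=lambda p: -sum(w in p.lower() for w in words))
        return ranked[:top_k]


class Memorizer:
    """Offline stage: distill each event into a short highlight (lightweight
    memory) and archive the full event in the page-store."""

    def __init__(self) -> None:
        self.highlights: list[str] = []   # lightweight memory
        self.page_store = PageStore()     # complete history

    def memorize(self, event: str) -> None:
        self.page_store.append(event)
        self.highlights.append(event[:80])  # stand-in for an LLM-written summary


class Researcher:
    """Online stage: build a task-specific context at request time, using the
    lightweight memory as a guide into the page-store."""

    def __init__(self, memorizer: Memorizer) -> None:
        self.memorizer = memorizer

    def build_context(self, request: str) -> str:
        words = request.lower().split()
        hints = [h for h in self.memorizer.highlights
                 if any(w in h.lower() for w in words)]
        pages = self.memorizer.page_store.search(request)
        return "\n".join(["HIGHLIGHTS:"] + hints + ["DETAILS:"] + pages)


# Example: memories are written offline, but the context is assembled per request.
mem = Memorizer()
mem.memorize("2025-11-20: the agent booked a flight to Tokyo for the user.")
mem.memorize("2025-11-21: the user mentioned a peanut allergy when ordering dinner.")
print(Researcher(mem).build_context("Does the user have any food restrictions?"))
```

The point of the split is that the offline summarization never has to be lossless: anything the highlights miss can still be recovered from the page-store when the Researcher assembles the context at request time.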
Why it matters?
This research matters because it makes AI agents more adaptable and improves their performance on tasks that depend on remembering and reusing past experiences. By leveraging powerful language models and refining the memory process with reinforcement learning, GAM can significantly improve an AI's ability to complete complex tasks and scale to handle more information.
Abstract
Memory is critical for AI agents, yet the widely adopted static memory, aiming to create readily available memory in advance, is inevitably subject to severe information loss. To address this limitation, we propose a novel framework called general agentic memory (GAM). GAM follows the principle of "just-in-time (JIT) compilation": it focuses on creating optimized contexts for its client at runtime while keeping only simple but useful memory during the offline stage. To this end, GAM employs a duo-design with the following components. 1) Memorizer, which highlights key historical information using a lightweight memory while maintaining complete historical information within a universal page-store. 2) Researcher, which retrieves and integrates useful information from the page-store for each online request, guided by the pre-constructed memory. This design allows GAM to effectively leverage the agentic capabilities and test-time scalability of frontier large language models (LLMs), while also facilitating end-to-end performance optimization through reinforcement learning. In our experimental study, we demonstrate that GAM achieves substantial improvements over existing memory systems across various memory-grounded task-completion scenarios.
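As a rough illustration of the runtime side and the test-time scalability mentioned above, the following sketch shows how a researcher-style loop might iterate: retrieve pages, let a model judge whether the assembled context answers the request, and issue follow-up queries until it does. The `llm` callable, the prompt, and the stopping rule are assumptions made for illustration; in the paper this behavior comes from the agentic LLM and is optimized end-to-end with reinforcement learning rather than hand-coded.

```python
# Illustrative sketch of an iterative research loop; not the paper's actual procedure.
from typing import Callable


def research_loop(
    request: str,
    memory_hints: list[str],
    search_pages: Callable[[str], list[str]],  # e.g. PageStore.search from the sketch above
    llm: Callable[[str], str],                 # any text-in/text-out model call
    max_rounds: int = 3,
) -> str:
    """Iteratively retrieve pages and grow the working context until the model
    judges it sufficient or the round budget is exhausted."""
    context: list[str] = list(memory_hints)
    query = request
    for _ in range(max_rounds):
        context.extend(search_pages(query))
        verdict = llm(
            "Request: " + request
            + "\nContext so far:\n" + "\n".join(context)
            + "\nReply DONE if the context is sufficient; otherwise propose a follow-up query."
        )
        if verdict.strip().upper().startswith("DONE"):
            break
        query = verdict  # use the model's follow-up query in the next round
    return "\n".join(context)
```

Spending more rounds simply buys a richer context, which is one way the design can trade extra test-time compute for better task performance.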