MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents
Dongming Jiang, Yi Li, Guanpeng Li, Bingzhe Li
2026-01-08
Summary
This paper introduces a new way to give large language models access to large amounts of external memory, allowing them to reason through complex problems that require remembering details over long periods of time.
What's the problem?
Current methods for giving language models extra memory treat all the information as one big collection, finding relevant pieces based on how similar they are to the question being asked. This mixes up different kinds of information – like when things happened, why they happened, and who was involved – making it hard to understand *why* the model made a certain decision and limiting its accuracy when reasoning through long, complicated scenarios.
What's the solution?
The researchers created a system called MAGMA that organizes memory into separate 'graphs' focusing on meaning, time, cause-and-effect, and the entities involved. Instead of relying on similarity search alone, MAGMA uses a retrieval 'policy' to navigate these different graphs, picking and choosing information based on what the question specifically needs. This makes the reasoning process much clearer and gives finer-grained control over what information is used.
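To make the idea concrete, here is a minimal toy sketch of the mechanism described above: each memory item is indexed in four separate relational views, and a simple policy decides which views to traverse based on the query. All class and function names here (`MultiGraphMemory`, `choose_view_order`, etc.) are hypothetical illustrations, not the authors' implementation; a real system would use learned or LLM-driven policies rather than keyword cues.

```python
from collections import defaultdict

class MultiGraphMemory:
    """Toy multi-graph memory: one adjacency map per relational view
    (hypothetical structure inspired by the paper, not the authors' code)."""
    VIEWS = ("semantic", "temporal", "causal", "entity")

    def __init__(self):
        # each view maps a memory id to the set of memory ids it links to
        self.graphs = {v: defaultdict(set) for v in self.VIEWS}
        self.items = {}  # memory id -> stored text

    def add(self, mem_id, text, links):
        """links: dict mapping a view name to related memory ids."""
        self.items[mem_id] = text
        for view, neighbors in links.items():
            for n in neighbors:
                self.graphs[view][mem_id].add(n)
                self.graphs[view][n].add(mem_id)

    def traverse(self, seeds, view_order, hops=1):
        """Policy-guided traversal: expand the seed ids through the views
        the query intent asks for, collecting a structured context."""
        frontier, collected = set(seeds), list(seeds)
        for view in view_order:
            nxt = set()
            for _ in range(hops):
                for node in frontier:
                    nxt |= self.graphs[view][node] - set(collected)
            collected.extend(sorted(nxt))
            frontier = nxt or frontier
        return [(m, self.items[m]) for m in collected if m in self.items]

def choose_view_order(query):
    """Stand-in for the retrieval policy: pick graph views from surface
    cues in the query text (purely illustrative heuristics)."""
    q = query.lower()
    order = ["semantic"]
    if any(w in q for w in ("when", "before", "after")):
        order.append("temporal")
    if any(w in q for w in ("why", "because", "cause")):
        order.append("causal")
    if "who" in q:
        order.append("entity")
    return order
```

The point of the decoupling is visible even in this sketch: the memory representation (`graphs`) never changes, while the retrieval behavior is controlled entirely by the `view_order` the policy emits, so the traversal path itself doubles as a transparent record of why each item was retrieved.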
Why does it matter?
This work is important because it improves the ability of language models to handle tasks that require long-term reasoning and understanding. By making the reasoning process more transparent and accurate, it helps build more reliable and trustworthy AI systems, especially for situations where understanding *how* a decision was made is crucial.
Abstract
Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches largely rely on semantic similarity over monolithic memory stores, entangling temporal, causal, and entity information. This design limits interpretability and alignment between query intent and retrieved evidence, leading to suboptimal reasoning accuracy. In this paper, we propose MAGMA, a multi-graph agentic memory architecture that represents each memory item across orthogonal semantic, temporal, causal, and entity graphs. MAGMA formulates retrieval as policy-guided traversal over these relational views, enabling query-adaptive selection and structured context construction. By decoupling memory representation from retrieval logic, MAGMA provides transparent reasoning paths and fine-grained control over retrieval. Experiments on LoCoMo and LongMemEval demonstrate that MAGMA consistently outperforms state-of-the-art agentic memory systems in long-horizon reasoning tasks.