AI Meets Brain: Memory Systems from Cognitive Neuroscience to Autonomous Agents

Jiafeng Liang, Hao Li, Chang Li, Jiaqi Zhou, Shixin Jiang, Zekun Wang, Changkai Ji, Zhihao Zhu, Runxuan Liu, Tao Ren, Jinlan Fu, See-Kiong Ng, Xia Liang, Ming Liu, Bing Qin

2026-01-01

Summary

This paper explores how we can make the 'memory' of artificial intelligence (AI) systems, specifically those powered by large language models (LLMs), work more like human memory. It aims to improve how AI agents learn and use past experiences to solve problems.

What's the problem?

Current AI agents struggle to use memory effectively because their memory systems are not designed the way human memory works. Researchers in AI and in neuroscience both study memory, but they rarely connect their findings. This disconnect limits the potential for creating truly intelligent AI that can learn and adapt the way people do. Existing AI memory systems often lack the richness and efficiency of the human brain.

What's the solution?

The researchers did a deep dive into both cognitive neuroscience (the study of how the brain works) and the way memory is used in LLM-based AI agents. They compared and contrasted the types of memory, how information is stored, and how memory is managed across its full lifecycle in both humans and AI. They also reviewed how AI memory is currently evaluated and examined the security risks that come with AI remembering things. Finally, they suggest future research directions, such as building AI systems whose memory can handle multiple types of information (like images and text) and that can acquire new skills more effectively.
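To make the idea of a memory "management lifecycle" in an LLM-based agent concrete, here is a minimal, hypothetical Python sketch: observations enter a short-term buffer, get consolidated into long-term storage, and are later retrieved or forgotten. The class and method names are illustrative assumptions, not the paper's actual design, and real systems typically use embedding-based retrieval rather than the simple keyword overlap used here.

```python
# Illustrative sketch only: a toy store / consolidate / retrieve / forget
# lifecycle for an LLM-based agent's memory. Names are hypothetical.
from dataclasses import dataclass, field
import time


@dataclass
class MemoryItem:
    content: str
    timestamp: float = field(default_factory=time.time)
    uses: int = 0  # how often this memory has been retrieved


class AgentMemory:
    def __init__(self, short_term_limit: int = 5):
        self.short_term: list[MemoryItem] = []  # recent context (working memory)
        self.long_term: list[MemoryItem] = []   # persistent experience store
        self.short_term_limit = short_term_limit

    def store(self, content: str) -> None:
        """Write a new observation into short-term memory."""
        self.short_term.append(MemoryItem(content))
        if len(self.short_term) > self.short_term_limit:
            self.consolidate()

    def consolidate(self) -> None:
        """Move the oldest short-term items into long-term storage."""
        while len(self.short_term) > self.short_term_limit:
            self.long_term.append(self.short_term.pop(0))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Return the k long-term memories sharing the most words with the query."""
        q = set(query.lower().split())
        scored = sorted(
            self.long_term,
            key=lambda m: len(q & set(m.content.lower().split())),
            reverse=True,
        )
        for m in scored[:k]:
            m.uses += 1
        return [m.content for m in scored[:k]]

    def forget(self, min_uses: int = 1) -> None:
        """Drop long-term memories that were never retrieved."""
        self.long_term = [m for m in self.long_term if m.uses >= min_uses]
```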

Why it matters?

This work is important because better AI memory means AI agents can become much more capable. If AI can remember and learn from experiences more like humans, it can handle complex tasks, make better decisions, and be more reliable. This has implications for many fields, including robotics, virtual assistants, and any application where AI needs to interact with the real world and learn over time.

Abstract

Memory serves as the pivotal nexus bridging past and future, providing both humans and AI systems with invaluable concepts and experience to navigate complex tasks. Recent research on autonomous agents has increasingly focused on designing efficient memory workflows by drawing on cognitive neuroscience. However, constrained by interdisciplinary barriers, existing works struggle to assimilate the essence of human memory mechanisms. To bridge this gap, we systematically synthesize interdisciplinary knowledge of memory, connecting insights from cognitive neuroscience with LLM-driven agents. Specifically, we first elucidate the definition and function of memory along a progressive trajectory from cognitive neuroscience through LLMs to agents. We then provide a comparative analysis of memory taxonomy, storage mechanisms, and the complete management lifecycle from both biological and artificial perspectives. Subsequently, we review the mainstream benchmarks for evaluating agent memory. Additionally, we explore memory security from dual perspectives of attack and defense. Finally, we envision future research directions, with a focus on multimodal memory systems and skill acquisition.