LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory
Di Wu, Hongwei Wang, Wenhao Yu, Yuwei Zhang, Kai-Wei Chang, Dong Yu
2024-10-15

Summary
This paper introduces LongMemEval, a new benchmark designed to test how well chat assistants can remember and use information over long conversations with users.
What's the problem?
While chat assistants have started using memory components to track past interactions, their ability to recall details across long-term conversations is still not well understood. Existing benchmarks do not effectively evaluate how these systems manage information over time, leaving gaps in our understanding of where and why they fail.
What's the solution?
LongMemEval provides a structured way to assess five core memory abilities of chat assistants: extracting information, reasoning across multiple sessions, reasoning about the timing of events, updating knowledge, and abstaining when the answer is not in memory. It includes 500 carefully curated questions embedded in scalable user-assistant chat histories; these questions challenge existing systems and expose how much information they lose during extended interactions. The paper also proposes a unified framework for improving memory design by optimizing how chat assistants index, retrieve, and read stored information, as sketched below.
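To make the store-and-retrieve setup concrete, here is a minimal Python sketch of the kind of memory pipeline the paper evaluates. Everything in it (the MemoryStore class, the keyword-overlap scorer, the toy data) is an illustrative assumption, not the paper's code; real systems use dense embedding retrieval and an LLM reader rather than word overlap and a print statement.

import re
from dataclasses import dataclass, field

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

@dataclass
class MemoryStore:
    sessions: list[str] = field(default_factory=list)

    def index(self, session: str) -> None:
        # Indexing stage: store each past session as a retrievable value.
        self.sessions.append(session)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Retrieval stage: rank stored sessions by token overlap with the
        # query (a stand-in for embedding similarity).
        q = tokens(query)
        ranked = sorted(self.sessions, key=lambda s: len(q & tokens(s)), reverse=True)
        return ranked[:k]

store = MemoryStore()
store.index("user: My dog is named Milo, he's a beagle.")
store.index("user: My favorite editor is Vim.")

# Reading stage: the retrieved sessions become the LLM's context for
# answering; here we just print what would be passed in.
print(store.retrieve("What is the name of the user's dog?"))
# -> ["user: My dog is named Milo, he's a beagle."]

LongMemEval's questions probe where each stage of such a pipeline breaks down, e.g. when the relevant fact is buried in one of hundreds of stored sessions or was later updated.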
Why it matters?
This research matters because real-world applications expect chat assistants to give personalized, context-aware responses, and that requires reliable long-term memory. By measuring these capabilities directly, LongMemEval gives developers a concrete target for building more effective and reliable conversational AI systems.
Abstract
Recent large language model (LLM)-driven chat assistant systems have integrated memory components to track user-assistant chat histories, enabling more accurate and personalized responses. However, their long-term memory capabilities in sustained interactions remain underexplored. This paper introduces LongMemEval, a comprehensive benchmark designed to evaluate five core long-term memory abilities of chat assistants: information extraction, multi-session reasoning, temporal reasoning, knowledge updates, and abstention. With 500 meticulously curated questions embedded within freely scalable user-assistant chat histories, LongMemEval presents a significant challenge to existing long-term memory systems, with commercial chat assistants and long-context LLMs showing a 30% accuracy drop in memorizing information across sustained interactions. We then present a unified framework that breaks down the long-term memory design into four design choices across the indexing, retrieval, and reading stages. Built upon key experimental insights, we propose several memory designs including session decomposition for optimizing value granularity, fact-augmented key expansion for enhancing the index structure, and time-aware query expansion for refining the search scope. Experimental results show that these optimizations greatly improve both memory recall and downstream question answering on LongMemEval. Overall, our study provides valuable resources and guidance for advancing the long-term memory capabilities of LLM-based chat assistants, paving the way toward more personalized and reliable conversational AI.
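The three optimizations named in the abstract can be illustrated with a short, hedged Python sketch. The helper names, the regex fact extractor, and the data shapes below are illustrative assumptions; the paper's actual implementations (for example, LLM-based fact extraction) are more sophisticated.

import re
from datetime import date

# 1) Session decomposition: split a long session into one value per
#    user-assistant round, so a retrieval hit returns a focused snippet
#    instead of an entire session (finer value granularity).
def decompose_session(turns: list[tuple[str, str]]) -> list[str]:
    rounds, current = [], []
    for speaker, text in turns:
        current.append(f"{speaker}: {text}")
        if speaker == "assistant":      # close the round at each reply
            rounds.append(" ".join(current))
            current = []
    return rounds + ([" ".join(current)] if current else [])

# 2) Fact-augmented key expansion: index each value under extracted user
#    facts in addition to its raw text, so a paraphrased question can
#    still hit the right memory.
def expand_keys(value: str) -> list[str]:
    facts = re.findall(r"my (\w+) is (\w+)", value.lower())
    return [value] + [f"user's {attr} is {val}" for attr, val in facts]

# 3) Time-aware query expansion: when the question names a time range,
#    narrow the candidate set to sessions dated inside it before ranking.
def within_year(sessions: list[tuple[date, str]], year: int) -> list[str]:
    return [text for d, text in sessions if d.year == year]

print(decompose_session([("user", "Any hiking tips?"), ("assistant", "Start early.")]))
print(expand_keys("user: my dog is Milo"))     # adds key "user's dog is milo"
print(within_year([(date(2023, 5, 1), "Kyoto trip"),
                   (date(2024, 2, 9), "Lisbon trip")], 2024))

Each sketch targets a different stage of the framework: decomposition shapes the values built at indexing time, key expansion enriches the index structure, and the time filter refines the search scope at retrieval time.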