Memory Retrieval and Consolidation in Large Language Models through Function Tokens

Shaohua Zhang, Yuan Lin, Hang Li

2025-10-10

Summary

This paper investigates how large language models, like the ones powering chatbots, actually *remember* and *use* information. These models are great at things like answering questions and following instructions because they store a lot of knowledge, but it's not clear how they access and learn that knowledge in the first place.

What's the problem?

Currently, we don't fully understand how large language models retrieve information they've learned (like recalling facts) or how they solidify that information during training. It's a bit of a black box: we know these models work well, but not exactly *why* or *how*. The core issue is understanding the memory mechanisms inside these complex AI systems.

What's the solution?

The researchers propose the 'function token hypothesis'. They suggest that certain tokens, such as punctuation marks and common words like 'the' and 'and' (which they call 'function tokens'), are key to how the model works. During inference, these function tokens activate the most predictive features from the context and steer the prediction of the next token (memory retrieval). During pre-training, most of the learning signal comes from predicting the content tokens that come *after* these function tokens, which pushes the model to organize and store information effectively (memory consolidation). They support this hypothesis with detailed analyses of the model's inner workings and training dynamics.
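To make the function/content distinction concrete, here is a minimal Python sketch of how one might split tokens into the two groups. The word lists are illustrative simplifications of our own, not the paper's exact inventory, and real LLM tokenizers operate on subwords rather than whole words.

```python
import string

# Rough, hypothetical inventory of function words; the paper's actual
# function-token set is defined over the model's tokenizer, not this list.
FUNCTION_WORDS = {
    "the", "a", "an",                # articles
    "of", "in", "on", "to", "at",    # prepositions
    "and", "or", "but",              # conjunctions
}

def is_function_token(token: str) -> bool:
    """Heuristically decide whether a token is a function token."""
    t = token.strip().lower()
    if t and all(ch in string.punctuation for ch in t):
        return True                  # punctuation marks count as function tokens
    return t in FUNCTION_WORDS

tokens = ["The", "cat", "sat", "on", "the", "mat", "."]
print([(t, "function" if is_function_token(t) else "content") for t in tokens])
# [('The', 'function'), ('cat', 'content'), ('sat', 'content'),
#  ('on', 'function'), ('the', 'function'), ('mat', 'content'), ('.', 'function')]
```

Under this split, the hypothesis says the model does its heaviest memory work at the positions right after 'The', 'on', 'the', and '.', where a content token must be predicted.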

Why it matters?

Understanding how these models store and retrieve information is crucial for improving them. If we can pinpoint the role of function tokens, we can potentially build more efficient, reliable, and knowledgeable AI systems. This could lead to better chatbots, more accurate translation tools, and advancements in many other areas that rely on language processing.

Abstract

The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction-following and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain the workings of LLMs: During inference, function tokens activate the most predictive features from context and govern next token prediction (memory retrieval). During pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of learned features of LLMs and updates the model parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from context to direct next token prediction. We also find that during pre-training, the training loss is dominated by predicting the next content tokens following function tokens, which forces the function tokens to select the most predictive features from context.
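As a rough illustration of the bipartite graph analysis mentioned in the abstract, the sketch below builds a toy token-to-feature graph and checks how much of the feature set the highest-degree tokens cover. The activation data is invented for the example; in the paper, features come from the model's internals (obtaining them, e.g., via interpretability tooling such as sparse autoencoders, is our assumption and outside this sketch).

```python
# Toy bipartite graph: each token maps to the set of feature ids it
# activates somewhere in a corpus. These numbers are invented; the paper
# derives its features from the model itself.
activations = {
    ".":   {0, 1, 2, 3, 4, 5},
    "the": {2, 3, 6, 7, 8},
    "of":  {1, 4, 9, 10},
    "cat": {11},
    "sat": {12},
    "mat": {13},
}

# Rank tokens by degree: the number of distinct features each one activates.
by_degree = sorted(activations, key=lambda t: len(activations[t]), reverse=True)

all_features = set().union(*activations.values())
covered: set[int] = set()
for rank, token in enumerate(by_degree, start=1):
    covered |= activations[token]
    share = len(covered) / len(all_features)
    print(f"top {rank} tokens ({token!r} added): {share:.0%} of features covered")
```

In this toy data, the three function tokens alone cover most of the feature set, mirroring (in miniature) the abstract's claim that a small number of function tokens activate the majority of features.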