Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction

Muzhao Tian, Zisu Huang, Xiaohua Wang, Jingwen Xu, Zhengkang Guo, Qi Qian, Yuanzhe Shen, Kaitao Song, Jiakang Yuan, Changze Lv, Xiaoqing Zheng

2026-01-13

Summary

This paper explores how to make AI agents that can have long conversations with people while remembering past interactions, but without getting stuck repeating themselves or forgetting important details.

What's the problem?

When AI agents try to remember everything from past conversations, they can get 'stuck' repeating what they have already said, a failure the paper calls 'Memory Anchoring'. On the other hand, if they remember nothing, they seem clueless and cannot build on previous discussions. The challenge is striking the right balance: remembering enough to stay consistent without being overly anchored to the past.

What's the solution?

The researchers created a system called SteeM (Steerable Memory Agent) that lets users control *how much* the agent relies on its memory. Think of it like a dial: turn it up and the agent closely follows the conversation history; turn it down and the agent is encouraged to produce more creative, novel responses. They also developed a behavioral metric that measures how much an agent's output depends on its memory, and showed that SteeM outperforms strategies that simply include all or none of the past conversation.
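To make the "dial" idea concrete, here is a minimal sketch of a steerable prompt builder. The function name, prompt layout, and the rule for scaling history are all illustrative assumptions, not the paper's actual SteeM implementation:

```python
def build_prompt(history, query, reliance):
    """Assemble a prompt whose use of past turns scales with `reliance` in [0, 1].

    reliance = 0.0 -> fresh-start mode: no history included.
    reliance = 1.0 -> high-fidelity mode: the full history is included.
    """
    if not 0.0 <= reliance <= 1.0:
        raise ValueError("reliance must be in [0, 1]")
    # Include a proportional slice of the most recent turns.
    n_turns = round(reliance * len(history))
    included = history[len(history) - n_turns:] if n_turns else []
    memory_block = "\n".join(f"- {turn}" for turn in included)
    # Steer the model's attitude toward the history as well as its content.
    instruction = (
        "Closely follow the interaction history."
        if reliance > 0.5
        else "Feel free to propose fresh ideas beyond the history."
    )
    parts = []
    if memory_block:
        parts.append(f"Relevant history:\n{memory_block}")
    parts.append(instruction)
    parts.append(f"User: {query}")
    return "\n\n".join(parts)
```

For example, `build_prompt(turns, "Suggest a logo", 0.0)` yields a prompt with no history at all, while a reliance of 1.0 includes every past turn plus a follow-the-history instruction.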

Why it matters?

This research is important because it makes AI agents more flexible and useful for long-term interactions. By giving users control over the agent's memory, we can create AI companions that are both consistent and adaptable, leading to more natural and effective conversations and personalized experiences.

Abstract

As LLM-based agents are increasingly used in long-term interactions, cumulative memory is critical for enabling personalization and maintaining stylistic consistency. However, most existing systems adopt an "all-or-nothing" approach to memory usage: incorporating all relevant past information can lead to Memory Anchoring, where the agent is trapped by past interactions, while excluding memory entirely results in under-utilization and the loss of important interaction history. We show that an agent's reliance on memory can be modeled as an explicit and user-controllable dimension. We first introduce a behavioral metric of memory dependence to quantify the influence of past interactions on current outputs. We then propose Steerable Memory Agent, SteeM, a framework that allows users to dynamically regulate memory reliance, ranging from a fresh-start mode that promotes innovation to a high-fidelity mode that closely follows interaction history. Experiments across different scenarios demonstrate that our approach consistently outperforms conventional prompting and rigid memory masking strategies, yielding a more nuanced and effective control for personalized human-agent collaboration.
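The abstract's "behavioral metric of memory dependence" quantifies how strongly past interactions shape the current output. As a rough stand-in (not the metric defined in the paper), one could measure the fraction of a response's word bigrams that also appear in the stored memory:

```python
def bigrams(text):
    """Lowercased word-bigram set of a string."""
    words = text.lower().split()
    return {(a, b) for a, b in zip(words, words[1:])}

def memory_dependence(response, memory):
    """Share of the response's bigrams that also occur in memory, in [0, 1].

    A copy of the memory scores 1.0; a response with no overlapping
    word pairs scores 0.0. This is only an illustrative surface-level
    proxy; the paper's metric is behavioral, not lexical.
    """
    resp = bigrams(response)
    if not resp:
        return 0.0
    return len(resp & bigrams(memory)) / len(resp)
```

Under this proxy, an agent in high-fidelity mode should score near 1.0 and a fresh-start agent near 0.0, giving a simple dial-position readout to compare against the user's requested reliance.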