VisMem: Latent Vision Memory Unlocks Potential of Vision-Language Models
Xinlei Yu, Chengming Xu, Guibin Zhang, Zhangquan Chen, Yudong Zhang, Yongbo He, Peng-Tao Jiang, Jiangning Zhang, Xiaobin Hu, Shuicheng Yan
2025-11-24
Summary
This paper introduces VisMem, a new way to improve how well Vision-Language Models, which are AI systems that understand both images and text, perform on complex visual tasks.
What's the problem?
Current Vision-Language Models often struggle with tasks that require them to 'remember' details from an image over long stretches of generated text, especially when they're producing descriptions or answering questions. They tend to lose track of the visual information and make mistakes because they don't maintain a strong connection to what they're 'seeing'. It's like having a short attention span when looking at an image.
What's the solution?
The researchers created a system called VisMem, inspired by how human memory works. VisMem gives the AI two types of 'memory': a short-term memory for remembering fine details of the image, and a long-term memory for understanding the overall meaning. The AI can then use both memories while it's working on a task, allowing it to stay focused on the visual details and maintain a consistent understanding of the image. It's like giving the AI both a notepad for quick notes and a textbook for broader concepts.
Why it matters?
VisMem significantly improves the performance of Vision-Language Models, boosting results by an average of 11.8% over the unmodified base models while also outperforming competing methods. This is a big step forward because it allows these AI systems to be more reliable and useful for tasks like image understanding, reasoning about visuals, and generating detailed descriptions. It sets a new standard for improving how these models use and remember visual information.
Abstract
Despite the remarkable success of Vision-Language Models (VLMs), their performance on a range of complex visual tasks is often hindered by a "visual processing bottleneck": a propensity to lose grounding in visual evidence and exhibit a deficit in contextualized visual experience during prolonged generation. Drawing inspiration from human cognitive memory theory, which distinguishes short-term visually-dominant memory and long-term semantically-dominant memory, we propose VisMem, a cognitively-aligned framework that equips VLMs with dynamic latent vision memories: a short-term module for fine-grained perceptual retention and a long-term module for abstract semantic consolidation. These memories are seamlessly invoked during inference, allowing VLMs to maintain both perceptual fidelity and semantic consistency across thinking and generation. Extensive experiments across diverse visual benchmarks for understanding, reasoning, and generation reveal that VisMem delivers a significant average performance boost of 11.8% relative to the vanilla model and outperforms all counterparts, establishing a new paradigm for latent-space memory enhancement. The code will be available at https://github.com/YU-deep/VisMem.git.
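To make the mechanism described in the abstract more concrete, here is a minimal, hypothetical sketch of what a dual latent vision memory could look like in PyTorch. It is not the authors' implementation (see the repository linked above for that): the class name DualLatentVisionMemory, the slot-based write/retrieve attention, and all dimensions are illustrative assumptions. The sketch only shows the general idea of writing visual features into a short-term and a long-term latent memory and letting the decoder retrieve from both during generation.

```python
# Hypothetical sketch of a dual latent vision memory (NOT the paper's code).
# Assumes patch features of shape (batch, num_patches, dim) from a VLM's
# vision encoder; all module names and sizes are illustrative.
import torch
import torch.nn as nn


class DualLatentVisionMemory(nn.Module):
    """Toy short-term / long-term latent vision memory."""

    def __init__(self, dim: int = 768, short_slots: int = 64,
                 long_slots: int = 16, num_heads: int = 8):
        super().__init__()
        # Short-term slots: retain fine-grained perceptual detail.
        self.short_slots = nn.Parameter(torch.randn(short_slots, dim) * 0.02)
        # Long-term slots: consolidate abstract, semantic-level information.
        self.long_slots = nn.Parameter(torch.randn(long_slots, dim) * 0.02)
        self.short_write = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.long_write = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.read_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def write(self, visual_feats: torch.Tensor):
        """Encode an image's patch features into the two latent memories."""
        b = visual_feats.size(0)
        short_q = self.short_slots.unsqueeze(0).expand(b, -1, -1)
        long_q = self.long_slots.unsqueeze(0).expand(b, -1, -1)
        # Short-term memory attends directly to fine-grained patch features.
        short_mem, _ = self.short_write(short_q, visual_feats, visual_feats)
        # Long-term memory consolidates the short-term content into fewer,
        # more abstract slots (a rough analogue of semantic consolidation).
        long_mem, _ = self.long_write(long_q, short_mem, short_mem)
        return short_mem, long_mem

    def retrieve(self, hidden_states: torch.Tensor, short_mem: torch.Tensor,
                 long_mem: torch.Tensor) -> torch.Tensor:
        """Let decoder hidden states query both memories during generation."""
        memory = torch.cat([short_mem, long_mem], dim=1)
        retrieved, _ = self.read_attn(hidden_states, memory, memory)
        # Residual injection keeps the backbone's representation intact.
        return hidden_states + retrieved


if __name__ == "__main__":
    mem = DualLatentVisionMemory(dim=768)
    patches = torch.randn(2, 196, 768)   # e.g. 14x14 ViT patch features
    hidden = torch.randn(2, 32, 768)     # decoder states for 32 generated tokens
    short_mem, long_mem = mem.write(patches)
    out = mem.retrieve(hidden, short_mem, long_mem)
    print(out.shape)  # torch.Size([2, 32, 768])
```

A real system would also have to decide when the memories are invoked during decoding and how the write and retrieve modules are trained alongside (or on top of) the frozen VLM, which this sketch deliberately leaves out.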