HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding
Haowei Zhang, Shudong Yang, Jinlan Fu, See-Kiong Ng, Xipeng Qiu
2026-01-23
Summary
This paper introduces HERMES, a new system designed to help computers understand videos as they are being recorded, not just after the whole video is finished.
What's the problem?
Current AI models that understand videos perform well when you give them the entire video at once, but they struggle with live video streams. This is because they need to be fast, keep up with the constant flow of new frames, and avoid using too much computer memory, all at the same time. Existing models can't do all three efficiently.
What's the solution?
The researchers came up with HERMES, which works by cleverly managing the computer's memory while watching the video. They realized the memory used to process the video, called the KV cache, can be organized in a way that remembers important parts of the video at different levels of detail. HERMES reuses this memory efficiently, allowing it to understand the video quickly and with limited resources, and it doesn't need to do extra calculations when you ask it a question about the video.
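To make the idea concrete, here is a toy sketch of a hierarchical memory built on a KV-cache-like buffer: recent frames are kept at fine granularity, and when the fine level overflows, groups of older entries are merged into a coarser summary level, bounding total memory. The class name, capacities, and the averaging merge rule are illustrative assumptions, not HERMES's actual mechanism.

```python
from collections import deque

class HierarchicalKVCache:
    """Toy hierarchical memory over per-frame KV entries.

    Recent frames live in a fine-grained level; when it overflows,
    a group of the oldest entries is compressed (here: element-wise
    averaged) into one coarse summary entry. Names and the merge
    rule are illustrative, not the paper's design.
    """

    def __init__(self, fine_capacity=8, merge_group=4):
        self.fine = deque()    # most recent per-frame KV entries
        self.coarse = []       # compressed summaries of older frames
        self.fine_capacity = fine_capacity
        self.merge_group = merge_group

    def add_frame(self, kv):
        self.fine.append(kv)
        if len(self.fine) > self.fine_capacity:
            group = [self.fine.popleft() for _ in range(self.merge_group)]
            # merge a group of fine entries into one coarse summary
            summary = [sum(vals) / len(vals) for vals in zip(*group)]
            self.coarse.append(summary)

    def context(self):
        # Reused directly at query time with no extra computation:
        # coarse long-term memory first, then fine recent memory.
        return self.coarse + list(self.fine)

cache = HierarchicalKVCache(fine_capacity=4, merge_group=2)
for t in range(10):
    cache.add_frame([float(t), float(t)])  # stand-in for a frame's KV vector
# 10 frames are held as only 7 entries: 3 coarse summaries + 4 fine frames
```

Because the cache is maintained incrementally as frames arrive, answering a query only requires reading `context()`, which mirrors how HERMES avoids auxiliary computation at query time.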
Why does it matter?
HERMES is important because it makes real-time video understanding much faster and more efficient. It responds to questions ten times faster than previous methods and can process videos using far fewer video tokens (up to 68% fewer) without losing accuracy. This could be useful for things like live sports analysis, security systems, or interactive video experiences where the AI needs to react instantly.
Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated significant improvements in offline video understanding. However, extending these capabilities to streaming video inputs remains challenging, as existing models struggle to simultaneously maintain stable understanding performance, real-time responses, and low GPU memory overhead. To address this challenge, we propose HERMES, a novel training-free architecture for real-time and accurate understanding of video streams. Based on a mechanistic attention investigation, we conceptualize the KV cache as a hierarchical memory framework that encapsulates video information across multiple granularities. During inference, HERMES reuses a compact KV cache, enabling efficient streaming understanding under resource constraints. Notably, HERMES requires no auxiliary computations upon the arrival of user queries, thereby guaranteeing real-time responses for continuous video stream interactions, achieving 10× faster TTFT compared to the prior SOTA. Even when reducing video tokens by up to 68% compared with uniform sampling, HERMES achieves superior or comparable accuracy across all benchmarks, with up to 11.4% gains on streaming datasets.