Video Streaming Thinking: VideoLLMs Can Watch and Think Simultaneously

Yiran Guan, Liang Yin, Dingkang Liang, Jianzhong Ju, Zhenbo Luo, Jian Luan, Yuliang Liu, Xiang Bai

2026-03-16

Summary

This paper introduces a new way for AI to understand videos in real time, letting it answer questions and interact with a video while it is playing, rather than only after watching the whole thing.

What's the problem?

Current AI systems that understand videos struggle to respond quickly during live viewing. They either focus on just *seeing* what's happening, or they pause to *think* about it, creating a noticeable delay before answering. Simply making the model faster isn't enough, because step-by-step reasoning about the video content still takes too long once a question arrives.

What's the solution?

The researchers developed a system called Video Streaming Thinking (VST). It lets the AI 'think while watching' by processing the video in smaller clips and reasoning about each one as it arrives, so most of the thinking is already done by the time a question needs answering. They also created a two-stage training process: a supervised fine-tuning stage (VST-SFT) that adapts the model to reasoning over video as a causal stream, and a reinforcement learning stage (VST-RL) that improves its multi-turn conversations about the video through self-exploration. Finally, they built an automated pipeline that uses video knowledge graphs to generate high-quality training questions and answers, ensuring the AI combines multiple pieces of evidence and keeps paying attention to the stream.
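The core idea, reasoning over each clip during playback so that latency is amortized rather than paid all at once, can be illustrated with a minimal sketch. This is an illustration of the general pattern, not the paper's actual implementation; all names here (`StreamingThinker`, `reason_over_clip`, the latency helpers) are hypothetical.

```python
# Sketch of "thinking while watching": instead of reasoning over the
# whole video only after a question arrives (offline), the model
# reasons over each incoming clip during playback, so only the final
# answer step remains at question time. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class StreamingThinker:
    question: str
    thoughts: list = field(default_factory=list)  # accumulated reasoning

    def reason_over_clip(self, clip_tokens):
        # Placeholder for one step of clip-level reasoning, run while
        # the video is still playing.
        self.thoughts.append(
            f"noted {len(clip_tokens)} tokens relevant to '{self.question}'"
        )

    def answer(self):
        # At question time the per-clip reasoning is already done.
        return f"answer based on {len(self.thoughts)} reasoning steps"


def offline_latency(clips, per_clip_cost):
    # Offline reasoning: all clips are reasoned over after the question.
    return len(clips) * per_clip_cost


def streaming_latency(clips, per_clip_cost):
    # Streaming reasoning: earlier clips were handled during playback,
    # so roughly one reasoning step remains at question time.
    return per_clip_cost


thinker = StreamingThinker(question="who scored?")
clips = [[0] * 16, [0] * 16, [0] * 16]  # three dummy clips
for clip in clips:
    thinker.reason_over_clip(clip)  # happens while the video plays
print(thinker.answer())
```

The design point is that total reasoning work is unchanged; only its placement shifts from after the question to during playback, which is why the paper reports large response-time speedups without losing accuracy.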

Why it matters?

This work matters because it makes real-time interaction with video-understanding AI practical. Imagine asking an AI questions about a live sports game or a movie as you watch and getting instant, accurate answers. VST is significantly faster than previous reasoning-based methods (15.7 times faster than Video-R1, per the paper) while also being more accurate, paving the way for more responsive video-based AI applications.

Abstract

Online Video Large Language Models (VideoLLMs) play a critical role in supporting responsive, real-time interaction. Existing methods focus on streaming perception, lacking a synchronized logical reasoning stream. However, directly applying test-time scaling methods incurs unacceptable response latency. To address this trade-off, we propose Video Streaming Thinking (VST), a novel paradigm for streaming video understanding. It supports a thinking while watching mechanism, which activates reasoning over incoming video clips during streaming. This design improves timely comprehension and coherent cognition while preserving real-time responsiveness by amortizing LLM reasoning latency over video playback. Furthermore, we introduce a comprehensive post-training pipeline that integrates VST-SFT, which structurally adapts the offline VideoLLM to causal streaming reasoning, and VST-RL, which provides end-to-end improvement through self-exploration in a multi-turn video interaction environment. Additionally, we devise an automated training-data synthesis pipeline that uses video knowledge graphs to generate high-quality streaming QA pairs, with an entity-relation grounded streaming Chain-of-Thought to enforce multi-evidence reasoning and sustained attention to the video stream. Extensive evaluations show that VST-7B performs strongly on online benchmarks, e.g. 79.5% on StreamingBench and 59.3% on OVO-Bench. Meanwhile, VST remains competitive on offline long-form or reasoning benchmarks. Compared with Video-R1, VST responds 15.7 times faster and achieves +5.4% improvement on VideoHolmes, demonstrating higher efficiency and strong generalization across diverse video understanding tasks. Code, data, and models will be released at https://github.com/1ranGuan/VST.