ViSpeak: Visual Instruction Feedback in Streaming Videos
Shenghao Fu, Qize Yang, Yuan-Ming Li, Yi-Xing Peng, Kun-Yu Lin, Xihan Wei, Jian-Fang Hu, Xiaohua Xie, Wei-Shi Zheng
2025-03-20
Summary
This paper is about making AI understand and respond to instructions given through video, like waving your hand to get its attention.
What's the problem?
AI is good at understanding videos that have already been recorded, but it struggles with live video, where it must react quickly to events as they happen and recognize instructions it sees.
What's the solution?
The researchers created a new AI model called ViSpeak that can understand what's happening in a live video stream and follow instructions given through gestures or other visual cues.
Why does it matter?
This work matters because it can make AI more interactive and helpful in real-time situations, such as controlling devices with gestures or interacting hands-free with a virtual assistant.
Abstract
Recent advances in Large Multi-modal Models (LMMs) are primarily focused on offline video understanding. In contrast, streaming video understanding poses great challenges to recent models due to its time-sensitive, omni-modal, and interactive characteristics. In this work, we aim to extend streaming video understanding from a new perspective and propose a novel task named Visual Instruction Feedback, in which models should be aware of visual content and learn to extract instructions from it. For example, when users wave their hands at agents, agents should recognize the gesture and start a conversation with welcome information. Thus, following instructions in the visual modality greatly enhances user-agent interactions. To facilitate research, we define seven key subtasks highly relevant to the visual modality and collect the ViSpeak-Instruct dataset for training and the ViSpeak-Bench benchmark for evaluation. Further, we propose the ViSpeak model, a SOTA streaming video understanding LMM with GPT-4o-level performance on various streaming video understanding benchmarks. After finetuning on our ViSpeak-Instruct dataset, ViSpeak is equipped with a basic visual instruction feedback ability, serving as a solid baseline for future research.