Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding
Ziyang Wang, Honglu Zhou, Shijie Wang, Junnan Li, Caiming Xiong, Silvio Savarese, Mohit Bansal, Michael S. Ryoo, Juan Carlos Niebles
2025-12-08
Summary
This paper introduces a new approach to understanding long videos, focusing on how to efficiently find the important parts needed to answer specific questions.
What's the problem?
Currently, systems trying to understand long videos often waste time processing irrelevant information. They look at the entire video, even though only small, scattered moments actually contain the clues needed to answer a question. Existing methods use a general 'video summarizer' first, which can miss important details or blur the timing of events, making it harder to get accurate answers.
What's the solution?
The researchers developed a system called Active Video Perception (AVP) that works more like a person actively watching a video with a purpose. Instead of passively watching everything, AVP uses an 'agent' that plans which parts of the video to look at, observes those specific moments, and then decides whether it has enough information to answer the question. This cycle of planning, observing, and reflecting repeats until the agent is confident it has the answer. It analyzes the video pixels directly instead of relying on pre-made summaries.
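The loop described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the `plan`, `observe`, and `reflect` functions below are hypothetical stand-ins for the MLLM planner, observer, and reflector agents, and the "video" is just a dictionary mapping timestamps to descriptions.

```python
# Minimal sketch of an iterative plan-observe-reflect loop in the spirit of AVP.
# All function names, signatures, and the toy "video" are illustrative
# assumptions, not the paper's actual API.

def plan(query, evidence):
    """Propose the next video interaction: here, pick an unseen timestamp."""
    seen = {e["t"] for e in evidence}
    for t in range(0, 60, 10):  # candidate timestamps (seconds)
        if t not in seen:
            return {"t": t}
    return None  # nothing left to inspect

def observe(video, action):
    """Execute the interaction: extract time-stamped evidence from the video."""
    t = action["t"]
    return {"t": t, "content": video.get(t, "nothing notable")}

def reflect(query, evidence):
    """Judge whether evidence suffices; return an answer or None to continue."""
    for e in evidence:
        if query.lower() in e["content"].lower():
            return f"At t={e['t']}s: {e['content']}"
    return None

def active_video_perception(video, query, max_rounds=6):
    """Repeat plan -> observe -> reflect until confident or out of budget."""
    evidence = []
    for _ in range(max_rounds):
        action = plan(query, evidence)
        if action is None:
            break
        evidence.append(observe(video, action))
        answer = reflect(query, evidence)
        if answer is not None:
            return answer, evidence
    return "insufficient evidence", evidence

# Toy "video": timestamp -> what happens at that moment.
video = {0: "title card", 20: "a red car passes", 40: "crowd cheering"}
answer, evidence = active_video_perception(video, "red car")
```

Note how the loop stops as soon as the reflector finds sufficient evidence (here, after inspecting only three timestamps), rather than processing the whole video; that early halting is the source of the efficiency gains the paper reports.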
Why it matters?
This research is important because it makes long video understanding much more efficient and accurate. AVP achieves better results than previous methods while using significantly less computing power and processing fewer video frames, meaning it can handle very long videos more effectively and potentially be used in real-world applications where time and resources are limited.
Abstract
Long video understanding (LVU) is challenging because answering real-world queries often depends on sparse, temporally dispersed cues buried in hours of mostly redundant and irrelevant content. While agentic pipelines improve video reasoning capabilities, prevailing frameworks rely on a query-agnostic captioner to perceive video information, which wastes computation on irrelevant content and blurs fine-grained temporal and spatial information. Motivated by active perception theory, we argue that LVU agents should actively decide what, when, and where to observe, and continuously assess whether the current observation is sufficient to answer the query. We present Active Video Perception (AVP), an evidence-seeking framework that treats the video as an interactive environment and acquires compact, query-relevant evidence directly from pixels. Concretely, AVP runs an iterative plan-observe-reflect process with MLLM agents. In each round, a planner proposes targeted video interactions, an observer executes them to extract time-stamped evidence, and a reflector evaluates the sufficiency of the evidence for the query, either halting with an answer or triggering further observation. Across five LVU benchmarks, AVP achieves the highest performance with significant improvements. Notably, AVP outperforms the best agentic method by 5.7% in average accuracy while requiring only 18.4% of the inference time and 12.4% of the input tokens.