REVISOR: Beyond Textual Reflection, Towards Multimodal Introspective Reasoning in Long-Form Video Understanding
Jiaze Li, Hao Yin, Wenhui Tan, Jingyang Chen, Boshen Xu, Yuxun Qu, Yijing Chen, Jianzhong Ju, Zhenbo Luo, Jian Luan
2025-11-19
Summary
This paper introduces a new method, REVISOR, to help AI models understand long videos better by allowing them to 'think' about both the words and the visuals they're seeing.
What's the problem?
Current AI systems that 'think' to answer questions about videos mostly focus on the text associated with the video, like captions. This works okay for short clips, but when dealing with long videos, just rethinking the text isn't enough because important information is often in the visuals themselves. Also, these systems struggle to connect what they 'read' with what they 'see' to form a complete understanding.
What's the solution?
The researchers created REVISOR, a system that lets the AI model consider both the text and specific parts of the video while it's trying to figure out an answer. It's like the AI can rewind and re-watch important scenes while it's thinking. They also developed a special reward system, called DADR (Dual Attribution Decoupled Reward), to make sure the AI focuses on the *right* parts of the video that actually help it answer the question, and that its reasoning is clearly linked to the visual evidence it uses.
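A minimal sketch may help make the "rewind and re-watch" loop concrete. Everything here is an illustrative assumption rather than the paper's actual interface: the `<review>start,end</review>` tag format and the `mllm.generate`, `video.global_frames`, and `video.frames_between` methods are hypothetical stand-ins.

```python
# Hypothetical sketch of a tool-augmented visual reflection loop.
# Tag format and model/video interfaces are assumptions, not the paper's API.
import re

REVIEW_TAG = re.compile(r"<review>\s*([\d.]+)\s*,\s*([\d.]+)\s*</review>")

def reflective_answer(mllm, video, question, max_reviews=3):
    """Let the model re-inspect video segments while it reasons."""
    context = [video.global_frames(), question]  # coarse pass over the whole video
    for _ in range(max_reviews):
        reply = mllm.generate(context)           # text that may request a review
        match = REVIEW_TAG.search(reply)
        if match is None:
            return reply                         # no further visual reflection needed
        start, end = map(float, match.groups())
        # "Rewind and re-watch": densely sample the requested segment and
        # append those frames so the next reasoning step can actually see them.
        context += [reply, video.frames_between(start, end)]
    return mllm.generate(context)                # final answer after review budget
```

The key design point is that the re-watched frames are appended to the context, so each further reasoning step is genuinely cross-modal rather than a text-only rethink.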
Why it matters?
This is important because it allows AI to understand complex videos, like lectures or movies, much more effectively without needing a ton of extra training data or relying on other complicated AI tools. It improves the AI's ability to reason about what's happening in the video and provide more accurate answers.
Abstract
Self-reflection mechanisms that rely on purely text-based rethinking processes perform well in most multimodal tasks. However, when directly applied to long-form video understanding scenarios, they exhibit clear limitations. The fundamental reasons lie in two points: (1) long-form video understanding involves richer and more dynamic visual input, so rethinking only the textual information is insufficient; a further rethinking process specifically targeting visual information is needed; (2) purely text-based reflection mechanisms lack cross-modal interaction capabilities, preventing them from fully integrating visual information during reflection. Motivated by these insights, we propose REVISOR (REflective VIsual Segment Oriented Reasoning), a novel framework for tool-augmented multimodal reflection. REVISOR enables MLLMs to collaboratively construct introspective reflection processes across textual and visual modalities, significantly enhancing their reasoning capability for long-form video understanding. To ensure that REVISOR learns to accurately review video segments highly relevant to the question during reinforcement learning, we design the Dual Attribution Decoupled Reward (DADR) mechanism. Integrated into the GRPO training strategy, this mechanism enforces causal alignment between the model's reasoning and the selected video evidence. Notably, the REVISOR framework significantly enhances the long-form video understanding capability of MLLMs without requiring supplementary supervised fine-tuning or external models, achieving impressive results on four benchmarks: VideoMME, LongVideoBench, MLVU, and LVBench.
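To make the DADR idea more tangible, here is a hedged sketch of one way a decoupled, causally gated reward could be wired into GRPO-style training. The counterfactual probe (re-answering with the reviewed segment masked out), the IoU-based attribution term, and the 0.5/0.5 weighting are all my assumptions for illustration; the paper's concrete reward terms may differ.

```python
# A speculative sketch in the spirit of DADR; not the paper's actual reward.
import statistics

def dadr_reward(answer_ok: bool, answer_ok_without_segment: bool,
                segment_iou: float) -> float:
    """Decoupled reward: correctness plus an attribution term that only
    counts when the answer causally depends on the reviewed segment.

    answer_ok:                 answer is correct given the reviewed segment
    answer_ok_without_segment: answer from a counterfactual rollout with the
                               reviewed segment masked out (assumed probe)
    segment_iou:               overlap of the reviewed segment with the
                               question-relevant span, in [0, 1]
    """
    correctness = 1.0 if answer_ok else 0.0
    # Causal gate: if the answer survives removal of the evidence, the
    # reasoning was not grounded in it, so the attribution term is zero.
    attribution = 0.0 if answer_ok_without_segment else segment_iou
    return 0.5 * correctness + 0.5 * correctness * attribution

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Standard GRPO step: normalize rewards within a group of rollouts
    sampled for the same question to obtain relative advantages."""
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) or 1.0   # guard against zero variance
    return [(r - mu) / sd for r in rewards]
```

Under a gate like this, a rollout earns full reward only when it both answers correctly and reviews a segment the answer actually depends on, which is one plausible reading of "causal alignment between the model's reasoning and the selected video evidence."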