Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding

Yuan Xie, Tianshui Chen, Zheng Ge, Lionel Ni

2025-09-05

Summary

This paper introduces a new approach, called Video-MTR, for helping computers understand what's happening in long videos and answer questions about them.

What's the problem?

Understanding long videos is hard for computers because events unfold over time and depend on one another, and current methods either fail to capture these connections or rely on complicated pipelines whose parts aren't trained together. Existing systems often make a single guess about the video's content, or they bolt on separate visual-language tools that aren't well integrated, leading to less accurate results and slower processing.

What's the solution?

Video-MTR works by repeatedly watching parts of the video and refining its understanding. Rather than looking once and guessing, it goes through multiple 'turns' of reasoning, each time selecting the most relevant video segments based on what it has learned so far and the question being asked. To guide this process, the system uses a special reward scheme that encourages it to pick useful video clips and understand the question well, without relying on separate external tools. This allows the entire system to be trained end-to-end for better performance.
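The turn-by-turn loop described above can be sketched in a few lines. This is an illustrative sketch only: the function names, the dictionary-based context, and the toy selection policy are all assumptions for clarity, not the paper's actual implementation.

```python
# Hypothetical sketch of a multi-turn reasoning loop in the spirit of
# Video-MTR. All names here are illustrative, not from the paper's code.

def multi_turn_reasoning(segments, question, select_segment, answer, max_turns=4):
    """Iteratively pick relevant segments, refining the context each turn."""
    context = {"question": question, "seen": []}
    for _ in range(max_turns):
        seg = select_segment(segments, context)  # policy's next segment choice
        if seg is None:                          # policy signals "enough evidence"
            break
        context["seen"].append(seg)              # evolving understanding
    return answer(context)                       # final prediction from context

# Toy stand-ins: pick the first unseen segment, answer with what was seen.
def toy_select(segments, ctx):
    unseen = [s for s in segments if s not in ctx["seen"]]
    return unseen[0] if unseen else None

def toy_answer(ctx):
    return " ".join(ctx["seen"])

result = multi_turn_reasoning(["clip-a", "clip-b", "clip-c"], "what happens?",
                              toy_select, toy_answer, max_turns=2)
# result == "clip-a clip-b"
```

In the real system the selection policy and the answerer would be a single learned model sharing the evolving context, which is what makes end-to-end training possible.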

Why it matters?

This research is important because it improves the accuracy and speed of video understanding, pushing the boundaries of what computers can achieve in this area. By creating a system that can reason through videos step-by-step and learn everything at once, it opens the door to more sophisticated video analysis and applications.

Abstract

Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipelines, which generate predictions in a single turn, Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To guide the intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness and turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and allowing end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long video understanding.
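The gated bi-level reward combines a trajectory-level term (answer correctness) with turn-level terms (frame-query relevance). The abstract does not specify the exact gating rule or weights, so the sketch below is a plausible reading under stated assumptions: the turn-level rewards are only credited when the final answer is correct, with a hypothetical weight `turn_weight`.

```python
# Hedged sketch of a gated bi-level reward. The gating rule (credit turn-level
# relevance only on a correct answer) and the weight are illustrative
# assumptions, not the paper's exact formulation.

def bi_level_reward(answer_correct, turn_relevances, turn_weight=0.5):
    """Combine a trajectory-level correctness reward with gated turn rewards."""
    trajectory_reward = 1.0 if answer_correct else 0.0
    # Gate: without it, the policy could farm relevance rewards on
    # intermediate turns while still answering the question wrong.
    if answer_correct and turn_relevances:
        turn_reward = turn_weight * sum(turn_relevances) / len(turn_relevances)
    else:
        turn_reward = 0.0
    return trajectory_reward + turn_reward

# Correct answer with relevant turns earns both components ...
print(bi_level_reward(True, [1.0, 0.5]))   # 1.375
# ... while a wrong answer earns nothing, regardless of turn relevance.
print(bi_level_reward(False, [1.0, 1.0]))  # 0.0
```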