MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding
Xinyu Fang, Kangrui Mao, Haodong Duan, Xiangyu Zhao, Yining Li, Dahua Lin, Kai Chen
2024-06-21

Summary
This paper introduces MMBench-Video, a new benchmark designed to evaluate how well large vision-language models (LVLMs) understand long videos and answer questions about them.
What's the problem?
Existing benchmarks for video understanding mainly focus on short clips and often fail to assess how well models comprehend the full content of longer videos. This matters because real-world videos are usually longer and more complex, requiring models to reason about events that unfold over time.
What's the solution?
The researchers created MMBench-Video, which includes lengthy YouTube videos and uses free-form questions that reflect real-life scenarios. The benchmark is carefully designed to probe the models' ability to reason about events that unfold over time, and every question is human-annotated against a carefully constructed ability taxonomy to ensure it is relevant and challenging. The evaluation process uses GPT-4 to automatically score the models' free-form responses, which proves more accurate and robust than earlier LLM-based evaluation methods.
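To make the grading recipe concrete, here is a minimal Python sketch of LLM-assisted scoring for free-form answers, assuming the standard OpenAI chat-completions client. The prompt wording, the 0-3 score scale, and the grade_answer helper are illustrative assumptions rather than the authors' exact implementation, which is released as part of VLMEvalKit.

```python
# Minimal sketch of GPT-assisted grading for free-form VideoQA answers.
# The prompt text, the 0-3 scale, and the helper name are illustrative
# assumptions; the benchmark's actual grader ships with VLMEvalKit.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRADING_PROMPT = """You are grading a video question-answering response.
Question: {question}
Ground-truth answer: {reference}
Model answer: {prediction}
Rate how well the model answer matches the ground truth on a 0-3 scale
(0 = wrong, 3 = fully correct). Reply with the integer score only."""


def grade_answer(question: str, reference: str, prediction: str) -> int:
    """Ask a GPT-4-class model to score one free-form answer against the reference."""
    response = client.chat.completions.create(
        model="gpt-4-0125-preview",  # any GPT-4-class model; exact version is an assumption
        messages=[{
            "role": "user",
            "content": GRADING_PROMPT.format(
                question=question, reference=reference, prediction=prediction
            ),
        }],
        temperature=0.0,  # deterministic grading
    )
    return int(response.choices[0].message.content.strip())


# Example usage (hypothetical question/answer pair):
# score = grade_answer("What does the chef add after the onions?",
#                      "Minced garlic", "He adds garlic.")
```

This sketch only illustrates the general pattern of asking GPT-4 to compare a model's prediction against the human-annotated answer; the released grader in VLMEvalKit also handles prompt construction, retries, and score parsing for the full benchmark.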
Why it matters?
This work is important because it provides a better way to evaluate how well AI models understand complex video content. By focusing on long videos and temporal reasoning, MMBench-Video can help drive the development of more capable models for real-world video tasks, which is crucial for applications like video analysis, content creation, and interactive media.
Abstract
The advent of large vision-language models (LVLMs) has spurred research into their applications in multi-modal contexts, particularly in video understanding. Traditional VideoQA benchmarks, despite providing quantitative metrics, often fail to encompass the full spectrum of video content and inadequately assess models' temporal comprehension. To address these limitations, we introduce MMBench-Video, a quantitative benchmark designed to rigorously evaluate LVLMs' proficiency in video understanding. MMBench-Video incorporates lengthy videos from YouTube and employs free-form questions, mirroring practical use cases. The benchmark is meticulously crafted to probe the models' temporal reasoning skills, with all questions human-annotated according to a carefully constructed ability taxonomy. We employ GPT-4 for automated assessment, demonstrating superior accuracy and robustness over earlier LLM-based evaluations. Utilizing MMBench-Video, we have conducted comprehensive evaluations that include both proprietary and open-source LVLMs for images and videos. MMBench-Video stands as a valuable resource for the research community, facilitating improved evaluation of LVLMs and catalyzing progress in the field of video understanding. The evaluation code of MMBench-Video will be integrated into VLMEvalKit: https://github.com/open-compass/VLMEvalKit.