MVU-Eval: Towards Multi-Video Understanding Evaluation for Multimodal LLMs
Tianhao Peng, Haochen Wang, Yuanxing Zhang, Zekun Wang, Zili Wang, Ge Zhang, Jian Yang, Shihao Li, Yanghai Wang, Xintao Wang, Houyi Li, Wei Ji, Pengfei Wan, Wenhao Huang, Zhaoxiang Zhang, Jiaheng Liu
2025-11-11
Summary
This paper introduces a new benchmark for testing how well AI models that can 'see' and 'understand' images and videos process information from *multiple* videos at once.
What's the problem?
Current tests for these AI models, called Multimodal Large Language Models (MLLMs), only evaluate a single video at a time. But real-world understanding often requires combining information from different viewpoints or over time: think of analyzing a sports game from multiple camera angles, or a self-driving car using data from several sensors. Existing benchmarks don't measure this crucial ability to reason across multiple videos.
What's the solution?
The researchers created a new benchmark called MVU-Eval. It pairs 4,959 videos from diverse domains with 1,824 carefully curated question-answer pairs that test eight distinct skills in multi-video understanding, ranging from basic visual perception to more complex reasoning. They then evaluated several existing AI models on this benchmark.
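To make the benchmark's shape concrete, here is a minimal sketch of what a multi-video question-answer item and a simple accuracy scorer might look like. This is an illustrative assumption, not the paper's released data format or evaluation code: the `MultiVideoQA` fields and the `predict` callable are hypothetical names.

```python
from dataclasses import dataclass

# Hypothetical schema for one benchmark item; the field names are
# illustrative, not taken from the MVU-Eval release.
@dataclass
class MultiVideoQA:
    video_paths: list[str]  # two or more videos per question
    question: str           # e.g., "Which clip shows the goal first?"
    options: list[str]      # multiple-choice candidates
    answer: str             # ground-truth option letter, e.g., "B"
    skill: str              # one of the eight assessed competencies

def accuracy(items: list[MultiVideoQA], predict) -> float:
    """Score a model via a caller-supplied `predict` callable that maps
    (video_paths, question, options) to a predicted option letter."""
    correct = sum(
        predict(it.video_paths, it.question, it.options) == it.answer
        for it in items
    )
    return correct / len(items)

# Usage: plug in any MLLM wrapper that accepts several videos at once,
# e.g., score = accuracy(dataset, my_model.predict)
```

The key point the sketch captures is that each question is bound to a *set* of videos, so a model must ingest and relate all of them before answering, rather than processing one clip in isolation.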
Why it matters?
This work is important because it highlights that current AI models struggle with understanding information spread across multiple videos. By providing a public benchmark, the researchers hope to encourage the development of better AI systems that can handle the complexities of real-world scenarios where combining information from various visual sources is essential.
Abstract
The advent of Multimodal Large Language Models (MLLMs) has expanded AI capabilities to visual modalities, yet existing evaluation benchmarks remain limited to single-video understanding, overlooking the critical need for multi-video understanding in real-world scenarios (e.g., sports analytics and autonomous driving). To address this significant gap, we introduce MVU-Eval, the first comprehensive benchmark for evaluating Multi-Video Understanding for MLLMs. Specifically, MVU-Eval assesses eight core competencies through 1,824 meticulously curated question-answer pairs spanning 4,959 videos from diverse domains, addressing both fundamental perception tasks and high-order reasoning tasks. These capabilities are rigorously aligned with real-world applications such as multi-sensor synthesis in autonomous systems and cross-angle sports analytics. Through extensive evaluation of state-of-the-art open-source and closed-source models, we reveal significant performance discrepancies and limitations in current MLLMs' ability to understand content across multiple videos. The benchmark will be made publicly available to foster future research.