VideoAutoArena: An Automated Arena for Evaluating Large Multimodal Models in Video Analysis through User Simulation
Ziyang Luo, Haoning Wu, Dongxu Li, Jing Ma, Mohan Kankanhalli, Junnan Li
2024-11-21

Summary
This paper presents VideoAutoArena, a new automated system designed to evaluate large multimodal models (LMMs) that analyze videos, using user simulation to generate adaptive, open-ended questions for more realistic assessment.
What's the problem?
Evaluating how well LMMs understand and analyze videos has been challenging because traditional methods, like multiple-choice questions, often fail to capture the complex needs of real users. Additionally, getting humans to annotate videos for evaluation is slow and expensive.
What's the solution?
To tackle these issues, VideoAutoArena uses a user simulation approach to automatically generate open-ended questions that test the models' understanding of videos in a more realistic way. It features an automated evaluation framework that continuously compares different models using a modified ELO rating system. The system also includes a fault-driven strategy that progressively increases question difficulty based on where models fail, ensuring rigorous testing. Furthermore, its automated assessments align closely with human judgment, as validated against a curated set of human annotations.
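The paper does not spell out its modified rating update here, but a minimal sketch of how an Elo-style rating can be maintained over pairwise battle outcomes helps illustrate the arena mechanism. The K-factor, the 1000-point starting rating, and the model names below are illustrative assumptions, not the paper's actual parameters.

```python
from collections import defaultdict

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_ratings(ratings, model_a, model_b, outcome, k=32):
    """Update two models' ratings after one battle.

    outcome: 1.0 if model_a wins, 0.0 if model_b wins, 0.5 for a tie.
    The K-factor and starting rating are assumptions for illustration;
    VideoAutoArena uses its own modified ELO scheme.
    """
    ra, rb = ratings[model_a], ratings[model_b]
    ea = expected_score(ra, rb)
    ratings[model_a] = ra + k * (outcome - ea)
    ratings[model_b] = rb + k * ((1.0 - outcome) - (1.0 - ea))

# Example: three simulated battles between two hypothetical LMMs.
ratings = defaultdict(lambda: 1000.0)
battles = [("model_x", "model_y"), ("model_x", "model_y"), ("model_y", "model_x")]
for winner, loser in battles:
    update_ratings(ratings, winner, loser, outcome=1.0)
print(dict(ratings))
```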
Why it matters?
This research is important because it provides a more effective and scalable way to evaluate video analysis models, which can lead to better performance in real-world applications. By using user simulations and adaptive questioning, VideoAutoArena helps bridge the gap between technical evaluations and practical user needs, ultimately improving how AI systems understand video content.
Abstract
Large multimodal models (LMMs) with advanced video analysis capabilities have recently garnered significant attention. However, most evaluations rely on traditional methods like multiple-choice questions in benchmarks such as VideoMME and LongVideoBench, which often lack the depth needed to capture the complex demands of real-world users. To address this limitation, and given the prohibitive cost and slow pace of human annotation for video tasks, we introduce VideoAutoArena, an arena-style benchmark inspired by LMSYS Chatbot Arena's framework, designed to automatically assess LMMs' video analysis abilities. VideoAutoArena utilizes user simulation to generate open-ended, adaptive questions that rigorously assess model performance in video understanding. The benchmark features an automated, scalable evaluation framework, incorporating a modified ELO Rating System for fair and continuous comparisons across multiple LMMs. To validate our automated judging system, we construct a 'gold standard' using a carefully curated subset of human annotations, demonstrating that our arena strongly aligns with human judgment while maintaining scalability. Additionally, we introduce a fault-driven evolution strategy, progressively increasing question complexity to push models toward handling more challenging video analysis scenarios. Experimental results demonstrate that VideoAutoArena effectively differentiates among state-of-the-art LMMs, providing insights into model strengths and areas for improvement. To further streamline our evaluation, we introduce VideoAutoBench as an auxiliary benchmark, where human annotators label winners in a subset of VideoAutoArena battles. We use GPT-4o as a judge to compare responses against these human-validated answers. Together, VideoAutoArena and VideoAutoBench offer a cost-effective and scalable framework for evaluating LMMs in user-centric video analysis.
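For the VideoAutoBench-style judging step, a minimal sketch of comparing a candidate response against a human-validated answer with GPT-4o is shown below. It uses OpenAI's standard chat completions API; the prompt wording, the WIN/LOSE protocol, and the function name are assumptions for illustration, not the paper's released judging template.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are judging answers to a question about a video.
Question: {question}
Human-validated reference answer: {reference}
Candidate answer: {candidate}
Reply with exactly one word: WIN if the candidate matches or exceeds the
reference in correctness and helpfulness, otherwise LOSE."""

def judge_against_reference(question: str, reference: str, candidate: str) -> str:
    """Ask GPT-4o whether a candidate response beats the human-validated answer.

    Prompt and output format are illustrative assumptions; only the idea of
    using GPT-4o as the judge comes from the paper.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question, reference=reference, candidate=candidate),
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```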