LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding
Haoning Wu, Dongxu Li, Bei Chen, Junnan Li
2024-07-23

Summary
This paper introduces LongVideoBench, a new benchmark designed to test how well AI models understand and answer questions about long videos that include both visuals and subtitles. It aims to improve the evaluation of AI's ability to process complex video content.
What's the problem?
As AI models become more advanced, they need to handle longer and more detailed inputs, especially in video formats. However, there are very few public benchmarks available that can effectively measure how well these models understand long videos. This lack of evaluation tools makes it hard to compare different models and track their progress in video understanding.
What's the solution?
To address this issue, the authors created LongVideoBench, which includes 3,763 web-collected videos of varying lengths (up to an hour long), along with their subtitles. They developed a new task called 'referring reasoning,' in which each question contains a referring query that points back to a specific part of the video, and the model must reason over details from that referred context. The benchmark comprises 6,678 human-annotated multiple-choice questions across 17 fine-grained categories, allowing for a comprehensive evaluation of how well models can retrieve and reason about information from long videos.
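To make the referring-reasoning format more concrete, below is a minimal sketch of how a single benchmark item could be represented as a data record. The field names (video_id, referring_query, candidates, and so on) and the example content are illustrative assumptions, not the paper's actual schema or data.

```python
from dataclasses import dataclass

# Hypothetical record layout for one referring-reasoning item.
# Field names are illustrative assumptions; the released benchmark
# may use a different schema.
@dataclass
class ReferringReasoningItem:
    video_id: str           # ID of a web-collected video (up to ~1 hour long)
    subtitles: list[str]    # subtitle lines interleaved with the video frames
    referring_query: str    # references the relevant part of the video (the referred context)
    question: str           # the question to answer about that referred context
    candidates: list[str]   # multiple-choice options
    correct_choice: int     # index of the correct option
    category: str           # one of the 17 fine-grained question categories

# Invented example, for illustration only:
item = ReferringReasoningItem(
    video_id="web_video_0001",
    subtitles=["[00:12:03] The host introduces the second guest.", "..."],
    referring_query="When the host introduces the second guest,",
    question="what object is on the table between them?",
    candidates=["A vase of flowers", "A laptop", "A coffee mug", "A microphone"],
    correct_choice=3,
    category="object recognition",  # hypothetical category name
)
```

The key idea this sketch captures is that the question alone is not answerable; the model must first locate the referred context inside a long, interleaved video-subtitle input and then reason over its details.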
Why it matters?
This research is significant because it provides a structured way to evaluate and improve AI models' abilities in understanding long-form video content. By establishing a benchmark like LongVideoBench, researchers can better assess the performance of different AI systems, leading to advancements in applications such as video summarization, interactive entertainment, and educational tools.
Abstract
Large multimodal models (LMMs) are processing increasingly long and rich inputs. Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. Our benchmark includes 3,763 varying-length web-collected videos with their subtitles across diverse themes, designed to comprehensively evaluate LMMs on long-term multimodal understanding. To achieve this, we interpret the primary challenge as accurately retrieving and reasoning over detailed multimodal information from long inputs. Accordingly, we formulate a novel video question-answering task termed referring reasoning. Specifically, the question contains a referring query that references related video contexts, called the referred context. The model is then required to reason over relevant video details from the referred context. Following the paradigm of referring reasoning, we curate 6,678 human-annotated multiple-choice questions in 17 fine-grained categories, establishing one of the most comprehensive benchmarks for long-form video understanding. Evaluations suggest that LongVideoBench presents significant challenges even for the most advanced proprietary models (e.g., GPT-4o, Gemini-1.5-Pro, GPT-4-Turbo), while their open-source counterparts show an even larger performance gap. In addition, our results indicate that model performance on the benchmark improves only when models are capable of processing more frames, positioning LongVideoBench as a valuable benchmark for evaluating future-generation long-context LMMs.