TVBench: Redesigning Video-Language Evaluation

Daniel Cores, Michael Dorkenwald, Manuel Mucientes, Cees G. M. Snoek, Yuki M. Asano

2024-10-15

Summary

This paper introduces TVBench, a new benchmark designed to evaluate how well video-language models understand the timing and sequence of events in videos.

What's the problem?

Current benchmarks for evaluating video-language models often let models answer questions without actually understanding the video. Many questions can be solved from a single static frame, from the wording of the question and candidate answers, or from world knowledge alone, so these benchmarks fail to measure how well models reason about the temporal structure of videos.

What's the solution?

TVBench addresses these issues with a multiple-choice question-answering benchmark whose questions can only be answered by understanding the timing and order of events in a video. The authors identified three main flaws in existing benchmarks (single-frame shortcuts, overly informative question and answer text, and reliance on world knowledge) and designed TVBench so that answering requires genuine temporal visual reasoning rather than static information or prior knowledge, forcing models to demonstrate that they can actually process temporal information.
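
To make this concrete, here is a minimal sketch (not from the paper's released code) of the kind of shortcut probes that expose these flaws: evaluate the same multiple-choice questions with a text-only baseline, a single-frame baseline, and random guessing. If such baselines score well above chance, the benchmark does not actually require temporal reasoning. In this sketch, llm_pick and image_qa_pick are hypothetical stand-ins for a text-only language model and an image question-answering model, and a video is assumed to be a list of frames.

```python
import random

def evaluate(answer_fn, items):
    """Accuracy of a multiple-choice QA method over a list of items.

    Each item is assumed to be a dict with keys 'video', 'question',
    'candidates', and 'answer_idx'. `answer_fn(video, question, candidates)`
    returns the index of the chosen candidate answer.
    """
    correct = sum(
        int(answer_fn(it["video"], it["question"], it["candidates"]) == it["answer_idx"])
        for it in items
    )
    return correct / len(items)

# Shortcut probes: strong scores here indicate the benchmark can be
# solved without temporal video understanding.

def text_only_baseline(video, question, candidates):
    # Ignore the video entirely and answer from the text alone
    # (llm_pick is a hypothetical text-only LLM helper).
    return llm_pick(question, candidates)

def single_frame_baseline(video, question, candidates):
    # Keep only the middle frame, discarding all temporal information
    # (image_qa_pick is a hypothetical image QA helper; `video` is a frame list).
    frame = video[len(video) // 2]
    return image_qa_pick(frame, question, candidates)

def random_baseline(video, question, candidates):
    # Uniform random guessing sets the chance-level reference.
    return random.randrange(len(candidates))
```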

Why it matters?

This research is important because it improves the way we evaluate AI systems that work with videos. By focusing on temporal understanding, TVBench helps ensure that future models can better analyze and interpret video content, making them more useful for applications like video analysis, content creation, and interactive media.

Abstract

Large language models have demonstrated impressive performance when integrated with vision models, even enabling video understanding. However, evaluating these video models presents its own unique challenges, for which several benchmarks have been proposed. In this paper, we show that the currently most used video-language benchmarks can be solved without requiring much temporal reasoning. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks, (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input, and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative. As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. Surprisingly, we find that most recent state-of-the-art video-language models perform similarly to random performance on TVBench, with only Gemini-Pro and Tarsier clearly surpassing this baseline.
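
As a rough illustration of the "similar to random performance" comparison in the abstract, the sketch below (an assumed data layout, not the paper's evaluation code) computes the chance-level accuracy a model must clearly exceed to demonstrate temporal understanding; each item is assumed to store its candidate answers in a `candidates` list.

```python
def chance_accuracy(items):
    """Expected accuracy of uniform random guessing on multiple-choice items,
    where each item may have a different number of candidate answers."""
    return sum(1.0 / len(it["candidates"]) for it in items) / len(items)

def gap_over_chance(model_accuracy, items):
    """Percentage points by which a model's accuracy exceeds random guessing.

    A model that genuinely uses temporal information should show a clear
    positive gap; the abstract reports that most current models do not
    on TVBench, with only Gemini-Pro and Tarsier clearly above chance.
    """
    return 100.0 * (model_accuracy - chance_accuracy(items))
```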