VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning

Yukun Qi, Yiming Zhao, Yu Zeng, Xikun Bao, Wenxuan Huang, Lin Chen, Zehui Chen, Jie Zhao, Zhongang Qi, Feng Zhao

2025-04-11

Summary

This paper introduces VCR-Bench, a new benchmark designed to thoroughly test how well advanced AI models can reason through videos step by step, much like a student showing their work in math class. VCR-Bench provides a large collection of videos and questions, each with detailed explanations for every reasoning step, so researchers can see exactly where an AI model might struggle.

What's the problem?

The main issue is that while AI models have gotten better at explaining their thought process for text and images, there hasn't been a good way to measure how well they reason through videos. Existing tests don't show if a model's mistakes come from not understanding what it sees, or from not being able to think through the problem logically.

What's the solution?

To solve this, the authors created VCR-Bench, which includes 859 videos and 1,034 question-answer pairs spanning seven task dimensions. Each pair comes with a step-by-step explanation in which every step is labeled as either understanding the video (perception) or thinking through the problem (reasoning). They also propose a new scoring metric, called the CoT score, that measures how well a model follows the full reasoning process from start to finish; a rough sketch of how such a metric could work appears below.
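To make this concrete, here is a minimal Python sketch of how a step-tagged CoT metric of this kind could be computed. The step matcher, the tag names, and the precision/recall combination are all illustrative assumptions; the paper's actual CoT score and matching procedure may differ.

```python
# Minimal sketch of a CoT-score-style metric (illustrative, not the paper's
# exact procedure). Each annotated reference step carries a tag, "perception"
# or "reasoning"; `matches` decides whether a model step covers a reference
# step. VCR-Bench uses human-annotated rationales and a more careful matcher.

from dataclasses import dataclass

@dataclass
class Step:
    text: str
    tag: str  # "perception" or "reasoning"

def matches(model_step: str, ref_step: Step) -> bool:
    # Placeholder matcher, purely for illustration: the real benchmark
    # judges step equivalence far more carefully than a substring check.
    return ref_step.text.lower() in model_step.lower()

def cot_score(model_steps: list[str], ref_steps: list[Step]) -> dict:
    # Recall: fraction of annotated reference steps the model covered.
    covered = [any(matches(m, r) for m in model_steps) for r in ref_steps]
    recall = sum(covered) / len(ref_steps) if ref_steps else 0.0

    # Precision: fraction of model steps that correspond to a reference step.
    valid = [any(matches(m, r) for r in ref_steps) for m in model_steps]
    precision = sum(valid) / len(model_steps) if model_steps else 0.0

    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # Per-tag recall separates perception failures from reasoning failures,
    # which is exactly the diagnostic the benchmark is after.
    per_tag = {}
    for tag in ("perception", "reasoning"):
        hits = [c for c, r in zip(covered, ref_steps) if r.tag == tag]
        per_tag[tag] = sum(hits) / len(hits) if hits else None

    return {"precision": precision, "recall": recall,
            "f1": f1, "per_tag_recall": per_tag}
```

Called with a model's decomposed answer steps and the annotated rationale for one question, this returns an overall step-coverage score plus a perception/reasoning breakdown; averaging over all questions would give a benchmark-level score.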

Why it matters?

This work is important because it finally gives researchers a way to see exactly where AI models need to improve when it comes to understanding and reasoning about videos. By using VCR-Bench, developers can build smarter, more reliable AI that can explain its answers and handle more complex video-related tasks, which is useful for everything from education to entertainment.

Abstract

The advancement of Chain-of-Thought (CoT) reasoning has significantly enhanced the capabilities of large language models (LLMs) and large vision-language models (LVLMs). However, a rigorous evaluation framework for video CoT reasoning remains absent. Current video benchmarks fail to adequately assess the reasoning process and expose whether failures stem from deficiencies in perception or reasoning capabilities. Therefore, we introduce VCR-Bench, a novel benchmark designed to comprehensively evaluate LVLMs' Video Chain-of-Thought Reasoning capabilities. VCR-Bench comprises 859 videos spanning a variety of video content and durations, along with 1,034 high-quality question-answer pairs. Each pair is manually annotated with a stepwise CoT rationale, where every step is tagged to indicate its association with the perception or reasoning capabilities. Furthermore, we design seven distinct task dimensions and propose the CoT score to assess the entire CoT process based on the stepwise tagged CoT rationales. Extensive experiments on VCR-Bench highlight substantial limitations in current LVLMs. Even the top-performing model, o1, achieves only a 62.8% CoT score and a 56.7% accuracy, while most models score below 40%. Experiments show that most models score lower on perception than on reasoning steps, revealing LVLMs' key bottleneck in temporal-spatial information processing for complex video reasoning. A robust positive correlation between the CoT score and accuracy confirms the validity of our evaluation framework and underscores the critical role of CoT reasoning in solving complex video reasoning tasks. We hope VCR-Bench will serve as a standardized evaluation framework and expose the actual drawbacks in complex video reasoning tasks.