
VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models

Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chanpaisit, Chenyang Si, Yuming Jiang, Yaohui Wang, Xinyuan Chen, Ying-Cong Chen, Limin Wang, Dahua Lin, Yu Qiao, Ziwei Liu

2024-11-21


Summary

The VBench++ paper introduces a detailed system for evaluating video generation models, measuring how well these models create videos that people find realistic and engaging.

What's the problem?

Evaluating the quality of generated videos is difficult because current methods do not accurately reflect how humans perceive video quality. This makes it hard to understand which models perform well and which do not, hindering progress in improving video generation technology.

What's the solution?

VBench++ addresses this problem by breaking down video quality into 16 specific dimensions, such as how smooth the motion looks and whether the subjects in a video stay consistent from frame to frame. Each dimension comes with tailored prompts and evaluation methods, and a dataset of human preference annotations ensures that the automatic scores match human opinions. This comprehensive approach gives a clearer picture of each model's strengths and weaknesses.
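To make the idea of a per-dimension metric concrete, here is a minimal illustrative sketch of a temporal-flickering-style check: it scores a clip by the average pixel change between consecutive frames, so a steadier video scores lower. The function name and scoring are assumptions for illustration; VBench++'s actual metric is more sophisticated.

```python
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """Hypothetical stand-in for a temporal-flickering check.

    frames: array of shape (T, H, W, C) with values in [0, 1].
    Returns the mean absolute difference between consecutive
    frames; lower values mean a steadier, less flickery video.
    """
    if len(frames) < 2:
        return 0.0
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(diffs.mean())

# A static clip scores 0; alternating black/white frames score high.
static = np.zeros((8, 4, 4, 3))
flashing = np.stack([np.full((4, 4, 3), i % 2, dtype=float) for i in range(8)])
print(flicker_score(static))    # 0.0
print(flicker_score(flashing))  # 1.0
```

A real benchmark would combine many such dimension scores, each validated against human ratings, rather than relying on a single number.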

Why it matters?

This work is important because it provides a more accurate way to assess video generation models, leading to better and more realistic video content. By aligning evaluations with human perceptions, VBench++ helps developers improve their models and pushes the boundaries of what video generation technology can achieve.

Abstract

Video generation has witnessed significant advancements, yet evaluating these models remains a challenge. A comprehensive evaluation benchmark for video generation is indispensable for two reasons: 1) Existing metrics do not fully align with human perceptions; 2) An ideal evaluation system should provide insights to inform future developments of video generation. To this end, we present VBench, a comprehensive benchmark suite that dissects "video generation quality" into specific, hierarchical, and disentangled dimensions, each with tailored prompts and evaluation methods. VBench has several appealing properties: 1) Comprehensive Dimensions: VBench comprises 16 dimensions in video generation (e.g., subject identity inconsistency, motion smoothness, temporal flickering, and spatial relationship, etc.). The evaluation metrics with fine-grained levels reveal individual models' strengths and weaknesses. 2) Human Alignment: We also provide a dataset of human preference annotations to validate our benchmarks' alignment with human perception, for each evaluation dimension respectively. 3) Valuable Insights: We look into current models' ability across various evaluation dimensions, and various content types. We also investigate the gaps between video and image generation models. 4) Versatile Benchmarking: VBench++ supports evaluating text-to-video and image-to-video. We introduce a high-quality Image Suite with an adaptive aspect ratio to enable fair evaluations across different image-to-video generation settings. Beyond assessing technical quality, VBench++ evaluates the trustworthiness of video generative models, providing a more holistic view of model performance. 5) Full Open-Sourcing: We fully open-source VBench++ and continually add new video generation models to our leaderboard to drive forward the field of video generation.
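The "Human Alignment" property amounts to checking that a dimension's automatic scores rank videos the same way human annotators do. A common way to quantify that is Spearman rank correlation; the sketch below implements it from scratch, and all scores and names are illustrative, not VBench++'s actual pipeline or data.

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-video scores for one dimension vs. human ratings:
# identical ranking means perfect alignment (correlation 1.0).
metric_scores = [0.91, 0.34, 0.77, 0.52]
human_ratings = [4.5, 1.0, 4.0, 2.5]
print(round(spearman(metric_scores, human_ratings), 3))  # 1.0
```

A benchmark dimension whose scores correlate strongly with human annotations, as in this toy example, can be trusted as a proxy for human judgment on that axis.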