Benchmarking Scientific Understanding and Reasoning for Video Generation using VideoScience-Bench

Lanxiang Hu, Abhilash Shankarampeta, Yixin Huang, Zilin Dai, Haoyang Yu, Yujie Zhao, Haoqiang Kang, Daniel Zhao, Tajana Rosing, Hao Zhang

2025-12-03

Summary

This paper introduces a new way to test how well AI models understand and can *show* scientific concepts in videos, going beyond just creating realistic-looking motion.

What's the problem?

Current tests for video-generating AI focus on whether the videos look physically plausible – like a ball bouncing realistically. However, these tests don't really check whether the AI *understands* the underlying science, like gravity or chemical reactions. They don't test whether the AI can reason about *why* things happen, only that they *appear* to happen correctly.

What's the solution?

The researchers created a benchmark called VideoScience-Bench: 200 carefully curated prompts spanning 14 topics and 103 concepts in physics and chemistry at an introductory college level. Each prompt describes a composite scientific scenario, and the model must generate a video showing what would actually happen. The videos are then judged along five dimensions: whether they follow the prompt, show the correct phenomenon, exhibit the right dynamics, keep unchanging properties stable, and stay consistent over space and time. The researchers also used a vision-language model (VLM) as an automatic judge, and found its ratings agreed well with human expert judges.

Why it matters?

This work is important because it pushes AI video generation beyond just making things *look* real to making things *scientifically* accurate. It’s a step towards AI that can not only create videos, but also demonstrate a real understanding of the world around us, which could be useful for education, scientific visualization, and more.

Abstract

The next frontier for video generation lies in developing models capable of zero-shot reasoning, where understanding real-world scientific laws is crucial for accurate physical outcome modeling under diverse conditions. However, existing video benchmarks are physical commonsense-based, offering limited insight into video models' scientific reasoning capability. We introduce VideoScience-Bench, a benchmark designed to evaluate undergraduate-level scientific understanding in video models. Each prompt encodes a composite scientific scenario that requires understanding and reasoning across multiple scientific concepts to generate the correct phenomenon. The benchmark comprises 200 carefully curated prompts spanning 14 topics and 103 concepts in physics and chemistry. We conduct expert-annotated evaluations across seven state-of-the-art video models in T2V and I2V settings along five dimensions: Prompt Consistency, Phenomenon Congruency, Correct Dynamism, Immutability, and Spatio-Temporal Continuity. Using a VLM-as-a-Judge to assess video generations, we observe strong correlation with human assessments. To the best of our knowledge, VideoScience-Bench is the first benchmark to evaluate video models not only as generators but also as reasoners, requiring their generations to demonstrate scientific understanding consistent with expected physical and chemical phenomena. Our data and evaluation code are available at: https://github.com/hao-ai-lab/VideoScience