
Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks

Cheng Yang, Haiyuan Wan, Yiran Peng, Xin Cheng, Zhaoyang Yu, Jiayi Zhang, Junchi Yu, Xinlei Yu, Xiawu Zheng, Dongzhan Zhou, Chenglin Wu

2025-11-20


Summary

This paper investigates whether video generation models can actually *think* by reasoning through problems, much as large language models reason with text. It focuses on spatial reasoning, that is, working out solutions based on where things are, and argues that video is a natural medium for testing this because it inherently shows how things move and relate to each other in space and time.

What's the problem?

While video models are getting very good at creating realistic videos, it is unclear whether they are merely mimicking patterns or can actually understand and solve problems. Existing benchmarks do not really test a video model's ability to plan and reason about spatial relationships. The core question is whether a video model can do more than generate visually appealing content: can it demonstrate understanding?

What's the solution?

The researchers created a new benchmark called VR-Bench, which consists of 7,920 procedurally generated maze-solving videos spanning five maze types and diverse visual styles. These mazes are designed to require spatial planning and step-by-step thinking to solve. They then tested existing video models on these mazes and found that supervised fine-tuning (SFT) efficiently elicits the models' reasoning ability. They also discovered a test-time scaling effect: generating multiple candidate solutions during inference, instead of just one, improves reasoning reliability by 10-20%.
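To make the test-time scaling idea concrete, here is a minimal sketch of how a sampled maze solution could be checked, and how drawing several samples raises the chance that at least one is valid. The grid encoding, the `is_valid_path` check, and the `sample_path` sampler are illustrative assumptions, not the paper's actual evaluation code.

```python
from typing import Callable, List, Tuple

Maze = List[List[int]]          # 0 = open cell, 1 = wall
Path = List[Tuple[int, int]]    # sequence of (row, col) cells

def is_valid_path(maze: Maze, path: Path,
                  start: Tuple[int, int], goal: Tuple[int, int]) -> bool:
    """A path solves the maze if it starts at `start`, ends at `goal`,
    moves one cell at a time, and never enters a wall."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    if maze[start[0]][start[1]] == 1:
        return False
    for (r, c), (nr, nc) in zip(path, path[1:]):
        if abs(nr - r) + abs(nc - c) != 1:   # must be a single 4-neighbour step
            return False
        if maze[nr][nc] == 1:                # cannot step into a wall
            return False
    return True

def pass_at_k(maze: Maze, start: Tuple[int, int], goal: Tuple[int, int],
              sample_path: Callable[[], Path], k: int = 8) -> bool:
    """Test-time scaling: draw up to k diverse samples from the model
    and accept the maze as solved if any sampled path is valid."""
    return any(is_valid_path(maze, sample_path(), start, goal) for _ in range(k))
```

In this sketch, `sample_path` stands in for one stochastic rollout of the video model followed by extraction of the traced route; averaging `pass_at_k` over a set of mazes gives a solve rate that grows with `k`, which is the effect the paper describes.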

Why it matters?

This work shows that video models have a unique advantage over text-based models when it comes to spatial reasoning. Because video inherently contains information about space and time, these models can potentially become very good at solving problems that require understanding physical layouts and movement. This could be important for things like robotics, self-driving cars, and even helping computers understand the real world better.

Abstract

Video Models have achieved remarkable success in high-fidelity video generation with coherent motion dynamics. Analogous to the development from text generation to text-based reasoning in language modeling, the development of video models motivates us to ask: Can video models reason via video generation? Compared with the discrete text corpus, video grounds reasoning in explicit spatial layouts and temporal continuity, which serves as an ideal substrate for spatial reasoning. In this work, we explore the reasoning via video paradigm and introduce VR-Bench, a comprehensive benchmark designed to systematically evaluate video models' reasoning capabilities. Grounded in maze-solving tasks that inherently require spatial planning and multi-step reasoning, VR-Bench contains 7,920 procedurally generated videos across five maze types and diverse visual styles. Our empirical analysis demonstrates that SFT can efficiently elicit the reasoning ability of video models. Video models exhibit stronger spatial perception during reasoning, outperforming leading VLMs and generalizing well across diverse scenarios, tasks, and levels of complexity. We further discover a test-time scaling effect, where diverse sampling during inference improves reasoning reliability by 10-20%. These findings highlight the unique potential and scalability of reasoning via video for spatial reasoning tasks.
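As a rough illustration of what "procedurally generated" mazes can look like, the sketch below carves a grid maze with a randomized depth-first search (recursive backtracker). This is only an assumed example generator; how VR-Bench actually constructs its five maze types and visual styles is not described here.

```python
import random

def generate_maze(height: int, width: int, seed: int | None = None) -> list[list[int]]:
    """Carve a perfect maze with randomized depth-first search.
    Cells sit at odd grid coordinates; 1 = wall, 0 = open."""
    rng = random.Random(seed)
    grid = [[1] * (2 * width + 1) for _ in range(2 * height + 1)]
    stack = [(0, 0)]
    visited = {(0, 0)}
    grid[1][1] = 0                       # open the starting cell
    while stack:
        r, c = stack[-1]
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < height and 0 <= c + dc < width
                      and (r + dr, c + dc) not in visited]
        if not neighbours:
            stack.pop()                  # dead end: backtrack
            continue
        nr, nc = rng.choice(neighbours)
        grid[r + nr + 1][c + nc + 1] = 0  # open the wall between the two cells
        grid[2 * nr + 1][2 * nc + 1] = 0  # open the neighbouring cell
        visited.add((nr, nc))
        stack.append((nr, nc))
    return grid
```

Varying the random seed, maze size, and rendering style would yield the kind of diverse, automatically labeled maze instances that a benchmark like VR-Bench needs at scale.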