
WorldSimBench: Towards Video Generation Models as World Simulators

Yiran Qin, Zhelun Shi, Jiwen Yu, Xijun Wang, Enshen Zhou, Lijun Li, Zhenfei Yin, Xihui Liu, Lu Sheng, Jing Shao, Lei Bai, Wanli Ouyang, Ruimao Zhang

2024-10-24


Summary

This paper introduces WorldSimBench, a dual evaluation framework for treating video generation models as world simulators: it assesses both the visual quality of the videos a model generates and whether those videos can be translated into correct actions in embodied, real-world-like scenarios.

What's the problem?

Video generation models are increasingly able to predict the future state of objects and scenes, but there is no structured way to evaluate them as world simulators. Existing benchmarks do not measure how well these models understand and interact with the world, especially in dynamic, embodied settings where both visual fidelity and action-relevant detail are critical.

What's the solution?

The authors propose WorldSimBench, a dual evaluation framework with two complementary methods. Explicit Perceptual Evaluation scores the visual quality of generated videos using a Human Preference Evaluator trained on the new HF-Embodied Dataset, which contains fine-grained human feedback, so that scores align with human judgment. Implicit Manipulative Evaluation tests whether a model's generated videos can be translated into correct control signals in dynamic environments, across three embodied scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. A sketch of how this dual evaluation loop might fit together appears below.
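To make the two evaluations concrete, here is a minimal Python sketch of the dual evaluation loop, under stated assumptions: every name in it (generate_video, preference_score, video_to_actions, run_in_env) is a hypothetical placeholder standing in for the paper's components, not its actual API.

```python
# Hypothetical sketch of WorldSimBench's dual evaluation loop.
# All functions below are illustrative stubs, not the paper's code.

import random
from dataclasses import dataclass

@dataclass
class EvalResult:
    perceptual_score: float   # explicit: human-preference-aligned video score
    task_success_rate: float  # implicit: did the video translate into actions?

def generate_video(model, instruction):
    """Stand-in for a video generation model producing frames for a task."""
    return [f"frame_{i}({instruction})" for i in range(8)]

def preference_score(video):
    """Stand-in for a Human Preference Evaluator trained on HF-Embodied,
    mapping a generated video to a scalar aligned with human judgment."""
    return random.uniform(0.0, 1.0)

def video_to_actions(video):
    """Stand-in for translating a generated video into control signals
    for the Implicit Manipulative Evaluation."""
    return ["action"] * len(video)

def run_in_env(actions, scenario):
    """Stand-in for executing actions in an embodied environment
    (Open-Ended Embodied Environment, Autonomous Driving, or Robot
    Manipulation) and reporting task success."""
    return random.random() < 0.5

def evaluate(model, instructions, scenario, episodes=10):
    scores, successes = [], 0
    for instruction in instructions:
        video = generate_video(model, instruction)
        scores.append(preference_score(video))          # explicit evaluation
        for _ in range(episodes):
            actions = video_to_actions(video)
            successes += run_in_env(actions, scenario)  # implicit evaluation
    return EvalResult(
        perceptual_score=sum(scores) / len(scores),
        task_success_rate=successes / (len(instructions) * episodes),
    )

if __name__ == "__main__":
    result = evaluate(model=None,
                      instructions=["stack the blocks"],
                      scenario="Robot Manipulation")
    print(result)
```

The key design point this sketch captures is that the two evaluations probe different failure modes: a model can produce visually convincing video (high perceptual score) whose content still cannot be executed as coherent actions (low task success rate).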

Why it matters?

This research is important because it sets a new standard for evaluating video generation models, helping to improve their ability to simulate real-world situations. By providing better benchmarks, WorldSimBench can drive innovation in AI technologies, leading to more capable systems that can assist in fields like robotics, autonomous driving, and interactive entertainment.

Abstract

Recent advancements in predictive models have demonstrated exceptional capabilities in predicting the future state of objects and scenes. However, the lack of categorization based on inherent characteristics continues to hinder the progress of predictive model development. Additionally, existing benchmarks are unable to effectively evaluate higher-capability, highly embodied predictive models from an embodied perspective. In this work, we classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench. WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks, covering three representative embodied scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. In the Explicit Perceptual Evaluation, we introduce the HF-Embodied Dataset, a video assessment dataset based on fine-grained human feedback, which we use to train a Human Preference Evaluator that aligns with human perception and explicitly assesses the visual fidelity of World Simulators. In the Implicit Manipulative Evaluation, we assess the video-action consistency of World Simulators by evaluating whether the generated situation-aware video can be accurately translated into the correct control signals in dynamic environments. Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.
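As a companion to the abstract's Explicit Perceptual Evaluation, here is a minimal sketch of how a Human Preference Evaluator could be trained on human feedback scores. It assumes HF-Embodied supplies (pooled video features, human rating) pairs; the feature dimensions, regression loss, and architecture are assumptions for illustration, not the paper's training recipe.

```python
# Minimal sketch: training a score head to align with human feedback.
# Dataset layout, features, and loss are illustrative assumptions.

import torch
import torch.nn as nn

class PreferenceEvaluator(nn.Module):
    """Maps pooled video features to a scalar quality score."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, video_feats):            # (batch, feat_dim)
        return self.head(video_feats).squeeze(-1)

def train_step(model, optimizer, feats, human_scores):
    """Regress predictions toward human ratings; a pairwise ranking
    loss (e.g., Bradley-Terry) would be a common alternative."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(feats), human_scores)
    loss.backward()
    optimizer.step()
    return loss.item()

model = PreferenceEvaluator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
feats = torch.randn(32, 512)   # stand-in pooled video features
scores = torch.rand(32)        # stand-in human ratings in [0, 1]
print(train_step(model, opt, feats, scores))
```

Once trained, such an evaluator can score new generated videos automatically, which is what lets the benchmark assess visual fidelity at scale without a human rating every clip.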