
GenieDrive: Towards Physics-Aware Driving World Model with 4D Occupancy Guided Video Generation

Zhenya Yang, Zhe Liu, Yuxiang Lu, Liping Hou, Chenxuan Miao, Siyi Peng, Bailan Feng, Xiang Bai, Hengshuang Zhao

2025-12-16


Summary

This paper introduces GenieDrive, a new system for creating realistic driving videos that follow the laws of physics. It aims to make these videos more predictable and useful for things like planning routes for self-driving cars and testing those cars in simulations.

What's the problem?

Current methods for generating driving videos using artificial intelligence often struggle to create videos that are physically realistic. They typically try to directly translate driving actions into video frames, which is difficult and can result in impossible or unnatural movements. Existing systems also create videos that aren't consistent when viewed from different angles and require a lot of computational resources.

What's the solution?

GenieDrive tackles this by first creating a detailed 3D representation of the environment and how it changes over time, called a 4D occupancy map. This map contains information about the shape and movement of objects. To make this map manageable, they compress it with a VAE (variational autoencoder) into a compact "tri-plane" latent. They also developed a 'Mutual Control Attention' technique to ensure the driving actions correctly influence how the 3D world evolves. Finally, they use another AI model with 'Normalized Multi-View Attention' to generate the actual video from multiple viewpoints, guided by the 4D occupancy map.
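To build intuition for why a tri-plane latent shrinks a dense occupancy grid, here is a minimal numpy sketch. The `triplane_encode` helper is hypothetical and stands in for the paper's learned VAE: instead of training an encoder, it simply averages the grid along each axis, which already shows how three 2D planes scale as O(N²) while the dense grid scales as O(N³).

```python
import numpy as np

def triplane_encode(occ: np.ndarray) -> dict:
    """Collapse a dense 3D occupancy grid into three axis-aligned planes.

    Toy illustration of the tri-plane idea (hypothetical helper, not the
    paper's learned VAE): each plane is the mean occupancy along one axis.
    """
    return {
        "xy": occ.mean(axis=2),  # average over z -> plane of shape (X, Y)
        "xz": occ.mean(axis=1),  # average over y -> plane of shape (X, Z)
        "yz": occ.mean(axis=0),  # average over x -> plane of shape (Y, Z)
    }

# A random binary grid standing in for one timestep of 4D occupancy.
occ = (np.random.rand(32, 32, 32) > 0.9).astype(np.float32)
planes = triplane_encode(occ)

dense_size = occ.size                               # 32**3 = 32768 values
latent_size = sum(p.size for p in planes.values())  # 3 * 32**2 = 3072 values
print(latent_size / dense_size)                     # prints 0.09375
```

The real system replaces this fixed averaging with a trained encoder and decoder so the planes preserve the 3D structure and dynamics needed for forecasting, but the storage argument is the same: the latent footprint grows with the grid's surface area rather than its volume.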

Why it matters?

This work is important because more realistic driving simulations are crucial for developing safe and reliable self-driving cars. GenieDrive improves the quality and accuracy of these simulations, making them more effective for testing and training. It also does this efficiently, with a relatively small model size and fast video generation speed, making it practical for real-world applications.

Abstract

A physics-aware driving world model is essential for drive planning, out-of-distribution data synthesis, and closed-loop evaluation. However, existing methods often rely on a single diffusion model to directly map driving actions to videos, which makes learning difficult and leads to physically inconsistent outputs. To overcome these challenges, we propose GenieDrive, a novel framework designed for physics-aware driving video generation. Our approach starts by generating 4D occupancy, which serves as a physics-informed foundation for subsequent video generation. 4D occupancy contains rich physical information, including high-resolution 3D structures and dynamics. To facilitate effective compression of such high-resolution occupancy, we propose a VAE that encodes occupancy into a latent tri-plane representation, reducing the latent size to only 58% of that used in previous methods. We further introduce Mutual Control Attention (MCA) to accurately model the influence of control on occupancy evolution, and we jointly train the VAE and the subsequent prediction module in an end-to-end manner to maximize forecasting accuracy. Together, these designs yield a 7.2% improvement in forecasting mIoU at an inference speed of 41 FPS, while using only 3.47M parameters. Additionally, a Normalized Multi-View Attention is introduced in the video generation model to generate multi-view driving videos with guidance from our 4D occupancy, significantly improving video quality with a 20.7% reduction in FVD. Experiments demonstrate that GenieDrive enables highly controllable, multi-view consistent, and physics-aware driving video generation.