DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion
Wenqiang Sun, Shuo Chen, Fangfu Liu, Zilong Chen, Yueqi Duan, Jun Zhang, Yikai Wang
2024-11-08

Summary
This paper introduces DimensionX, a new framework that allows users to create realistic 3D and 4D scenes from just a single image using advanced video diffusion techniques.
What's the problem?
Creating detailed 3D and 4D scenes from images is challenging because traditional methods struggle to accurately capture both a scene's spatial structure (3D) and its evolution over time (4D, i.e., 3D plus time). This makes it hard to generate videos that look realistic and show consistent motion over time.
What's the solution?
DimensionX uses controllable video diffusion, which decouples the spatial and temporal elements of a scene. Its core component, ST-Director, learns dimension-aware LoRA adapters so the system can generate video frames that reflect either the scene's spatial structure or how it changes over time, and then combine the two to reconstruct 3D and 4D representations. To bridge the gap between generated videos and real-world footage, it also adds a trajectory-aware mechanism for 3D generation and an identity-preserving denoising strategy for 4D generation, so the generated scenes match real-world appearances closely (a rough sketch of the dimension-aware LoRA idea follows below).
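To make the "dimension-aware LoRA" idea concrete, here is a minimal sketch, not the authors' code: two low-rank adapters share one frozen base projection of a video diffusion model, and a switch selects the spatial (camera-motion) or temporal (scene-motion) adapter at sampling time. All class names, ranks, and the switching interface are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank residual  (alpha / r) * B(A(x))  added on top of a frozen base layer."""

    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)   # A: project to low rank
        self.up = nn.Linear(rank, dim, bias=False)     # B: project back up
        nn.init.zeros_(self.up.weight)                 # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.up(self.down(x))


class DirectedProjection(nn.Module):
    """Frozen base projection plus a selectable dimension-aware adapter."""

    def __init__(self, dim: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)                # pretrained video model stays frozen
        self.adapters = nn.ModuleDict({
            "spatial": LoRAAdapter(dim),               # trained on camera-only (static-scene) clips
            "temporal": LoRAAdapter(dim),              # trained on fixed-camera (dynamic-scene) clips
        })
        self.active = "spatial"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.adapters[self.active](x)


if __name__ == "__main__":
    proj = DirectedProjection(dim=64)
    tokens = torch.randn(2, 16, 64)                    # (batch, video tokens, channels)
    proj.active = "spatial"                            # S-Director: vary viewpoint, freeze dynamics
    spatial_out = proj(tokens)
    proj.active = "temporal"                           # T-Director: vary dynamics, freeze viewpoint
    temporal_out = proj(tokens)
    print(spatial_out.shape, temporal_out.shape)
```

The design choice this illustrates is that the heavy pretrained video model is never retrained; only the small adapters are fit on dimension-variant data, which is what makes the spatial and temporal controls cheap to learn and easy to swap.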
Why it matters?
This research is significant because it opens up new possibilities for creating dynamic and immersive environments in fields like gaming, virtual reality, and film production. By enabling high-quality scene generation from just one image, DimensionX can help creators produce stunning visuals more easily and efficiently.
Abstract
In this paper, we introduce DimensionX, a framework designed to generate photorealistic 3D and 4D scenes from just a single image with video diffusion. Our approach begins with the insight that both the spatial structure of a 3D scene and the temporal evolution of a 4D scene can be effectively represented through sequences of video frames. While recent video diffusion models have shown remarkable success in producing vivid visuals, they face limitations in directly recovering 3D/4D scenes due to limited spatial and temporal controllability during generation. To overcome this, we propose ST-Director, which decouples spatial and temporal factors in video diffusion by learning dimension-aware LoRAs from dimension-variant data. This controllable video diffusion approach enables precise manipulation of spatial structure and temporal dynamics, allowing us to reconstruct both 3D and 4D representations from sequential frames with the combination of spatial and temporal dimensions. Additionally, to bridge the gap between generated videos and real-world scenes, we introduce a trajectory-aware mechanism for 3D generation and an identity-preserving denoising strategy for 4D generation. Extensive experiments on various real-world and synthetic datasets demonstrate that DimensionX achieves superior results in controllable video generation, as well as in 3D and 4D scene generation, compared with previous methods.
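Since the abstract mentions combining the spatial and temporal dimensions during generation, here is a hedged, self-contained sketch of one plausible way to schedule the two directors across a denoising loop: early steps run the spatial adapter to settle scene layout, later steps run the temporal adapter to add motion. The toy denoiser, the switch step, and the update rule are all assumptions for illustration, not the paper's exact hybrid strategy.

```python
import torch
import torch.nn as nn


class ToyDenoiser(nn.Module):
    """Stand-in for a video diffusion backbone with switchable dimension-aware adapters."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.loras = nn.ModuleDict({"spatial": nn.Linear(dim, dim, bias=False),
                                    "temporal": nn.Linear(dim, dim, bias=False)})
        self.active = "spatial"

    def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
        return self.base(x) + self.loras[self.active](x)


def denoise_with_directors(denoiser: ToyDenoiser, latents: torch.Tensor,
                           num_steps: int = 50, switch_step: int = 15) -> torch.Tensor:
    for i in range(num_steps):
        # First `switch_step` steps: S-Director shapes the spatial layout;
        # remaining steps: T-Director refines the temporal dynamics.
        denoiser.active = "spatial" if i < switch_step else "temporal"
        noise_pred = denoiser(latents, i)
        latents = latents - noise_pred / num_steps      # stand-in for a real sampler update
    return latents


if __name__ == "__main__":
    out = denoise_with_directors(ToyDenoiser(), torch.randn(1, 16, 64))
    print(out.shape)
```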