GenXD: Generating Any 3D and 4D Scenes

Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, Lijuan Wang

2024-11-05

Summary

This paper introduces GenXD, a system designed to generate realistic 3D and 4D scenes. It is trained on a large dataset that captures how cameras and objects move, allowing it to create both static 3D views and dynamic environments that evolve over time.

What's the problem?

Creating 3D and 4D scenes (3D scenes that change over time) is challenging because there is not enough high-quality 4D data available, and existing models struggle to generate these types of scenes effectively. This makes it hard to produce realistic animations or simulations for real-world applications.

What's the solution?

To solve this problem, the authors developed a data curation pipeline that extracts camera poses and object motion strength from ordinary videos, and used it to build a large-scale real-world 4D dataset called CamVid-30K. Training on both 3D and 4D data, they built GenXD, which uses multiview-temporal modules to disentangle camera movement from object movement, allowing it to generate both static 3D views and dynamic 4D scenes. GenXD also uses masked latent conditions so it can work with any number of input views, and it can produce videos that follow a given camera path while maintaining consistent 3D structure.
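The masked latent conditioning idea can be sketched roughly as follows. This is an illustrative toy example, not the paper's code: the function name, the zero-fill choice, and the extra mask channel are assumptions about how a variable number of conditioning views might be encoded so that one model handles single-image, multi-view, and no-condition cases uniformly.

```python
import numpy as np

def masked_latent_condition(latents, cond_indices):
    """Toy sketch of masked latent conditioning (assumed mechanics).

    latents: array of shape (n_views, channels, h, w), one latent per view.
    cond_indices: which views are given as conditions. Those views keep
    their latents; all others are zeroed out, and a binary mask channel
    records which views were conditions, so the same model can accept
    any number of conditioning views.
    """
    n, c, h, w = latents.shape
    mask = np.zeros((n, 1, h, w), dtype=latents.dtype)
    mask[list(cond_indices)] = 1.0
    cond = latents * mask                      # zero out non-condition views
    return np.concatenate([cond, mask], axis=1)  # shape (n, c + 1, h, w)

# Example: 4 target views, conditioning on a single input view (view 0)
lat = np.random.randn(4, 8, 16, 16).astype(np.float32)
out = masked_latent_condition(lat, [0])
```

Under this sketch, changing how many views appear in `cond_indices` is all that distinguishes single-view from multi-view generation, which is the flexibility the masked-condition design is meant to provide.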

Why it matters?

This research is significant because it simplifies the creation of high-quality 3D and 4D scenes, making it easier for developers in fields like gaming, film, and virtual reality to produce engaging content. By generating these scenes automatically, without complex manual modeling, GenXD could substantially change how dynamic visual content is created.

Abstract

Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the lack of large-scale 4D data and effective model design. In this paper, we propose to jointly investigate general 3D and 4D generation by leveraging camera and object movements commonly observed in daily life. Due to the lack of real-world 4D data in the community, we first propose a data curation pipeline to obtain camera poses and object motion strength from videos. Based on this pipeline, we introduce a large-scale real-world 4D scene dataset: CamVid-30K. By leveraging all the 3D and 4D data, we develop our framework, GenXD, which allows us to produce any 3D or 4D scene. We propose multiview-temporal modules, which disentangle camera and object movements, to seamlessly learn from both 3D and 4D data. Additionally, GenXD employs masked latent conditions to support a variety of conditioning views. GenXD can generate videos that follow the camera trajectory as well as consistent 3D views that can be lifted into 3D representations. We perform extensive evaluations across various real-world and synthetic datasets, demonstrating GenXD's effectiveness and versatility compared to previous methods in 3D and 4D generation.