PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models
Minghao Chen, Roman Shapovalov, Iro Laina, Tom Monnier, Jianyuan Wang, David Novotny, Andrea Vedaldi
2024-12-25

Summary
This paper introduces PartGen, a method for generating and reconstructing 3D objects as collections of meaningful parts, making the resulting assets easier to manipulate and edit.
What's the problem?
Current text- and image-to-3D generators can produce high-quality 3D shapes, but these shapes typically come as single, fused forms without any useful internal structure. This makes them hard to use in applications that need to manipulate individual parts of an object, such as editing or customization.
What's the solution?
To solve this problem, the authors developed PartGen, which generates 3D objects as sets of distinct parts. A first multi-view diffusion model analyzes several views of the object and produces a segmentation into parts that is consistent across views. A second multi-view diffusion model then completes each part, filling in regions hidden by the rest of the object, and a reconstruction network turns the completed views into a 3D model of that part. Because the completion step is conditioned on the whole object, the parts fit together cohesively while remaining independently manipulable.
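For readers who think in code, here is a minimal sketch of that three-stage pipeline. The function names (`segmentation_diffusion`, `completion_diffusion`, `reconstruction_network`) and their interfaces are assumptions for illustration only; they do not correspond to a released PartGen API.

```python
from typing import Any, Callable, List, Sequence

def partgen_pipeline(
    views: Sequence[Any],              # multi-view images of one object
    segmentation_diffusion: Callable,  # stage 1: views -> per-part masks (hypothetical)
    completion_diffusion: Callable,    # stage 2: (views, masks) -> completed part views (hypothetical)
    reconstruction_network: Callable,  # stage 3: completed views -> 3D part (hypothetical)
) -> List[Any]:
    """Hypothetical sketch of PartGen's part-aware generation pipeline."""
    # Stage 1: a multi-view diffusion model proposes a view-consistent
    # segmentation, yielding one set of masks per part.
    per_part_masks = segmentation_diffusion(views)

    parts_3d = []
    for masks in per_part_masks:
        # Stage 2: a second multi-view diffusion model completes this part,
        # filling in regions occluded by the rest of the object. Conditioning
        # on the full-object views keeps the parts mutually consistent.
        completed_views = completion_diffusion(views, masks)

        # Stage 3: a feed-forward reconstruction network lifts the completed
        # views of the part into a 3D representation.
        parts_3d.append(reconstruction_network(completed_views))

    # The object is the union of its parts, each editable on its own.
    return parts_3d
```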
Why it matters?
This research is important because it enhances how we create and interact with 3D objects in various fields such as gaming, virtual reality, and product design. By allowing for detailed part-level manipulation, PartGen can improve workflows in creative industries and make it easier for designers to customize and edit 3D assets.
Abstract
Text- or image-to-3D generators and 3D scanners can now produce 3D assets with high-quality shapes and textures. These assets typically consist of a single, fused representation, like an implicit neural field, a Gaussian mixture, or a mesh, without any useful structure. However, most applications and creative workflows require assets to be made of several meaningful parts that can be manipulated independently. To address this gap, we introduce PartGen, a novel approach that generates 3D objects composed of meaningful parts starting from text, an image, or an unstructured 3D object. First, given multiple views of a 3D object, generated or rendered, a multi-view diffusion model extracts a set of plausible and view-consistent part segmentations, dividing the object into parts. Then, a second multi-view diffusion model takes each part separately, fills in the occlusions, and uses those completed views for 3D reconstruction by feeding them to a 3D reconstruction network. This completion process considers the context of the entire object to ensure that the parts integrate cohesively. The generative completion model can make up for the information missing due to occlusions; in extreme cases, it can hallucinate entirely invisible parts based on the input 3D asset. We evaluate our method on generated and real 3D assets and show that it outperforms segmentation and part-extraction baselines by a large margin. We also showcase downstream applications such as 3D part editing.
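The abstract also mentions 3D part editing as a downstream application. The following is a speculative sketch of how such editing could be built on top of the pipeline above; the helper names (`render_views`, `text_conditioned_completion`) are hypothetical and not taken from the paper.

```python
from typing import Any, Callable, List

def edit_part(
    parts_3d: List[Any],                    # parts produced by the pipeline above
    index: int,                             # which part to replace
    prompt: str,                            # text describing the edited part
    render_views: Callable,                 # renders 3D parts to multi-view images (hypothetical)
    text_conditioned_completion: Callable,  # diffusion: (prompt, context views) -> part views (hypothetical)
    reconstruction_network: Callable,       # completed views -> 3D part (hypothetical)
) -> List[Any]:
    """Speculative sketch of text-driven part editing with PartGen-style models."""
    # Render the object without the part being edited; this serves as
    # context so the new part integrates with the rest of the object.
    context_views = render_views([p for i, p in enumerate(parts_3d) if i != index])

    # Generate multi-view images of the edited part, conditioned on both
    # the text prompt and the context of the remaining parts.
    new_part_views = text_conditioned_completion(prompt, context_views)

    # Reconstruct the edited part in 3D and swap it into the object.
    edited = list(parts_3d)
    edited[index] = reconstruction_network(new_part_views)
    return edited
```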