FullPart: Generating each 3D Part at Full Resolution
Lihe Ding, Shaocong Dong, Yaokun Li, Chenjian Gao, Xiao Chen, Rui Han, Yihao Kuang, Hong Zhang, Bo Huang, Zhanpeng Huang, Zibin Wang, Dan Xu, Tianfan Xue
2025-10-31
Summary
This paper introduces a new method, called FullPart, for building detailed 3D models out of individual parts. The idea is to construct a complex 3D object in two stages: first figure out where each part goes, then generate every part at full resolution so that even the small ones come out with plenty of detail.
What's the problem?
Existing methods for generating 3D parts have weaknesses. Some represent each part with a compact, implicit code that doesn't capture enough geometry, so the final model looks rough. Others place all parts inside a single shared grid of 3D space (voxels), which means smaller parts are squeezed into only a handful of voxels and come out with poor quality. Essentially, it's hard to balance detail and efficiency when building 3D objects from parts.
What's the solution?
FullPart solves this by using a two-step process. First, it uses a technique called diffusion to quickly determine the overall arrangement and size of the parts using simple 'box' shapes. Then, it creates each individual part in its *own* high-resolution voxel grid, giving even small parts enough space for fine details. To make sure the parts fit together logically despite being different sizes, they also developed a way to encode the center point of each part. Finally, they created a large dataset of 3D parts, called PartVerse-XL, to help train and test their method.
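To make the second idea concrete, here is a minimal sketch of what "its own high-resolution voxel grid" plus a center-point encoding could look like. This is an illustration of the general technique, not the paper's actual implementation; the grid size, function names, and the choice of a simple occupancy grid are all assumptions for the example.

```python
import numpy as np

GRID = 64  # assumed per-part resolution for illustration; not the paper's value


def voxelize_part(points, box_min, box_max, grid=GRID):
    """Rescale one part's point samples into its own full-resolution grid.

    Every part, no matter its real size, is stretched to fill the same
    grid^3 volume, so a tiny screw gets as many voxels as a large panel.
    """
    extent = np.maximum(box_max - box_min, 1e-8)  # avoid division by zero
    # Map coordinates from the part's bounding box onto [0, grid - 1]
    idx = ((points - box_min) / extent * (grid - 1)).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=bool)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True  # mark occupied cells
    return vox


def center_point_code(box_min, box_max, scene_min, scene_max):
    """Encode the part's center in normalized whole-object coordinates.

    Rescaling a part to its own grid discards where it sits in the object;
    this small code retains that global position so parts stay aligned.
    """
    center = (box_min + box_max) / 2.0
    return (center - scene_min) / np.maximum(scene_max - scene_min, 1e-8)
```

The key point the sketch captures is the trade-off the paper describes: per-part grids maximize detail but lose global placement, which the center-point encoding (here, just a normalized 3D position) puts back.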
Why it matters?
This work is important because it allows for the creation of much more detailed and realistic 3D models from parts than previously possible. This has applications in many areas, like creating virtual objects for games, designing products, or even robotics. The new dataset they created will also help other researchers improve 3D part generation in the future.
Abstract
Part-based 3D generation holds great potential for various applications. Previous part generators that represent parts using implicit vector-set tokens often suffer from insufficient geometric details. Another line of work adopts an explicit voxel representation but shares a global voxel grid among all parts; this often causes small parts to occupy too few voxels, leading to degraded quality. In this paper, we propose FullPart, a novel framework that combines both implicit and explicit paradigms. It first derives the bounding box layout through an implicit box vector-set diffusion process, a task that implicit diffusion handles effectively since box tokens contain little geometric detail. Then, it generates detailed parts, each within its own fixed full-resolution voxel grid. Instead of sharing a global low-resolution space, each part in our method - even small ones - is generated at full resolution, enabling the synthesis of intricate details. We further introduce a center-point encoding strategy to address the misalignment issue when exchanging information between parts of different actual sizes, thereby maintaining global coherence. Moreover, to tackle the scarcity of reliable part data, we present PartVerse-XL, the largest human-annotated 3D part dataset to date, with 40K objects and 320K parts. Extensive experiments demonstrate that FullPart achieves state-of-the-art results in 3D part generation. We will release all code, data, and models to benefit future research in 3D part generation.