SegviGen: Repurposing 3D Generative Model for Part Segmentation

Lin Li, Haoran Feng, Zehuan Huang, Haohua Chen, Wenbo Nie, Shaohua Hou, Keqing Fan, Pan Hu, Sheng Wang, Buyu Li, Lu Sheng

2026-03-18

Summary

This paper introduces SegviGen, a new method for identifying different parts of 3D objects, like separating the wheels from the body of a car in a 3D model.

What's the problem?

Currently, accurately segmenting parts in 3D models is difficult. Existing methods either try to take information learned from 2D images and apply it to 3D, which can lead to inaccuracies and blurry results, or they require a huge amount of manually labeled 3D data to train effectively, which is expensive and time-consuming to create.

What's the solution?

SegviGen solves this by reusing existing 3D models that are good at *creating* 3D shapes. Instead of training a new system from scratch, it repurposes these models to *recognize* parts: each part of the object is painted a distinct color, making the parts easy to tell apart. The system can work interactively, letting a user guide the segmentation, or fully automatically, and it can even use 2D images as hints.
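The "paint each part a distinct color" idea can be illustrated with a toy sketch: if every part is assigned its own palette color, then noisy per-point (or per-voxel) color predictions can be decoded back into part labels by nearest-color matching. The palette, part names, and the `nearest_part` helper below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hypothetical palette: one distinct RGB color per part (illustrative only,
# not taken from SegviGen).
PALETTE = {
    0: (255, 0, 0),   # e.g. "wheels"
    1: (0, 255, 0),   # e.g. "body"
    2: (0, 0, 255),   # e.g. "windows"
}

def nearest_part(color):
    """Return the part id whose palette color is closest (squared
    Euclidean distance) to the predicted color."""
    return min(
        PALETTE,
        key=lambda pid: sum((c - p) ** 2 for c, p in zip(color, PALETTE[pid])),
    )

# Slightly noisy color predictions for four points of a shape.
preds = [(250, 10, 5), (3, 240, 12), (10, 8, 255), (240, 20, 30)]
print([nearest_part(c) for c in preds])  # [0, 1, 2, 0]
```

The point of the sketch is only that colorization gives a simple, decodable segmentation signal; the actual method predicts part-indicative colors with a pretrained 3D generative model rather than a fixed lookup.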

Why it matters?

This work is important because it significantly improves 3D part segmentation accuracy, beating previous methods by up to 40% on interactive segmentation and 15% on full segmentation, while needing far less labeled data (only about 0.32% of what other methods require). That means you can get good results without spending a ton of time and resources on labeling 3D data, and it shows that pretrained 3D generative models can be useful well beyond generation.

Abstract

We introduce SegviGen, a framework that repurposes native 3D generative models for 3D part segmentation. Existing pipelines either lift strong 2D priors into 3D via distillation or multi-view mask aggregation, often suffering from cross-view inconsistency and blurred boundaries, or explore native 3D discriminative segmentation, which typically requires large-scale annotated 3D data and substantial training resources. In contrast, SegviGen leverages the structured priors encoded in pretrained 3D generative models to induce segmentation through distinctive part colorization, establishing a novel and efficient framework for part segmentation. Specifically, SegviGen encodes a 3D asset and predicts part-indicative colors on active voxels of a geometry-aligned reconstruction. It supports interactive part segmentation, full segmentation, and full segmentation with 2D guidance in a unified framework. Extensive experiments show that SegviGen improves over the prior state of the art by 40% on interactive part segmentation and by 15% on full segmentation, while using only 0.32% of the labeled training data. This demonstrates that pretrained 3D generative priors transfer effectively to 3D part segmentation, enabling strong performance with limited supervision. See our project page at https://fenghora.github.io/SegviGen-Page/.