ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians
Yufei Liu, Junshu Tang, Chu Zheng, Shijie Zhang, Jinkun Hao, Junwei Zhu, Dongjin Huang
2024-06-25

Summary
This paper introduces ClotheDreamer, a system that generates high-quality 3D garment models from nothing more than a text description. It represents clothing with 3D Gaussians and optimizes them under diffusion guidance, producing realistic garments that can be used for digital avatars and virtual try-on.
What's the problem?
Creating 3D clothing models is difficult and time-consuming with traditional pipelines, which demand specialized design skills and software. Existing text-to-3D approaches often entangle the garment with the human body, or produce clothing that fits poorly and looks unrealistic when worn by digital avatars.
What's the solution?
ClotheDreamer addresses these challenges with a representation called Disentangled Clothe Gaussian Splatting (DCGS), which models the clothed avatar as a single set of Gaussian splats while keeping the body splats frozen, so only the garment is optimized. Treating the garment and the body separately makes it easier to create diverse, wearable clothing from text descriptions. The system also uses bidirectional Score Distillation Sampling (SDS), supervising renderings of both the clothed avatar and the garment alone, and adds a pruning strategy to better handle loose clothing. Users describe a garment in words, and ClotheDreamer produces a realistic 3D garment that can be animated and tried on virtually.
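To make the disentangled optimization concrete, here is a minimal sketch of what such a training loop could look like. Everything below is an illustration rather than the paper's implementation: `render_rgbd` stands in for a differentiable Gaussian splatting rasterizer, `sds_grad` stands in for the gradient a pose-conditioned diffusion model would supply, and the prompts, tensor shapes, and hyperparameters are made up. What it does mirror from the paper's description is that the body splats are frozen while only the garment splats receive gradients, and that two SDS signals supervise the clothed-avatar rendering and the garment-only rendering.

```python
import torch

# --- Hypothetical stand-ins (NOT the paper's code) ---------------------------
def render_rgbd(means, colors, opac, pose):
    """Toy differentiable 'renderer': collapses splats into a 4-vector
    (mean RGB + mean depth). A real pipeline would rasterize a full RGBD
    image with a Gaussian splatting renderer conditioned on camera/pose."""
    w = opac / opac.sum()
    rgb = (colors * w).sum(dim=0)           # (3,)
    depth = (means[:, 2:] * w).sum(dim=0)   # (1,)  pose is unused in this stub
    return torch.cat([rgb, depth])

def sds_grad(rgbd, prompt, pose):
    """Stub for the SDS gradient. In practice this comes from a pretrained
    diffusion model: w(t) * (eps_pred(noisy_render, prompt, pose, t) - eps)."""
    return torch.randn_like(rgbd)

def sample_pose():
    return torch.rand(3)  # placeholder pose condition

# --- DCGS-style setup: one Gaussian model, body splats frozen ----------------
N_BODY, N_GARMENT = 1024, 2048
body = {"means":  torch.randn(N_BODY, 3),   # requires_grad stays False: frozen
        "colors": torch.rand(N_BODY, 3),
        "opac":   torch.rand(N_BODY, 1)}

garment = {"means":  torch.randn(N_GARMENT, 3, requires_grad=True),
           "colors": torch.rand(N_GARMENT, 3, requires_grad=True),
           "opac":   torch.rand(N_GARMENT, 1, requires_grad=True)}

opt = torch.optim.Adam(garment.values(), lr=1e-2)

for step in range(100):
    pose = sample_pose()
    # Clothed-avatar rendering: frozen body splats + trainable garment splats.
    avatar = render_rgbd(torch.cat([body["means"],  garment["means"]]),
                         torch.cat([body["colors"], garment["colors"]]),
                         torch.cat([body["opac"],   garment["opac"]]), pose)
    # Garment-only rendering for the second SDS direction.
    cloth = render_rgbd(garment["means"], garment["colors"],
                        garment["opac"], pose)

    # Standard SDS trick: pairing each render with a detached gradient makes
    # backward() inject that gradient directly into the garment parameters.
    loss = (avatar * sds_grad(avatar, "a person wearing a coat", pose).detach()).sum() \
         + (cloth  * sds_grad(cloth,  "a coat",                  pose).detach()).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only the garment tensors are registered with the optimizer and require gradients, the body geometry stays fixed throughout: the garment is learned as a separate, reusable asset rather than baked into the avatar.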
Why it matters?
This research is significant because it makes it much easier for anyone, including those without design experience, to create custom clothing for digital use. By improving how 3D garments are generated, ClotheDreamer opens up new possibilities for fashion design, gaming, and virtual reality applications, allowing for more personalized and interactive experiences.
Abstract
High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have enabled new possibilities but either intricately couple the garment with the human body or are difficult to reuse. We introduce ClotheDreamer, a 3D-Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization. DCGS represents the clothed avatar as one Gaussian model but freezes the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to supervise the clothed-avatar and garment RGBD renderings respectively with pose conditions, and propose a new pruning strategy for loose clothing. Our approach can also take custom clothing templates as input. Benefiting from our design, the synthetic 3D garments can be easily applied to virtual try-on and support physically accurate animation. Extensive experiments showcase our method's superior or competitive performance. Our project page is at https://ggxxii.github.io/clothedreamer.
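The abstract mentions a pruning strategy for loose clothing but does not spell out the rule, so the sketch below shows one plausible, generic form of garment-splat pruning; it is an illustration, not the paper's method. The idea: periodically drop garment Gaussians that are nearly transparent or buried inside the body, keeping a loose garment clean as it grows away from the body surface. The helper name `prune_garment_splats` and both thresholds are assumptions, and `garment`/`body` follow the dictionary layout from the previous sketch.

```python
import torch

def prune_garment_splats(garment, body_means,
                         min_opacity=0.02, min_dist=0.01):
    """Generic garment-splat pruning (illustrative, not the paper's rule):
    keep splats that are (a) sufficiently opaque and (b) not buried inside
    the body, measured by distance to the nearest frozen body splat."""
    opac = garment["opac"].squeeze(-1)                              # (N,)
    nearest = torch.cdist(garment["means"], body_means).min(dim=1).values
    keep = (opac > min_opacity) & (nearest > min_dist)
    # Re-create leaf tensors so a fresh optimizer can track the survivors.
    return {k: v[keep].detach().clone().requires_grad_(True)
            for k, v in garment.items()}

# Usage inside the training loop (e.g., every few hundred steps):
# garment = prune_garment_splats(garment, body["means"])
# opt = torch.optim.Adam(garment.values(), lr=1e-2)  # rebuild after pruning
```

Note that pruning changes the parameter tensors, so the optimizer must be rebuilt afterward; this is why the usage example re-creates it.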