
AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models

Xinghui Li, Qichao Sun, Pengze Zhang, Fulong Ye, Zhichao Liao, Wanquan Feng, Songtao Zhao, Qian He

2024-12-06


Summary

This paper introduces AnyDressing, a method built on latent diffusion models that dresses characters in any combination of garments while following personalized text prompts.

What's the problem?

Existing methods for generating images of characters in different outfits struggle to handle arbitrary combinations of clothing and often fail to preserve each garment's details while staying faithful to the text prompt. This makes it difficult to create realistic and diverse virtual dressing results, limiting their usefulness in fashion applications.

What's the solution?

AnyDressing tackles this with two networks: GarmentsNet and DressingNet. GarmentsNet extracts detailed features from each garment separately (and in parallel), so individual pieces of clothing are accurately represented without being confused with one another. DressingNet then combines these features to generate the dressed character, using a dedicated Dressing-Attention mechanism together with an instance-level localization strategy so that each garment is injected into the correct region of the image. The result is high-quality images that stay faithful to the text prompt, and an additional texture-learning strategy further sharpens fine garment details.
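
To make the two-network idea concrete, here is a minimal PyTorch sketch of how garments could be encoded in parallel and then injected into image latents through cross-attention. This is an illustration only, based on the description above: the class names, layer sizes, and the simple concatenation of all garment tokens into one attention context are assumptions, not the paper's actual implementation (which also learns where each garment belongs on the body).

```python
import torch
import torch.nn as nn

class GarmentEncoder(nn.Module):
    """Toy stand-in for GarmentsNet: encodes every garment image independently."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=8, stride=8),  # 64x64 image -> 8x8 grid of tokens
            nn.GELU(),
        )

    def forward(self, garments):
        # garments: (batch, num_garments, 3, H, W)
        b, n, c, h, w = garments.shape
        feats = self.conv(garments.reshape(b * n, c, h, w))  # one batched pass over all garments
        feats = feats.flatten(2).transpose(1, 2)             # (b*n, tokens, dim)
        return feats.reshape(b, n, -1, feats.shape[-1])      # (b, num_garments, tokens, dim)


class DressingAttention(nn.Module):
    """Toy cross-attention that injects garment tokens into the image latents."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, latent_tokens, garment_tokens):
        # latent_tokens: (b, L, dim); garment_tokens: (b, n, T, dim)
        b, n, t, d = garment_tokens.shape
        context = garment_tokens.reshape(b, n * t, d)        # all garments as one attention context
        out, _ = self.attn(latent_tokens, context, context)
        return latent_tokens + out                           # residual injection into the latents


# Usage: dress one character latent with two garments.
encoder, dressing = GarmentEncoder(), DressingAttention()
garments = torch.randn(1, 2, 3, 64, 64)   # (batch, garments, channels, H, W)
latents = torch.randn(1, 256, 64)         # (batch, latent tokens, dim)
print(dressing(latents, encoder(garments)).shape)  # torch.Size([1, 256, 64])
```

Encoding all garments in one batched pass mirrors the paper's point that per-garment features can be extracted in parallel, keeping the network efficient while avoiding garment confusion.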

Why it matters?

This research is important because it enhances the way virtual dressing rooms work, making them more versatile and user-friendly. By allowing users to customize outfits easily and accurately, AnyDressing can improve online shopping experiences, help designers visualize their creations, and support various applications in fashion technology.

Abstract

Recent advances in garment-centric image generation from text and image prompts based on diffusion models are impressive. However, existing methods lack support for various combinations of attire, and struggle to preserve the garment details while maintaining faithfulness to the text prompts, limiting their performance across diverse scenarios. In this paper, we focus on a new task, i.e., Multi-Garment Virtual Dressing, and we propose a novel AnyDressing method for customizing characters conditioned on any combination of garments and any personalized text prompts. AnyDressing comprises two primary networks named GarmentsNet and DressingNet, which are respectively dedicated to extracting detailed clothing features and generating customized images. Specifically, we propose an efficient and scalable module called Garment-Specific Feature Extractor in GarmentsNet to individually encode garment textures in parallel. This design prevents garment confusion while ensuring network efficiency. Meanwhile, we design an adaptive Dressing-Attention mechanism and a novel Instance-Level Garment Localization Learning strategy in DressingNet to accurately inject multi-garment features into their corresponding regions. This approach efficiently integrates multi-garment texture cues into generated images and further enhances text-image consistency. Additionally, we introduce a Garment-Enhanced Texture Learning strategy to improve the fine-grained texture details of garments. Thanks to our well-crafted design, AnyDressing can serve as a plug-in module to easily integrate with any community control extensions for diffusion models, improving the diversity and controllability of synthesized images. Extensive experiments show that AnyDressing achieves state-of-the-art results.
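
The abstract's idea of injecting each garment's features only into its corresponding region can be illustrated with a small masked cross-attention sketch. This is a hedged approximation: the paper's Instance-Level Garment Localization Learning strategy learns where each garment belongs during training, whereas the sketch below simply takes hand-specified region masks as input, and the function name and tensor shapes are invented for illustration.

```python
import torch
import torch.nn.functional as F

def localized_garment_attention(latents, garment_tokens, region_masks):
    """Masked cross-attention: each garment updates only its own region of the latents.

    latents:        (L, d)    flattened image latent tokens
    garment_tokens: (n, T, d) token features for n garments
    region_masks:   (n, L)    1 where a garment's region covers a latent token, else 0
    """
    n, t, d = garment_tokens.shape
    out = torch.zeros_like(latents)
    for g in range(n):                                          # handle one garment at a time
        scores = latents @ garment_tokens[g].T * d ** -0.5      # (L, T) scaled dot-product scores
        update = F.softmax(scores, dim=-1) @ garment_tokens[g]  # (L, d) attended garment features
        out += region_masks[g].unsqueeze(-1) * update           # keep only this garment's region
    return latents + out                                        # residual injection


# Usage: a shirt limited to the top half of an 8x8 latent grid, trousers to the bottom half.
latents = torch.randn(64, 32)
garments = torch.randn(2, 16, 32)
masks = torch.zeros(2, 64)
masks[0, :32] = 1.0   # shirt region
masks[1, 32:] = 1.0   # trousers region
print(localized_garment_attention(latents, garments, masks).shape)  # torch.Size([64, 32])
```

Restricting each garment's attention update to its own region is what keeps, for example, a shirt's texture from leaking into the trousers area of the generated image, which is the failure mode the localization strategy is designed to prevent.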