FashionComposer: Compositional Fashion Image Generation
Sihui Ji, Yiyang Wang, Xi Chen, Xiaogang Xu, Hao Luo, Hengshuang Zhao
2024-12-19

Summary
This paper introduces FashionComposer, a system that generates personalized fashion images by combining different kinds of input, such as text descriptions, parametric human models, and images of garments and faces.
What's the problem?
Creating fashion images is difficult because it requires controlling many details at once: the person's appearance, pose, and figure, as well as the garments they wear. Existing methods often accept only a narrow set of inputs or handle a single reference at a time, which limits customization and makes it hard to generate unique, diverse fashion designs.
What's the solution?
FashionComposer addresses this by accepting several types of input in a single pass. Users can provide a text prompt, a parametric human model describing body shape and pose, and reference images of garments and faces. The system arranges these references together into a single "asset library" image and extracts their appearance features with a reference UNet, then applies subject-binding attention so that the features from each reference end up in the correct region of the final image (a sketch of the asset-library idea follows below). This way, users can easily create detailed, personalized fashion images.
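To illustrate the "asset library" step, here is a minimal sketch (not the authors' code) of how several reference images could be packed into one composite image for a reference UNet to encode; the tile size and layout are assumptions for illustration.

from PIL import Image

def build_asset_library(references, tile=(512, 512)):
    """Paste each reference image side by side on a single white canvas.

    references: list of PIL.Image objects (e.g. [face, top, skirt]).
    Returns one composite image holding all assets, row-wise.
    """
    w, h = tile
    canvas = Image.new("RGB", (w * len(references), h), "white")
    for i, ref in enumerate(references):
        canvas.paste(ref.convert("RGB").resize(tile), (i * w, 0))
    return canvas

# Usage (hypothetical file names): the composite would be encoded by a
# reference UNet (not shown) whose features are injected into the main UNet.
# library = build_asset_library([Image.open("face.png"), Image.open("shirt.png")])
# library.save("asset_library.png")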
Why it matters?
This research matters because it makes fashion design more accessible and creative. By letting designers combine various elements seamlessly, FashionComposer helps generate unique fashion concepts quickly. This could benefit the fashion industry through faster design processes, more diverse styles, and innovative virtual try-on experiences.
Abstract
We present FashionComposer for compositional fashion image generation. Unlike previous methods, FashionComposer is highly flexible. It takes multi-modal input (i.e., text prompt, parametric human model, garment image, and face image) and supports personalizing the appearance, pose, and figure of the human and assigning multiple garments in one pass. To achieve this, we first develop a universal framework capable of handling diverse input modalities. We construct scaled training data to enhance the model's robust compositional capabilities. To accommodate multiple reference images (garments and faces) seamlessly, we organize these references in a single image as an "asset library" and employ a reference UNet to extract appearance features. To inject the appearance features into the correct pixels in the generated result, we propose subject-binding attention. It binds the appearance features from different "assets" with the corresponding text features. In this way, the model can understand each asset according to its semantics, supporting arbitrary numbers and types of reference images. As a comprehensive solution, FashionComposer also supports many other applications, such as human album generation and diverse virtual try-on tasks.
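As a rough illustration of the subject-binding idea described in the abstract, the sketch below shows one plausible way to bind each asset's pooled appearance features to the text tokens that name it before cross-attention; the tensor shapes, the pooling to one vector per asset, and the fusion by addition are assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def bind_assets_to_text(text_feats, asset_feats, token_to_asset):
    """
    text_feats:     (B, Nt, D)  per-token text features from the text encoder
    asset_feats:    (B, Na, D)  pooled appearance features, one per reference asset
    token_to_asset: (Nt,) long; asset index for each text token, -1 if unbound
    Returns text features where bound tokens also carry their asset's appearance.
    """
    bound = token_to_asset >= 0                              # (Nt,)
    gathered = asset_feats[:, token_to_asset.clamp(min=0)]   # (B, Nt, D)
    return torch.where(bound.view(1, -1, 1), text_feats + gathered, text_feats)

def cross_attention(query, context):
    """Plain cross-attention from image latents (B, Nq, D) to context (B, Nk, D)."""
    scores = torch.einsum("bqd,bkd->bqk", query, context) / context.shape[-1] ** 0.5
    return torch.einsum("bqk,bkd->bqd", F.softmax(scores, dim=-1), context)

# Usage with toy shapes: two assets, e.g. "shirt" -> asset 0, "skirt" -> asset 1.
B, Nt, Na, Nq, D = 1, 6, 2, 64, 320
text = torch.randn(B, Nt, D)
assets = torch.randn(B, Na, D)
token_to_asset = torch.tensor([-1, 0, -1, 1, -1, -1])  # which word names which asset
context = bind_assets_to_text(text, assets, token_to_asset)
out = cross_attention(torch.randn(B, Nq, D), context)   # (1, 64, 320)

The design intent this sketch tries to capture is that appearance features are routed through the text tokens that describe each asset, so the diffusion model's existing text cross-attention decides where each garment or face lands in the image.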