
FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models

Zheng Chong, Yanwei Lei, Shiyue Zhang, Zhuandi He, Zhen Wang, Xujie Zhang, Xiao Dong, Yiling Wu, Dongmei Jiang, Xiaodan Liang

2025-09-03

Summary

This paper introduces a new system called FastFit for virtual try-on technology, aiming to make it faster and more capable of handling complete outfits, not just single items.

What's the problem?

Current virtual try-on systems struggle with two main issues. First, they aren't very good at showing you what a whole outfit looks like: they can't realistically combine multiple items, such as clothes and accessories, in a single image. Second, they are slow, because they recompute features for the reference garments at every step of the generation process, which wastes computing power on work that never changes.

What's the solution?

The researchers developed FastFit, which is built on a new design called a 'cacheable diffusion architecture'. The key insight is that the features of the reference garments don't change during the try-on process, so they can be computed *once* and then reused at every step instead of being recalculated each time. To make this work, FastFit uses a Semi-Attention mechanism and replaces the usual timestep embeddings with class embeddings for reference items, all while adding almost no extra parameters. This makes FastFit about 3.5 times faster on average than comparable methods. To help others build on this work, the authors also created DressCode-MR, a large new dataset of paired clothing images covering tops, bottoms, dresses, shoes, and bags.
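The compute-once-and-reuse idea can be illustrated with a toy sketch. The function and variable names below are illustrative assumptions, not the paper's actual architecture; the point is only to contrast a naive loop, which re-encodes the reference garments at every denoising step, with a cached variant that encodes them once and reuses the result:

```python
def encode_references(ref_images, call_counter):
    """Stand-in for a heavy reference-garment encoder; counts how often it runs."""
    call_counter[0] += 1
    return [sum(img) / len(img) for img in ref_images]  # toy "features"

def denoise_step(latent, ref_features):
    """Toy denoising step conditioned on the reference features."""
    return [x * 0.9 + sum(ref_features) * 0.01 for x in latent]

def try_on_naive(latent, ref_images, steps=30):
    calls = [0]
    for _ in range(steps):
        feats = encode_references(ref_images, calls)  # recomputed every step
        latent = denoise_step(latent, feats)
    return latent, calls[0]  # encoder ran `steps` times

def try_on_cached(latent, ref_images, steps=30):
    calls = [0]
    feats = encode_references(ref_images, calls)      # computed once, cached
    for _ in range(steps):
        latent = denoise_step(latent, feats)          # losslessly reused
    return latent, calls[0]  # encoder ran once
```

Because the cached features are identical to what the naive loop would recompute, the output is unchanged; only the redundant encoder work disappears, which is where the reported speedup comes from.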

Why it matters?

This work is important because it addresses key limitations preventing virtual try-on from becoming a truly practical tool. Faster speeds and the ability to try on full outfits will make online shopping more convenient and reduce the number of returns, ultimately benefiting both consumers and retailers. The new dataset also provides a valuable resource for further research in this field.

Abstract

Despite its great potential, virtual try-on technology is hindered from real-world application by two major challenges: the inability of current methods to support multi-reference outfit compositions (including garments and accessories), and their significant inefficiency caused by the redundant re-computation of reference features in each denoising step. To address these challenges, we propose FastFit, a high-speed multi-reference virtual try-on framework based on a novel cacheable diffusion architecture. By employing a Semi-Attention mechanism and substituting traditional timestep embeddings with class embeddings for reference items, our model fully decouples reference feature encoding from the denoising process with negligible parameter overhead. This allows reference features to be computed only once and losslessly reused across all steps, fundamentally breaking the efficiency bottleneck and achieving an average 3.5x speedup over comparable methods. Furthermore, to facilitate research on complex, multi-reference virtual try-on, we introduce DressCode-MR, a new large-scale dataset. It comprises 28,179 sets of high-quality, paired images covering five key categories (tops, bottoms, dresses, shoes, and bags), constructed through a pipeline of expert models and human feedback refinement. Extensive experiments on the VITON-HD, DressCode, and our DressCode-MR datasets show that FastFit surpasses state-of-the-art methods on key fidelity metrics while offering a significant advantage in inference efficiency.