Efficient Distillation of Classifier-Free Guidance using Adapters
Cristian Perez Jensen, Seyedmorteza Sadat
2025-03-11
Summary
This paper presents a way to speed up AI image generators by teaching them to skip a slow step: a shortcut that keeps quality just as good while working twice as fast.
What's the problem?
Current AI image tools use a process that checks everything twice, which makes them slow and demands powerful computers; this is expensive and limits who can use them.
What's the solution?
The new method adds small, efficient parts (called adapters) to the AI that learn to predict the double-check step in one go, cutting the work in half without losing quality.
Why does it matter?
This makes AI image tools faster and cheaper to run, so more people can use them for creative projects, design, or research without needing supercomputers.
Abstract
While classifier-free guidance (CFG) is essential for conditional diffusion models, it doubles the number of neural function evaluations (NFEs) per inference step. To mitigate this inefficiency, we introduce adapter guidance distillation (AGD), a novel approach that simulates CFG in a single forward pass. AGD leverages lightweight adapters to approximate CFG, effectively doubling the sampling speed while maintaining or even improving sample quality. Unlike prior guidance distillation methods that tune the entire model, AGD keeps the base model frozen and only trains minimal additional parameters (~2%) to significantly reduce the resource requirement of the distillation phase. Additionally, this approach preserves the original model weights and enables the adapters to be seamlessly combined with other checkpoints derived from the same base model. We also address a key mismatch between training and inference in existing guidance distillation methods by training on CFG-guided trajectories instead of standard diffusion trajectories. Through extensive experiments, we show that AGD achieves comparable or superior FID to CFG across multiple architectures with only half the NFEs. Notably, our method enables the distillation of large models (~2.6B parameters) on a single consumer GPU with 24 GB of VRAM, making it more accessible than previous approaches that require multiple high-end GPUs. We will publicly release the implementation of our method.
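To make the NFE savings concrete, the sketch below contrasts standard CFG, which evaluates the denoiser twice per sampling step, with the single adapter-guided pass the abstract describes. This is a minimal illustration, not the paper's implementation: the function names, the scalar "model", and the adapter call signature are all hypothetical.

```python
def cfg_denoise(model, x, t, cond, guidance_scale):
    """Standard classifier-free guidance: two model evaluations (NFEs) per step."""
    eps_cond = model(x, t, cond)    # conditional prediction
    eps_uncond = model(x, t, None)  # unconditional prediction
    # Extrapolate from the unconditional toward the conditional prediction.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def agd_denoise(adapted_model, x, t, cond):
    """AGD-style inference (hypothetical interface): the frozen base model plus
    lightweight adapters is trained so that a single conditional forward pass
    approximates the CFG-guided output directly."""
    return adapted_model(x, t, cond)  # one NFE instead of two

# Toy denoiser standing in for a diffusion model, just to show the arithmetic:
# conditional predictions are scaled relative to unconditional ones.
def toy_model(x, t, cond):
    return 2.0 * x if cond is not None else 1.0 * x

guided = cfg_denoise(toy_model, 1.0, 0, "a photo of a cat", guidance_scale=7.5)
# 1.0 + 7.5 * (2.0 - 1.0) = 8.5
```

The distillation target, per the abstract, is the CFG-guided trajectory itself, so `agd_denoise` is trained to reproduce outputs like `guided` while touching only ~2% of the parameters.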