HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration
Yushi Huang, Zining Wang, Ruihao Gong, Jing Liu, Xinjie Zhang, Jinyang Guo, Xianglong Liu, Jun Zhang
2024-10-03

Summary
This paper introduces HarmoniCa, a learning-based feature-caching framework that speeds up image generation in Diffusion Transformers while preserving quality, by harmonizing the training and inference processes.
What's the problem?
Diffusion Transformers generate images through many sequential denoising steps, which makes inference slow and computationally expensive. Feature caching, which stores intermediate computations at one timestep and reuses them at later ones, can cut this cost, but existing learning-based caching methods optimize the caching strategy under conditions that differ from how it is actually used when generating images. These training-inference discrepancies limit both the achievable speedup and the quality of the generated images.
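To make the mechanism concrete, below is a minimal sketch of timestep feature caching in a single transformer block. Everything here, the `CachedBlock` class, the cache layout, and the alternating reuse schedule, is an illustrative assumption rather than the paper's implementation; real methods cache specific sub-layer features and decide far more carefully when to reuse them.

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Toy transformer block whose output can be reused across timesteps.

    When `reuse` is True, the attention and MLP sub-layers are skipped
    and the feature stored at an earlier timestep is returned instead.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.cache = None  # feature stored from a previous timestep

    def forward(self, x: torch.Tensor, reuse: bool) -> torch.Tensor:
        if reuse and self.cache is not None:
            return self.cache  # retrieve the redundant computation
        h = x + self.attn(x, x, x, need_weights=False)[0]
        h = h + self.mlp(h)
        self.cache = h  # store for potential reuse at later timesteps
        return h

# A hand-designed schedule: recompute on even steps, reuse on odd ones.
block = CachedBlock(dim=256)
x = torch.randn(1, 16, 256)
for t in range(10):
    x = block(x, reuse=(t % 2 == 1))
```

The core tension is visible even in this toy: every reused step saves compute but feeds a stale feature into the rest of the trajectory, which is exactly the effect that naive training never sees.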
What's the solution?
To close this gap, the authors developed HarmoniCa, a framework that aligns the training process with the inference process. It combines two techniques: Step-Wise Denoising Training (SDT), which trains over the full denoising trajectory so the model experiences the effect of cached features from earlier timesteps exactly as it will when generating images, and the Image Error Proxy-Guided Objective (IEPO), which uses an efficient proxy for the final image error to balance output quality against how aggressively cached features are reused. Together, these let the model keep image quality while still benefiting from the speedup of caching; a simplified sketch of the SDT idea follows.
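The sketch below, under my own simplifying assumptions, shows the structural idea of SDT: instead of training on one randomly sampled timestep, the loss is accumulated while unrolling the entire denoising trajectory, so a cache decision at an early step influences every later prediction, just as at inference. `TinyDenoiser`, `denoise_update`, and the fixed reuse schedule are hypothetical stand-ins; in the paper the learnable object is the caching strategy itself, while this toy trains the denoiser for brevity.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a DiT noise predictor with one cacheable feature."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Linear(dim, dim)
        self.cache = None

    def forward(self, x, t, reuse_cache):
        if reuse_cache and self.cache is not None:
            return self.cache  # reuse the stale feature, skip the compute
        out = self.net(x)
        self.cache = out
        return out

def denoise_update(x, eps, t):
    # Placeholder for one reverse-diffusion update (e.g. a DDIM step).
    return x - 0.1 * eps

def sdt_loss(student, teacher, reuse_schedule, x_T, timesteps):
    """Step-Wise Denoising Training, heavily simplified.

    The student's trajectory carries cached features forward, so a
    cache decision at step t affects every later prediction, exactly
    as it would at inference. The teacher never reuses the cache.
    """
    x_s, x_t = x_T, x_T.clone()
    loss = x_T.new_zeros(())
    for t in timesteps:
        eps_s = student(x_s, t, reuse_cache=reuse_schedule[t])
        with torch.no_grad():
            eps_t = teacher(x_t, t, reuse_cache=False)
        loss = loss + torch.mean((eps_s - eps_t) ** 2)
        # Advance both trajectories with their own predictions.
        x_s = denoise_update(x_s, eps_s, t)
        x_t = denoise_update(x_t, eps_t, t)
    return loss

# Usage: unroll a 10-step trajectory with a fixed reuse pattern.
student, teacher = TinyDenoiser(), TinyDenoiser()
teacher.load_state_dict(student.state_dict())
schedule = {t: (t % 2 == 1) for t in range(10)}
loss = sdt_loss(student, teacher, schedule, torch.randn(4, 8), range(10))
loss.backward()
```

Contrast this with the standard per-step paradigm, which draws a random timestep, builds the input from ground truth, and never exposes the model to its own cached features from earlier steps.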
Why it matters?
This research is important because it enhances the capabilities of diffusion transformers, making them faster and more efficient at creating high-quality images. By improving how these models learn and operate, HarmoniCa can lead to advancements in various applications like video game graphics, animation, and virtual reality, where high-quality visuals are essential.
Abstract
Diffusion Transformers (DiTs) have gained prominence for outstanding scalability and extraordinary performance in generative tasks. However, their considerable inference costs impede practical deployment. The feature cache mechanism, which involves storing and retrieving redundant computations across timesteps, holds promise for reducing per-step inference time in diffusion models. Most existing caching methods for DiT are manually designed. Although the learning-based approach attempts to optimize strategies adaptively, it suffers from discrepancies between training and inference, which hamper both the performance and the acceleration ratio. Upon detailed analysis, we pinpoint that these discrepancies primarily stem from two aspects: (1) Prior Timestep Disregard, where training ignores the effect of cache usage at earlier timesteps, and (2) Objective Mismatch, where the training target (aligning the predicted noise at each timestep) deviates from the goal of inference (generating a high-quality image). To alleviate these discrepancies, we propose HarmoniCa, a novel method that Harmonizes training and inference with a learning-based Caching framework built upon Step-Wise Denoising Training (SDT) and the Image Error Proxy-Guided Objective (IEPO). Compared to the traditional training paradigm, the newly proposed SDT maintains the continuity of the denoising process, enabling the model to leverage information from prior timesteps during training, similar to the way it operates during inference. Furthermore, we design IEPO, which integrates an efficient proxy mechanism to approximate the final image error caused by reusing the cached feature. IEPO therefore helps balance final image quality against cache utilization, resolving the issue of a training objective that only considers the impact of cache usage on the predicted output at each timestep.
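Since the paper's exact loss is not reproduced here, the following toy objective only illustrates the shape of an IEPO-style trade-off under my own assumptions: each per-step error is weighted by a proxy for the final-image error attributable to caching at that step, and a second term rewards cache utilization. The function name, the `alpha` weight, and the tensor shapes are all hypothetical.

```python
import torch

def iepo_style_loss(step_losses, cache_probs, image_error_proxy, alpha=0.5):
    """Toy objective in the spirit of IEPO (not the paper's exact form).

    step_losses:        per-timestep prediction errors, shape [T]
    cache_probs:        learned probabilities of reusing the cache, shape [T]
    image_error_proxy:  proxy for the final-image error caused by caching
                        at each step, shape [T], re-estimated periodically
    alpha:              trades final image quality against cache usage
    """
    # Weight each step's error by how much caching there hurts the image.
    quality_term = torch.sum(image_error_proxy * step_losses)
    # Reward reuse: higher cache probabilities mean fewer recomputed steps.
    usage_term = -torch.mean(cache_probs)
    return quality_term + alpha * usage_term

# Usage with random stand-in tensors for a 10-step schedule.
loss = iepo_style_loss(
    torch.rand(10), torch.sigmoid(torch.randn(10)), torch.rand(10)
)
```

The design point is that the weighting comes from an image-level proxy rather than from the per-step noise error alone, which is what ties the training signal back to the quality of the final generated image.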