Training Consistency Models with Variational Noise Coupling

Gianluigi Silvestri, Luca Ambrogioni, Chieh-Hsin Lai, Yuhta Takida, Yuki Mitsufuji

2025-02-28

Summary

This paper introduces a new way to improve how AI generates images with a method called Consistency Training (CT). The researchers propose a technique called Variational Noise Coupling that makes the training process more stable and effective.

What's the problem?

Consistency Training, which is used to generate images quickly and efficiently, often suffers from instability and high variance during training. This makes it difficult for the AI to consistently produce high-quality images, especially when compared to other methods like diffusion models.

What's the solution?

The researchers developed a new approach based on the Flow Matching framework. They introduced a noise-coupling scheme inspired by Variational Autoencoders (VAEs): a trained encoder emits noise that depends on the data, so the model can learn the geometry of the noise-to-data mapping instead of having it fixed by the forward process. Training with this scheme improved the quality of generated images while keeping training stable.
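To make the idea concrete, here is a minimal, purely illustrative NumPy sketch of the two ingredients described above: a VAE-style encoder that samples data-dependent noise, and a linear Flow Matching interpolation between data and that noise. All names (`encoder`, `flow_matching_interpolant`, the toy linear weights) are hypothetical and not taken from the paper's actual code; the real method at https://github.com/sony/vct trains deep networks, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """Hypothetical VAE-style noise encoder: maps data x to the mean of a
    Gaussian over noise, then draws a reparameterised sample z. This sampled
    z replaces the independent noise used in classical Consistency Training."""
    mu = np.tanh(x @ w)                      # data-dependent mean (toy linear encoder)
    z = mu + rng.standard_normal(mu.shape)   # reparameterisation: z = mu + eps
    return z, mu

def flow_matching_interpolant(x, z, t):
    """Linear Flow Matching path: x at t=0, noise z at t=1."""
    return (1.0 - t) * x + t * z

# Toy batch: 4 "images" flattened to 8 dims, with small random encoder weights.
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8)) * 0.1
z, mu = encoder(x, w)

# A point halfway along the noise-to-data path.
x_mid = flow_matching_interpolant(x, z, t=0.5)

# A KL-like penalty toward a standard normal keeps the learned coupling
# close to the original independent-noise forward process, as in a VAE.
kl = 0.5 * np.mean(mu ** 2)
```

Because the encoder ties each noise sample to its data point, the noise-to-data pairing becomes something the model learns rather than a fixed property of the forward process, which is the key difference from classical CT.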

Why it matters?

This matters because it makes image generation faster, more reliable, and less resource-intensive. The improved method outperforms previous approaches on popular datasets like CIFAR-10 and ImageNet, showing that it can generate high-quality images efficiently. This could lead to better applications in fields like art, design, and virtual reality, where quick and reliable image generation is essential.

Abstract

Consistency Training (CT) has recently emerged as a promising alternative to diffusion models, achieving competitive performance in image generation tasks. However, non-distillation consistency training often suffers from high variance and instability, and analyzing and improving its training dynamics is an active area of research. In this work, we propose a novel CT training approach based on the Flow Matching framework. Our main contribution is a trained noise-coupling scheme inspired by the architecture of Variational Autoencoders (VAE). By training a data-dependent noise emission model implemented as an encoder architecture, our method can indirectly learn the geometry of the noise-to-data mapping, which is instead fixed by the choice of the forward process in classical CT. Empirical results across diverse image datasets show significant generative improvements, with our model outperforming baselines and achieving the state-of-the-art (SoTA) non-distillation CT FID on CIFAR-10, and attaining FID on par with SoTA on ImageNet at 64×64 resolution in 2-step generation. Our code is available at https://github.com/sony/vct.