
Inductive Moment Matching

Linqi Zhou, Stefano Ermon, Jiaming Song

2025-03-12

Summary

This paper introduces Inductive Moment Matching (IMM), a new AI method that creates high-quality images in just a few steps rather than the hundreds older models need; it is like drawing a detailed picture with a handful of strokes instead of building it up stroke by stroke.

What's the problem?

Current AI image generators, such as diffusion models, need many steps to produce a detailed picture, and the usual shortcut of distilling them into few-step models is unstable and requires a lot of careful tuning.

What's the solution?

IMM trains a single model from scratch, in one stage, to match the target image distribution directly, so it can produce high-quality images in just one or a few steps without a pre-trained teacher model or a fragile distillation process.

Why it matters?

This lets artists, designers, and researchers generate professional-quality images quickly on regular computers, saving time and energy compared to older methods.

Abstract

Diffusion models and Flow Matching generate high-quality samples but are slow at inference, and distilling them into few-step models often leads to instability and extensive tuning. To resolve these trade-offs, we propose Inductive Moment Matching (IMM), a new class of generative models for one- or few-step sampling with a single-stage training procedure. Unlike distillation, IMM does not require pre-training initialization and optimization of two networks; and unlike Consistency Models, IMM guarantees distribution-level convergence and remains stable under various hyperparameters and standard model architectures. IMM surpasses diffusion models on ImageNet-256x256 with 1.99 FID using only 8 inference steps and achieves state-of-the-art 2-step FID of 1.98 on CIFAR-10 for a model trained from scratch.
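To give a concrete flavor of what matching distributions "at the moment level" can look like, here is a minimal, hypothetical sketch of a kernel-based moment-matching (MMD) training step in PyTorch. This is a generic illustration of the general idea, not the paper's Inductive Moment Matching algorithm; the toy one-step generator, the RBF bandwidth, and the batch sizes are all illustrative assumptions.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel values between the rows of x and the rows of y.
    sq_dists = torch.cdist(x, y).pow(2)
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_loss(generated, real, bandwidth=1.0):
    # Squared maximum mean discrepancy: compares the kernel "moments"
    # of the generated samples against those of the real samples.
    k_gg = rbf_kernel(generated, generated, bandwidth).mean()
    k_rr = rbf_kernel(real, real, bandwidth).mean()
    k_gr = rbf_kernel(generated, real, bandwidth).mean()
    return k_gg + k_rr - 2.0 * k_gr

# Toy one-step "generator": maps noise straight to 2-D samples (illustrative only).
generator = torch.nn.Linear(16, 2)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

real_batch = torch.randn(256, 2) * 0.5 + 2.0   # stand-in for a batch of training data
noise = torch.randn(256, 16)

loss = mmd_loss(generator(noise), real_batch)   # pull generated samples toward the data
loss.backward()
optimizer.step()
print(float(loss))
```

Per the abstract, IMM goes beyond this plain moment-matching setup: it trains in a single stage, needs no pre-trained teacher network, guarantees distribution-level convergence, and supports one- or few-step sampling.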