pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation

Hansheng Chen, Kai Zhang, Hao Tan, Leonidas Guibas, Gordon Wetzstein, Sai Bi

2025-10-17

Summary

This paper introduces pi-Flow, a more efficient way to train few-step image generation models that improves both the quality and the diversity of the images they produce.

What's the problem?

Current methods train a 'student' model to mimic a 'teacher' model that is already good at creating images. However, the two predict different things: the teacher predicts a velocity (the direction and speed of change at each denoising step), while the student predicts a shortcut straight toward the finished image. This format mismatch makes training difficult and often forces a trade-off between diversity and quality. Essentially, you could get lots of different images that don't look very good, or high-quality images that all look very similar.

What's the solution?

The researchers developed a model called pi-Flow. Instead of making the student directly copy the teacher's multi-step process, pi-Flow changes the student's output layer so that, in a single network evaluation, it predicts a simple 'policy' – a lightweight set of instructions for how the image should change at each of several upcoming substeps. Evaluating this policy is nearly free, so the image can be refined over many substeps without extra network calls. The student is then trained with imitation distillation: the policy's trajectory is rolled out, and a standard flow matching loss pushes the policy's velocity – the direction and speed of the changes – to match the teacher's along that trajectory.
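The sampling mechanism described above can be sketched in a few lines. In this toy example (the function names and the linear-in-time policy parameterization are illustrative assumptions, not the paper's exact design), the student network is called once and returns the coefficients of a simple velocity policy; later substeps evaluate that policy and take Euler ODE steps without any further network calls:

```python
import numpy as np

def fake_student(x, t):
    """Hypothetical student network: ONE forward pass at timestep t returns
    policy parameters (a, b) defining a velocity field v(s) = a + b * s.
    (Toy stand-in; pi-Flow's actual policy parameterization differs.)"""
    a = -x           # toy choice: drift toward the origin
    b = 0.1 * x      # toy time-dependent correction
    return a, b

def sample_with_policy(x0, t_start=1.0, t_end=0.0, n_substeps=4):
    """Integrate the policy's ODE with Euler substeps using only 1 network call."""
    a, b = fake_student(x0, t_start)           # the single network evaluation
    ts = np.linspace(t_start, t_end, n_substeps + 1)
    x = x0
    for s, s_next in zip(ts[:-1], ts[1:]):
        v = a + b * s                           # network-free policy velocity
        x = x + (s_next - s) * v                # Euler ODE substep
    return x

result = sample_with_policy(np.array([1.0, -2.0]))
```

Note that adding substeps here refines the ODE integration at negligible cost, which is the point of making the policy network-free.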

Why does it matter?

This approach allows faster and more stable training of few-step image generation models, and importantly, it avoids the common problem of having to choose between image quality and diversity. The results show pi-Flow creates images that are both high-quality and varied, surpassing existing few-step methods on challenging benchmarks such as ImageNet and large text-to-image models like FLUX.1 and Qwen-Image.

Abstract

Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models (pi-Flow). pi-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard ℓ2 flow matching loss. By simply mimicking the teacher's behavior, pi-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256², it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, pi-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
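As a rough illustration of the imitation distillation loss (the policy form, the toy teacher, and the uniform substep weighting here are simplified assumptions, not the paper's exact training recipe), one can roll out the policy's own trajectory and penalize the squared difference between the policy's velocity and the teacher's velocity along it:

```python
import numpy as np

def teacher_velocity(x, s):
    """Hypothetical teacher: a pretrained flow model's velocity field.
    (Toy analytic stand-in for illustration only.)"""
    return -x * (1.0 - 0.5 * s)

def imitation_loss(x0, a, b, t_start=1.0, n_substeps=4):
    """Roll out the POLICY's trajectory (v(s) = a + b * s, predicted once by
    the student), and accumulate an ell_2 flow matching loss between the
    policy's velocity and the teacher's along that same trajectory."""
    ts = np.linspace(t_start, 0.0, n_substeps + 1)
    x, loss = x0, 0.0
    for s, s_next in zip(ts[:-1], ts[1:]):
        v_policy = a + b * s                    # network-free policy velocity
        loss += np.mean((v_policy - teacher_velocity(x, s)) ** 2)
        x = x + (s_next - s) * v_policy         # trajectory follows the policy
    return loss / n_substeps
```

Because the loss is a plain ℓ2 velocity match along a rolled-out trajectory, gradients flow to the policy parameters without the adversarial or distributional objectives that complicate other distillation schemes.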