RecTok: Reconstruction Distillation along Rectified Flow

Qingyu Shi, Size Wu, Jinbin Bai, Kaidong Yu, Yujing Wang, Yunhai Tong, Xiangtai Li, Xuelong Li

2025-12-16

Summary

This paper introduces a new method called RecTok for improving the visual tokenizers used by diffusion models, a type of AI used for creating realistic images, to represent and generate images.

What's the problem?

Diffusion models use something called a 'latent space' to represent images in a compressed form. The size of this space is a tricky balance: making it too small loses important details, but making it too large actually makes image generation *worse*. While researchers have tried using powerful existing vision foundation models to help, high-dimensional tokenizers built on them still underperform simpler, lower-dimensional ones. Essentially, high-dimensional representations aren't effectively capturing the information needed for good image quality.

What's the solution?

RecTok tackles this problem by focusing on improving the 'flow' of information *during* the image creation process, rather than just the final compressed representation. It does this in two main ways: first, it transfers the knowledge from those powerful existing image models into how the image is gradually built up, and second, it forces the model to reconstruct missing parts of the image, which helps it learn more meaningful features. This makes the process of building the image itself more informative and leads to better results.
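To make the two ideas concrete, here is a minimal, hypothetical sketch of what "distilling knowledge into the flow" could look like. The paper does not publish this exact code; the function names, the identity `project` head, and the cosine objective are illustrative assumptions. In rectified flow, the forward flow is a straight-line interpolation between a data latent and noise, and the distillation loss aligns the noisy point on that line with a frozen vision-foundation-model (VFM) feature:

```python
import numpy as np

def forward_flow(x0, x1, t):
    """Rectified-flow forward interpolation: a straight line from the
    tokenizer latent x0 to the noise endpoint x1 at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def flow_semantic_distillation_loss(z_t, vfm_feat, project):
    """Hypothetical distillation objective: project the noisy latent z_t
    and align it with a frozen VFM feature via cosine similarity.
    Returns a loss in [0, 2] (0 = perfectly aligned)."""
    pred = project(z_t)
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    target = vfm_feat / np.linalg.norm(vfm_feat, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(pred * target, axis=-1)))

# Toy example with random stand-ins for real features.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 16))     # latents from the tokenizer encoder
x1 = rng.normal(size=(4, 16))     # Gaussian noise endpoint
z_t = forward_flow(x0, x1, t=0.3) # a point along the forward flow
vfm = rng.normal(size=(4, 16))    # stand-in for frozen VFM features
loss = flow_semantic_distillation_loss(z_t, vfm, project=lambda z: z)
print(loss)
```

The key design choice this illustrates is *where* the loss is applied: not on the clean latent `x0` alone, but on intermediate points `z_t` of the flow, which is the space the diffusion transformer is actually trained in.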

Why it matters?

RecTok is important because it allows diffusion models to use larger, more detailed latent spaces without sacrificing image quality. This leads to more realistic and higher-quality images, and the method consistently improves performance as the size of the latent space increases. It sets a new standard for image generation, achieving the best results on common benchmarks and offering a way to create even more impressive AI-generated visuals.

Abstract

Visual tokenizers play a crucial role in diffusion models. The dimensionality of the latent space governs both reconstruction fidelity and the semantic expressiveness of the latent feature. However, a fundamental trade-off is inherent between dimensionality and generation quality, constraining existing methods to low-dimensional latent spaces. Although recent works have leveraged vision foundation models to enrich the semantics of visual tokenizers and accelerate convergence, high-dimensional tokenizers still underperform their low-dimensional counterparts. In this work, we propose RecTok, which overcomes the limitations of high-dimensional visual tokenizers through two key innovations: flow semantic distillation and reconstruction-alignment distillation. Our key insight is to make the forward flow in flow matching semantically rich, since it serves as the training space of diffusion transformers, rather than focusing on the latent space as in previous works. Specifically, our method distills the semantic information in VFMs into the forward flow trajectories in flow matching, and we further enhance the semantics by introducing a masked feature reconstruction loss. RecTok achieves superior image reconstruction, generation quality, and discriminative performance. It achieves state-of-the-art gFID-50K results both with and without classifier-free guidance, while maintaining a semantically rich latent space structure. Furthermore, performance improves consistently as the latent dimensionality increases. Code and model are available at https://shi-qingyu.github.io/rectok.github.io.
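The masked feature reconstruction loss mentioned in the abstract can be sketched as follows. This is a generic masked-reconstruction objective of the kind popularized by masked autoencoders, not the paper's published code: the masking scheme, the identity `reconstruct` stand-in, and the plain MSE are assumptions for illustration.

```python
import numpy as np

def masked_feature_reconstruction_loss(features, reconstruct, mask_ratio=0.5, rng=None):
    """Hypothetical masked-reconstruction objective: zero out a random subset
    of token features and penalize failure to recover the masked ones."""
    rng = rng or np.random.default_rng()
    n_tokens = features.shape[0]
    mask = rng.random(n_tokens) < mask_ratio  # True = masked token
    if not mask.any():
        mask[0] = True                        # guarantee at least one masked token
    corrupted = features.copy()
    corrupted[mask] = 0.0
    recon = reconstruct(corrupted)
    # MSE computed only on the masked positions.
    return float(np.mean((recon[mask] - features[mask]) ** 2))

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 4))  # stand-in token features
# An identity "model" cannot recover zeroed tokens, so the loss is positive.
loss = masked_feature_reconstruction_loss(feats, reconstruct=lambda z: z, rng=rng)
print(loss)
```

Computing the loss only on masked positions forces the model to infer missing content from context, which is what pushes it toward more semantically meaningful features.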