FiTv2: Scalable and Improved Flexible Vision Transformer for Diffusion Model

ZiDong Wang, Zeyu Lu, Di Huang, Cai Zhou, Wanli Ouyang, and Lei Bai

2024-10-21

Summary

This paper presents FiTv2, an advanced version of the Flexible Vision Transformer designed to generate images at any resolution and aspect ratio, improving the flexibility and performance of diffusion models.

What's the problem?

Current diffusion models, like Diffusion Transformers, struggle when processing images that are outside the resolutions they were trained on. This limits their ability to generate high-quality images across different sizes and shapes, making them less useful in real-world applications where images can vary greatly.

What's the solution?

To solve this problem, the authors treat images as sequences of tokens with dynamic sizes rather than as fixed-resolution grids, enabling flexible training that accommodates arbitrary image dimensions and aspect ratios. They introduce FiT, a transformer architecture designed for this purpose, and then upgrade it to FiTv2 with Query-Key vector normalization, an AdaLN-LoRA module, a rectified flow scheduler, and a Logit-Normal sampler. Together with an adjusted network structure, these changes let FiTv2 converge twice as fast as FiT while adapting well to different image resolutions and maintaining high image quality.
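The dynamic-token idea above can be sketched as a resolution-agnostic patchify step: an image of any size becomes a token sequence whose length simply grows with the image, instead of being cropped or resized to a fixed grid. This is a hypothetical illustration, not the paper's implementation (the function name, the raw-pixel tokens, and the requirement that each side be a multiple of the patch size are assumptions; FiT works on latent tokens with learned embeddings and 2D positional encodings):

```python
import numpy as np

def patchify(image, patch_size=16):
    """Turn an (H, W, C) image into a variable-length token sequence.

    Hypothetical sketch: each non-overlapping patch_size x patch_size
    patch is flattened into one token, so a 256x512 image just yields
    twice as many tokens as a 256x256 one -- no fixed grid required.
    Assumes H and W are multiples of patch_size.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    gh, gw = h // patch_size, w // patch_size
    # Split into a (gh, gw) grid of patches, then flatten each patch.
    tokens = (image.reshape(gh, patch_size, gw, patch_size, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(gh * gw, patch_size * patch_size * c))
    return tokens
```

For example, a 32x64 RGB image yields 8 tokens of dimension 768, while a 32x32 image yields 4 tokens of the same dimension, so one model can attend over both.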

Why it matters?

This research is significant because it enhances the capabilities of AI in generating images, making it easier to create visuals for different contexts without losing quality. This could lead to advancements in fields like graphic design, video game development, and virtual reality, where high-quality images are essential.

Abstract

Nature is infinitely resolution-free. In the context of this reality, existing diffusion models, such as Diffusion Transformers, often face challenges when processing image resolutions outside of their trained domain. To address this limitation, we conceptualize images as sequences of tokens with dynamic sizes, rather than traditional methods that perceive images as fixed-resolution grids. This perspective enables a flexible training strategy that seamlessly accommodates various aspect ratios during both training and inference, thus promoting resolution generalization and eliminating biases introduced by image cropping. On this basis, we present the Flexible Vision Transformer (FiT), a transformer architecture specifically designed for generating images with unrestricted resolutions and aspect ratios. We further upgrade the FiT to FiTv2 with several innovative designs, including the Query-Key vector normalization, the AdaLN-LoRA module, a rectified flow scheduler, and a Logit-Normal sampler. Enhanced by a meticulously adjusted network structure, FiTv2 exhibits 2x the convergence speed of FiT. When incorporating advanced training-free extrapolation techniques, FiTv2 demonstrates remarkable adaptability in both resolution extrapolation and diverse resolution generation. Additionally, our exploration of the scalability of the FiTv2 model reveals that larger models exhibit better computational efficiency. Furthermore, we introduce an efficient post-training strategy to adapt a pre-trained model for high-resolution generation. Comprehensive experiments demonstrate the exceptional performance of FiTv2 across a broad range of resolutions. We have released all code and models at https://github.com/whlzy/FiT to promote the exploration of diffusion transformer models for arbitrary-resolution image generation.
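Of the components the abstract lists, the Logit-Normal sampler is simple enough to sketch: a training timestep t in (0, 1) is drawn by passing a Gaussian sample through a sigmoid, which concentrates training on mid-trajectory timesteps. This is a generic illustration of a logit-normal distribution, not the paper's code, and the default mu/sigma values are illustrative assumptions:

```python
import math
import random

def logit_normal_timestep(mu=0.0, sigma=1.0, rng=random):
    """Sample t in (0, 1) from a Logit-Normal distribution:
    t = sigmoid(x) with x ~ N(mu, sigma^2).

    Hypothetical sketch of the Logit-Normal sampler named in the
    abstract. With mu=0, samples cluster around t=0.5, weighting
    mid-trajectory timesteps more heavily than a uniform sampler.
    """
    x = rng.gauss(mu, sigma)          # Gaussian in logit space
    return 1.0 / (1.0 + math.exp(-x))  # squash into (0, 1)
```

Shifting mu toward positive or negative values biases sampling toward late or early timesteps, respectively, which is one knob such schedules expose.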