Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective

Yongxin Zhu, Bocheng Li, Hang Zhang, Xin Li, Linli Xu, Lidong Bing

2024-10-17

Summary

This paper discusses a new approach to improve how image generation models work by stabilizing the latent space, which is the compressed representation of images that these models use to create new visuals.

What's the problem?

Latent-based image generative models, like Latent Diffusion Models (LDMs) and Mask Image Models (MIMs), have been very successful at generating images. However, autoregressive models, which generate an image one token at a time, lag far behind them even when trained in the same latent space. This gap raises the question of whether the standard latent space is really the best choice for image generation, especially since autoregressive models like GPT dominate natural language processing.
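To make "latent space" and discrete tokens concrete, here is a minimal sketch (not the paper's code) of the vector-quantization step a VQGAN-style tokenizer performs: each continuous latent vector from an encoder is replaced by the id of its nearest codebook entry. The codebook and latents below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # 16 learnable codes, 4-dim latents
latents = rng.normal(size=(8, 4))     # 8 latent vectors from an encoder

# Nearest-neighbour assignment: token id = index of the closest code
dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
token_ids = dists.argmin(axis=1)      # one discrete token per latent
quantized = codebook[token_ids]       # the decoder sees the chosen codes

print(token_ids.shape, quantized.shape)  # (8,) (8, 4)
```

The discrete `token_ids` are what an autoregressive model is trained to predict, while the `quantized` vectors are decoded back into pixels.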

What's the solution?

To address this issue, the authors propose a unified perspective on the relationship between latent spaces and generative models, emphasizing that a stable latent space is key to successful image generative modeling. They introduce a simple but effective discrete image tokenizer that stabilizes the latent space, making it easier for a GPT-style model to predict the next part of an image from what it has already generated. Their experiments show that autoregressive modeling with this tokenizer, called DiGIT, outperforms traditional methods like LDMs, with performance improving further as model size increases.
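The next-token-prediction principle the paper builds on can be sketched in a few lines: a tokenized image becomes a 1-D sequence of token ids, and a model predicts each token from the prefix before it. Below, a toy bigram count model stands in for a GPT-style transformer, and the token sequences are hypothetical, not real tokenizer output.

```python
from collections import Counter, defaultdict

# Hypothetical token-id sequences, as if produced by an image tokenizer
sequences = [
    [1, 2, 3, 2, 1, 2, 3],
    [1, 2, 1, 2, 3, 3, 1],
]

# Count how often each token follows each other token
counts = defaultdict(Counter)
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

def predict_next(prev_token):
    """Greedy next-token prediction from bigram statistics."""
    return counts[prev_token].most_common(1)[0][0]

# Generate a short token sequence autoregressively from a start token
generated = [1]
for _ in range(5):
    generated.append(predict_next(generated[-1]))
print(generated)
```

A real system would decode the generated token ids back into pixels with the tokenizer's decoder; the point here is only that generation reduces to repeated next-token prediction, the same recipe GPT uses for text.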

Why it matters?

This research is important because it enhances the capabilities of image generation models, making them more effective and efficient. By optimizing how these models handle latent spaces and incorporating new tokenization methods, this work can lead to better quality images and more advanced applications in fields like digital art, video games, and virtual reality.

Abstract

Latent-based image generative models, such as Latent Diffusion Models (LDMs) and Mask Image Models (MIMs), have achieved notable success in image generation tasks. These models typically leverage reconstructive autoencoders like VQGAN or VAE to encode pixels into a more compact latent space and learn the data distribution in the latent space instead of directly from pixels. However, this practice raises a pertinent question: Is it truly the optimal choice? In response, we begin with an intriguing observation: despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation. This finding contrasts sharply with the field of NLP, where the autoregressive model GPT has established a commanding presence. To address this discrepancy, we introduce a unified perspective on the relationship between latent space and generative models, emphasizing the stability of latent space in image generative modeling. Furthermore, we propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling. Experimental results show that image autoregressive modeling with our tokenizer (DiGIT) benefits both image understanding and image generation with the next token prediction principle, which is inherently straightforward for GPT models but challenging for other generative models. Remarkably, for the first time, a GPT-style autoregressive model for images outperforms LDMs, which also exhibits substantial improvement akin to GPT when scaling up model size. Our findings underscore the potential of an optimized latent space and the integration of discrete tokenization in advancing the capabilities of image generative models. The code is available at https://github.com/DAMO-NLP-SG/DiGIT.