
Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction

Ziyao Guo, Kaipeng Zhang, Michael Qizhe Shieh

2025-03-21


Summary

This paper is about improving the way AI creates images by making it easier to predict the small details that make up the picture.

What's the problem?

AI image generators turn pictures into long sequences of discrete tokens. Capturing fine detail requires a large token vocabulary, but a larger vocabulary makes each token much harder for the model to predict accurately.

What's the solution?

The researchers developed a method where the AI first predicts the general outline of the image and then fills in the details. It's like sketching a picture before adding the colors.

Why does it matter?

This work matters because it lets AI generate more realistic and detailed images while actually sampling faster than before.

Abstract

Autoregressive models have shown remarkable success in image generation by adapting sequential prediction techniques from language modeling. However, applying these approaches to images requires discretizing continuous pixel data through vector quantization methods like VQ-VAE. To alleviate the quantization errors in VQ-VAE, recent works tend to use larger codebooks. However, this accordingly expands the vocabulary size, complicating the autoregressive modeling task. This paper aims to find a way to enjoy the benefits of large codebooks without making autoregressive modeling more difficult. Through empirical investigation, we discover that tokens with similar codeword representations produce similar effects on the final generated image, revealing significant redundancy in large codebooks. Based on this insight, we propose to predict tokens from coarse to fine (CTF), realized by assigning the same coarse label to similar tokens. Our framework consists of two stages: (1) an autoregressive model that sequentially predicts coarse labels for each token in the sequence, and (2) an auxiliary model that simultaneously predicts fine-grained labels for all tokens conditioned on their coarse labels. Experiments on ImageNet demonstrate our method's superior performance, achieving an average improvement of 59 points in Inception Score compared to baselines. Notably, despite adding an inference step, our approach achieves faster sampling speeds.
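The coarse-label assignment described above (grouping codewords with similar embeddings under one shared label) can be sketched as clustering the codebook, for example with k-means. This is an illustrative sketch only: the function name, cluster count, and the choice of plain k-means are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def coarse_labels(codebook, n_coarse=64, n_iters=10, seed=0):
    """Assign a coarse label to each codeword by k-means clustering,
    so that codewords with similar embeddings share one label
    (hypothetical sketch of the coarse-to-fine label assignment)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen codewords.
    centroids = codebook[rng.choice(len(codebook), n_coarse, replace=False)]
    for _ in range(n_iters):
        # Assign each codeword to its nearest centroid (its coarse label).
        dists = np.linalg.norm(codebook[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned codewords.
        for k in range(n_coarse):
            members = codebook[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return labels

# Toy codebook: 1024 codewords with 8-dimensional embeddings.
codebook = np.random.default_rng(1).normal(size=(1024, 8))
labels = coarse_labels(codebook, n_coarse=64)
```

At inference, the autoregressive model would then only need to choose among the 64 coarse labels per step instead of the full 1024-entry vocabulary, with a second model recovering the fine token index within each cluster.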