ResTok: Learning Hierarchical Residuals in 1D Visual Tokenizers for Autoregressive Image Generation
Xu Zhang, Cheng Da, Huan Yang, Kun Gai, Ming Lu, Zhan Ma
2026-01-08
Summary
This paper introduces a new way to break images down into a format that makes it easier for computers to generate new, realistic images. It focuses on improving how images are 'tokenized' – that is, how they are converted into a sequence of building blocks an AI can understand and recreate.
What's the problem?
Current methods for turning images into these building blocks largely copy how language is processed in AI. While this works reasonably well, images aren't like sentences. Images have a natural hierarchy – details build into shapes, shapes build into objects – and important information is often 'residual,' meaning it's the difference between levels of detail. Existing methods miss these key visual characteristics, making image generation less efficient and sometimes lower in quality.
What's the solution?
The researchers developed a new 'tokenizer' called ResTok. This tokenizer builds a hierarchy of image tokens, mirroring how images are naturally structured. It also uses 'residual' connections, which preserve important details across the different levels of the hierarchy. To speed up the image creation process, they also created a generator that predicts an entire level of image tokens at once, instead of one token at a time.
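To make the 'residual across levels' idea concrete, here is a toy sketch of residual quantization in NumPy. This is not the authors' implementation (ResTok operates on learned 1D latent tokens inside a transformer); it only illustrates the general principle that each level encodes whatever the coarser levels missed, so levels carry non-overlapping information. All names, sizes, and codebooks below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_code(x, codebook):
    """Return, for each row of x, the closest codebook entry."""
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[d.argmin(axis=1)]

def residual_quantize(feat, codebooks):
    """Quantize `feat` level by level: each level encodes only the
    residual left over after all coarser levels."""
    residual = feat.copy()
    recon = np.zeros_like(feat)
    per_level = []
    for cb in codebooks:
        q = nearest_code(residual, cb)  # code for what earlier levels missed
        per_level.append(q)
        recon += q                      # coarse-to-fine accumulated reconstruction
        residual -= q                   # pass the remainder to the next level
    return recon, per_level

feat = rng.normal(size=(16, 8))         # toy "latent tokens"
# One codebook per level; including the zero vector guarantees the
# reconstruction error never grows as levels are added.
codebooks = [np.vstack([np.zeros((1, 8)), rng.normal(size=(32, 8))])
             for _ in range(3)]

recon, levels = residual_quantize(feat, codebooks)
err_one_level = np.linalg.norm(feat - levels[0])
err_all_levels = np.linalg.norm(feat - recon)
```

Because each level only has to model the residual of the previous ones, deeper levels see increasingly concentrated distributions, which the abstract argues makes the latents easier for autoregressive modeling.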
Why it matters?
This work is important because it brings ideas from successful image processing techniques – like hierarchical structures and residual connections – into the world of AI image generation. By doing so, the authors significantly improved the quality of generated images, achieving a state-of-the-art result with fewer steps needed to create each image, making the process faster and more efficient.
Abstract
Existing 1D visual tokenizers for autoregressive (AR) generation largely follow the design principles of language modeling, as they are built directly upon transformers whose priors originate in language, yielding single-hierarchy latent tokens and treating visual data as flat sequential token streams. However, this language-like formulation overlooks key properties of vision, particularly the hierarchical and residual network designs that have long been essential for convergence and efficiency in visual models. To bring "vision" back to vision, we propose the Residual Tokenizer (ResTok), a 1D visual tokenizer that builds hierarchical residuals for both image tokens and latent tokens. The hierarchical representations obtained through progressive merging enable cross-level feature fusion at each layer, substantially enhancing representational capacity. Meanwhile, the semantic residuals between hierarchies prevent information overlap, yielding more concentrated latent distributions that are easier for AR modeling. Cross-level bindings consequently emerge without any explicit constraints. To accelerate the generation process, we further introduce a hierarchical AR generator that substantially reduces sampling steps by predicting an entire level of latent tokens at once rather than generating them strictly token-by-token. Extensive experiments demonstrate that restoring hierarchical residual priors in visual tokenization significantly improves AR image generation, achieving a gFID of 2.34 on ImageNet-256 with only 9 sampling steps. Code is available at https://github.com/Kwai-Kolors/ResTok.
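The sampling-step saving from level-at-once prediction comes down to simple counting: a strictly token-by-token decoder needs one forward pass per token, while a hierarchical generator needs one pass per level. The level sizes below are hypothetical (the paper reports 9 sampling steps, not these numbers); the sketch only shows the arithmetic.

```python
# Hypothetical tokens per hierarchy level, coarse to fine.
level_sizes = [1, 4, 16, 64, 176]

# Token-by-token AR decoding: one forward pass per token.
token_by_token_steps = sum(level_sizes)

# Level-at-once decoding: one forward pass per hierarchy level.
level_at_once_steps = len(level_sizes)

print(token_by_token_steps, level_at_once_steps)  # 261 vs 5
```

The same token budget is consumed either way; only the number of sequential model invocations shrinks, which is what makes generation faster.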