Parallelized Autoregressive Visual Generation
Yuqing Wang, Shuhuai Ren, Zhijie Lin, Yujin Han, Haoyuan Guo, Zhenheng Yang, Difan Zou, Jiashi Feng, Xihui Liu
2024-12-23

Summary
This paper introduces Parallelized Autoregressive Visual Generation (PAR), a method that makes AI image and video generation faster and more efficient by allowing parts of the token sequence to be generated simultaneously instead of strictly one after another.
What's the problem?
Traditional autoregressive models generate an image by predicting its tokens one at a time, with each token conditioned on all the tokens before it. A single image or video clip can contain hundreds or thousands of tokens, so this strictly sequential process makes generation slow and poorly suited to real-time applications.
What's the solution?
The authors observe that whether tokens can be generated in parallel depends on how strongly they depend on one another: adjacent tokens are strongly correlated, so sampling them independently in the same step risks inconsistent results, while spatially distant tokens are only weakly dependent. Their strategy therefore generates distant, weakly dependent tokens in parallel while keeping strongly dependent local tokens in sequential order, which substantially cuts the number of decoding steps (a minimal sketch of this ordering follows below). The method can be integrated into existing autoregressive models without major changes.
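To make the ordering concrete, here is a minimal sketch (ours, not the authors' released code) of one way such a schedule can be arranged: the token grid is split into a few spatially distant regions, one token per region is emitted at each step, and a handful of early tokens are still generated one by one so the global layout is fixed before parallel decoding begins. The region count, the number of initial sequential tokens, and the helper name parallel_order are illustrative assumptions, not details taken from the paper.

```python
def parallel_order(h, w, regions_per_side=2, n_initial=1):
    """Yield groups of (row, col) token positions; each yielded group is
    one decoding step, and tokens within a group come from different,
    spatially distant regions."""
    rh, rw = h // regions_per_side, w // regions_per_side
    # Offsets inside a single region, in raster order.
    local = [(r, c) for r in range(rh) for c in range(rw)]
    # Top-left corners of all regions.
    corners = [(i * rh, j * rw)
               for i in range(regions_per_side)
               for j in range(regions_per_side)]
    for step, (dr, dc) in enumerate(local):
        positions = [(r0 + dr, c0 + dc) for r0, c0 in corners]
        if step < n_initial:
            # Early tokens carry strong global dependencies:
            # generate them one by one.
            for pos in positions:
                yield [pos]
        else:
            # Distant tokens are weakly dependent: one step, many tokens.
            yield positions

# Example: a 4x4 token grid decoded with 2x2 regions.
# 16 tokens are produced in 7 decoding steps instead of 16.
for group in parallel_order(4, 4):
    print(group)
```

In a real model, each yielded group would correspond to one forward pass: the model predicts a distribution for every position in the group and samples them independently, conditioned on all previously generated tokens.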
Why it matters?
This research matters because it makes visual content generation much faster, which is crucial for applications like video games, film production, and virtual reality. Because the speedup comes with little or no loss in quality, it broadens where autoregressive visual models can be used in practice.
Abstract
Autoregressive models have emerged as a powerful approach for visual generation but suffer from slow inference speed due to their sequential token-by-token prediction process. In this paper, we propose a simple yet effective approach for parallelized autoregressive visual generation that improves generation efficiency while preserving the advantages of autoregressive modeling. Our key insight is that parallel generation depends on visual token dependencies: tokens with weak dependencies can be generated in parallel, while strongly dependent adjacent tokens are difficult to generate together, as their independent sampling may lead to inconsistencies. Based on this observation, we develop a parallel generation strategy that generates distant tokens with weak dependencies in parallel while maintaining sequential generation for strongly dependent local tokens. Our approach can be seamlessly integrated into standard autoregressive models without modifying the architecture or tokenizer. Experiments on ImageNet and UCF-101 demonstrate that our method achieves a 3.6x speedup with comparable quality and up to 9.5x speedup with minimal quality degradation across both image and video generation tasks. We hope this work will inspire future research in efficient visual generation and unified autoregressive modeling. Project page: https://epiphqny.github.io/PAR-project.
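A back-of-the-envelope view of where such speedups come from (our illustrative arithmetic, not figures from the paper): if an image has n tokens, the first s are decoded one at a time, and the remaining n - s are emitted k per step, the decoder runs for roughly s + (n - s)/k steps instead of n. For example, with n = 576, s = 16, and k = 4, that is 16 + 140 = 156 steps, about a 3.7x reduction in sequential steps. Raising k buys more speed but means sampling more tokens independently per step, which is where the quality trade-off enters.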