Neighboring Autoregressive Modeling for Efficient Visual Generation

Yefei He, Yuanyu He, Shaoxuan He, Feng Chen, Hong Zhou, Kaipeng Zhang, Bohan Zhuang

2025-03-17

Summary

This paper introduces Neighboring Autoregressive Modeling (NAR), a faster and more efficient way for AI models to generate images and videos.

What's the problem?

Existing autoregressive models generate images and videos by predicting one token (a small patch of the image or video) at a time in a fixed raster order, left to right and top to bottom. This is slow, and it ignores the fact that nearby tokens are far more strongly correlated than distant ones.

What's the solution?

NAR instead generates content outward from an initial token, decoding the remaining tokens in order of their distance from that starting point. All tokens adjacent to the already-decoded region can be predicted together in a single step, which allows parallel decoding and significantly speeds up generation.
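The near-to-far decoding order can be sketched in a few lines of Python. This is an illustrative toy (the grid size and starting corner are assumptions, not taken from the paper): tokens are grouped by Manhattan distance from the initial token, and each group corresponds to one parallel forward step.

```python
# Hypothetical sketch of NAR's near-to-far decoding order on a 2D token grid.
# Tokens at the same Manhattan distance from the initial token form one group,
# and every token in a group is decoded in parallel in a single forward step.

def decoding_groups(height, width, start=(0, 0)):
    """Group grid positions by Manhattan distance from the start token."""
    groups = {}
    for r in range(height):
        for c in range(width):
            d = abs(r - start[0]) + abs(c - start[1])
            groups.setdefault(d, []).append((r, c))
    return [groups[d] for d in sorted(groups)]

groups = decoding_groups(4, 4)
print(len(groups))  # 7 forward steps (= 4 + 4 - 1), vs 16 for raster order
print(groups[1])    # decoded in parallel at step 2: [(0, 1), (1, 0)]
```

Each diagonal "shell" of the grid is one decoding step, so a 4×4 grid needs only 7 steps instead of 16, and the saving grows with resolution.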

Why does it matter?

This work matters because it makes AI image and video generation faster and more efficient, potentially leading to new applications and improved performance.

Abstract

Visual autoregressive models typically adhere to a raster-order "next-token prediction" paradigm, which overlooks the spatial and temporal locality inherent in visual content. Specifically, visual tokens exhibit significantly stronger correlations with their spatially or temporally adjacent tokens compared to those that are distant. In this paper, we propose Neighboring Autoregressive Modeling (NAR), a novel paradigm that formulates autoregressive visual generation as a progressive outpainting procedure, following a near-to-far "next-neighbor prediction" mechanism. Starting from an initial token, the remaining tokens are decoded in ascending order of their Manhattan distance from the initial token in the spatial-temporal space, progressively expanding the boundary of the decoded region. To enable parallel prediction of multiple adjacent tokens in the spatial-temporal space, we introduce a set of dimension-oriented decoding heads, each predicting the next token along a mutually orthogonal dimension. During inference, all tokens adjacent to the decoded tokens are processed in parallel, substantially reducing the model forward steps for generation. Experiments on ImageNet 256×256 and UCF101 demonstrate that NAR achieves 2.4× and 8.6× higher throughput respectively, while obtaining superior FID/FVD scores for both image and video generation tasks compared to the PAR-4X approach. When evaluating on the text-to-image generation benchmark GenEval, NAR with 0.8B parameters outperforms Chameleon-7B while using merely 0.4 of the training data. Code is available at https://github.com/ThisisBillhe/NAR.
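The reduction in forward steps described in the abstract follows directly from the Manhattan-distance ordering: decoding from a corner token, the number of steps equals the number of distance shells rather than the number of tokens. The sketch below makes that concrete; the 16×16 and 8×16×16 token-grid sizes are illustrative assumptions, not figures from the paper.

```python
# Compare forward steps: next-neighbor (NAR-style) vs next-token (raster) decoding.

def nar_steps(*dims):
    # One forward step per Manhattan-distance shell from a corner token:
    # the largest distance is sum(d - 1), plus one step for the initial token.
    return sum(d - 1 for d in dims) + 1

def raster_steps(*dims):
    # One forward step per token under next-token prediction.
    total = 1
    for d in dims:
        total *= d
    return total

print(nar_steps(16, 16), raster_steps(16, 16))        # 31 vs 256 (image grid)
print(nar_steps(8, 16, 16), raster_steps(8, 16, 16))  # 38 vs 2048 (video volume)
```

For video, the same ordering extends to three dimensions (time, height, width), which is why the speedup is even larger there: steps grow additively with the grid dimensions instead of multiplicatively.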