
Next Block Prediction: Video Generation via Semi-Autoregressive Modeling

Shuhuai Ren, Shuming Ma, Xu Sun, Furu Wei

2025-02-13


Summary

This paper introduces Next Block Prediction (NBP), a new way to make AI generate videos faster and with higher quality. It's like teaching a computer to predict what comes next in a video by looking at bigger chunks at a time instead of tiny pieces.

What's the problem?

The standard way of making AI generate videos, called Next-Token Prediction (NTP), produces one tiny piece (token) at a time. This makes generation slow, and because each token can only look at what came before it, the model struggles to capture how different parts of a video relate to each other. It's like trying to guess a whole story by only looking at one word at a time.

What's the solution?

The researchers created NBP, which generates bigger parts of the video (such as whole rows or frames) at once. Within each of these blocks, every token can look at every other token, so the AI better understands how nearby parts of the video relate to each other; and because a whole block of tokens is predicted in parallel, generation takes far fewer steps. Tested on standard video datasets, NBP produced videos much faster and with better quality than the old token-by-token method.
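To make the "bigger chunks at once" idea concrete, here is a minimal sketch of block-wise (semi-autoregressive) decoding. The names (`toy_model`, `generate_semi_ar`) and the toy prediction rule are illustrative assumptions, not the paper's actual model; the point is only that one forward pass emits a whole block of tokens instead of a single token.

```python
# Hedged sketch of semi-autoregressive (block-wise) decoding.
# `toy_model` is a stand-in for the real transformer: each token in the
# current block predicts the corresponding token in the next block.

BLOCK_SIZE = 4  # e.g. one row of tokens in a video frame

def toy_model(block):
    # Illustrative rule only: shift every token value by one.
    # In the real model this would be a learned per-token prediction.
    return [(t + 1) % 10 for t in block]

def generate_semi_ar(first_block, num_blocks):
    """Generate `num_blocks` blocks, one forward pass per block."""
    blocks = [list(first_block)]
    for _ in range(num_blocks - 1):
        # One step emits BLOCK_SIZE tokens in parallel, instead of
        # BLOCK_SIZE sequential next-token steps as in vanilla NTP.
        blocks.append(toy_model(blocks[-1]))
    return blocks

video = generate_semi_ar([0, 1, 2, 3], num_blocks=3)
```

With 3 blocks of 4 tokens each, this loop runs the model 2 times; a token-by-token NTP decoder would need 8 sequential steps for the same output, which is where the paper's inference speedup comes from.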

Why it matters?

This matters because it could make AI-generated videos look more realistic and be created much faster. This could be really useful for things like making special effects in movies, creating virtual reality experiences, or even helping to predict and visualize weather patterns. It's a big step forward in making AI better at understanding and creating visual content.

Abstract

Next-Token Prediction (NTP) is the de facto approach for autoregressive (AR) video generation, but it suffers from suboptimal unidirectional dependencies and slow inference speed. In this work, we propose a semi-autoregressive (semi-AR) framework, called Next-Block Prediction (NBP), for video generation. By uniformly decomposing video content into equal-sized blocks (e.g., rows or frames), we shift the generation unit from individual tokens to blocks, allowing each token in the current block to simultaneously predict the corresponding token in the next block. Unlike traditional AR modeling, our framework employs bidirectional attention within each block, enabling tokens to capture more robust spatial dependencies. By predicting multiple tokens in parallel, NBP models significantly reduce the number of generation steps, leading to faster and more efficient inference. Our model achieves FVD scores of 103.3 on UCF101 and 25.5 on K600, outperforming the vanilla NTP model by an average of 4.4. Furthermore, thanks to the reduced number of inference steps, the NBP model generates 8.89 frames (128x128 resolution) per second, achieving an 11x speedup. We also explored model scales ranging from 700M to 3B parameters, observing significant improvements in generation quality, with FVD scores dropping from 103.3 to 55.3 on UCF101 and from 25.5 to 19.5 on K600, demonstrating the scalability of our approach.
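The abstract's key architectural change, bidirectional attention within each block but causal ordering across blocks, can be expressed as an attention mask. The sketch below is an assumed illustration of that mask pattern (the function name `block_causal_mask` is mine, not from the paper): a token may attend to any token in its own or an earlier block, but not to later blocks.

```python
import numpy as np

def block_causal_mask(num_blocks, block_size):
    """Attention mask for semi-AR decoding: bidirectional within each
    block, causal across blocks. True = attention allowed."""
    n = num_blocks * block_size
    block_id = np.arange(n) // block_size  # which block each token is in
    # Token i may attend to token j iff j's block is not after i's block.
    return block_id[:, None] >= block_id[None, :]

mask = block_causal_mask(num_blocks=2, block_size=2)
```

For 2 blocks of 2 tokens, tokens 0 and 1 attend to each other freely (bidirectional within block 0), while tokens 2 and 3 additionally see block 0; contrast this with a standard causal mask, where token 0 could never attend to token 1.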