TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation
Ariel Shaulov, Eitan Shaar, Amit Edenzon, Lior Wolf
2026-02-11
Summary
This paper addresses a core problem in AI-generated long video: as generation proceeds, the videos tend to fall apart and become visually and temporally inconsistent.
What's the problem?
When AI generates videos frame by frame, building on what it's already created, small errors can creep in with each new frame. These errors build up over time, causing the video to 'drift' away from a realistic or coherent path. Previous attempts to fix this focused on making the AI model bigger or changing how it learns, but this paper argues the issue isn't a lack of power, but rather how the AI uses information *while* it's generating the video.
What's the solution?
The researchers found that the AI relies on internal 'tokens' representing parts of the video to guide future frame creation. If these tokens become corrupted or inaccurate, they cause further errors. Their solution is a simple check during video generation: identify and remove these unstable tokens before they're used again. This prevents bad information from influencing the rest of the video, improving consistency without needing to retrain the AI or change its basic structure.
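The pruning step described above can be sketched as a simple similarity check between consecutive batches of latent tokens. Note this is an illustrative sketch, not the paper's exact formulation: the cosine-similarity criterion, the fixed threshold, and the function name `prune_unstable_tokens` are all assumptions made for clarity.

```python
import numpy as np

def prune_unstable_tokens(prev_tokens, curr_tokens, threshold=0.5):
    """Drop latent tokens whose representations deviate too far from the
    matching token in the previously generated batch.

    prev_tokens, curr_tokens: arrays of shape (num_tokens, dim).
    threshold: minimum cosine similarity to count as 'stable' (assumed value).
    Returns the surviving tokens and a boolean keep-mask.
    """
    # Normalize each token along the feature dimension.
    prev_n = prev_tokens / np.linalg.norm(prev_tokens, axis=-1, keepdims=True)
    curr_n = curr_tokens / np.linalg.norm(curr_tokens, axis=-1, keepdims=True)
    # Per-token cosine similarity between consecutive batches.
    sim = np.sum(prev_n * curr_n, axis=-1)   # shape: (num_tokens,)
    # Tokens whose similarity drops below the threshold are treated as
    # corrupted and excluded from the next conditioning context.
    stable = sim >= threshold
    return curr_tokens[stable], stable
```

In an autoregressive loop, the returned subset would replace the full token set as the conditioning context for the next batch, so that drifted tokens never influence later frames.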
Why it matters?
This is important because it offers a practical way to create much longer and more stable videos with existing AI technology. Instead of needing more powerful and expensive AI models, this method improves the quality of videos generated by current systems, opening up possibilities for longer-form content creation and more realistic simulations.
Abstract
Auto-regressive video generation enables long video synthesis by iteratively conditioning each new batch of frames on previously generated content. However, recent work has shown that such pipelines suffer from severe temporal drift, where errors accumulate and amplify over long horizons. We hypothesize that this drift does not primarily stem from insufficient model capacity, but rather from inference-time error propagation. Specifically, we contend that drift arises from the uncontrolled reuse of corrupted latent conditioning tokens during auto-regressive inference. To correct this accumulation of errors, we propose a simple, inference-time method that mitigates temporal drift by identifying and removing unstable latent tokens before they are reused for conditioning. For this purpose, we define unstable tokens as latent tokens whose representations deviate significantly from those of the previously generated batch, indicating potential corruption or semantic drift. By explicitly removing corrupted latent tokens from the auto-regressive context, rather than modifying entire spatial regions or model parameters, our method prevents unreliable latent information from influencing future generation steps. As a result, it significantly improves long-horizon temporal consistency without modifying the model architecture or training procedure, and without leaving the latent space.