Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing

Baifeng Shi, Stephanie Fu, Long Lian, Hanrong Ye, David Eigen, Aaron Reite, Boyi Li, Jan Kautz, Song Han, David M. Chan, Pavlo Molchanov, Trevor Darrell, Hongxu Yin

2026-03-25

Summary

This paper introduces a new method called AutoGaze to help computers understand long, detailed videos more efficiently. It focuses on improving how well 'multi-modal large language models' – which combine vision and language processing – can analyze video content.

What's the problem?

Current AI models struggle with long, high-resolution videos because they treat every part of the video equally, even though much of it is repetitive or unimportant. This wastes enormous amounts of computing power and slows down analysis. Imagine trying to describe a whole movie frame by frame – it's hopelessly inefficient! The sheer volume of visual information overwhelms the system.
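To get a feel for the scale of the problem, here is some back-of-the-envelope arithmetic. The 14×14 patch size is an assumption (it is common in ViTs; the paper's exact tokenizer settings are not given in this summary), and the 1,000-frame 4K setting comes from the abstract below.

```python
# Rough token-count arithmetic for dense (unpruned) video processing.
# Patch size 14x14 is an assumed typical ViT setting, not from the paper.
frame_w, frame_h = 3840, 2160                      # 4K resolution
patch = 14
tokens_per_frame = (frame_w // patch) * (frame_h // patch)
total_tokens = tokens_per_frame * 1000             # 1K frames, per the abstract

print(f"tokens per frame: {tokens_per_frame:,}")   # ~42,000
print(f"total tokens:     {total_tokens:,}")       # ~42 million
```

Tens of millions of visual tokens is far beyond what current LLM context windows and attention budgets can handle, which is why pruning redundant patches before attention is applied matters so much.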

What's the solution?

AutoGaze is a lightweight addition to existing AI models that intelligently selects only the *most important* parts of a video to focus on. It's like a smart zoom that automatically highlights the key regions and actions. It learns what matters by predicting what comes next in the video and using a reward system to refine its selections, keeping just enough patches to reconstruct the video within a user-specified error threshold. This ensures no crucial information is lost while drastically reducing the amount of data the AI needs to process, letting it run much faster and handle much longer, larger videos.
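The core idea – keep adding patches until the remaining reconstruction error falls below a threshold – can be sketched with a toy greedy selector. This is a hypothetical illustration only, not the paper's actual AutoGaze module (which is a learned autoregressive model trained with next-token prediction and reinforcement learning); the mean-patch "background" model and the `threshold` value here are stand-ins.

```python
import numpy as np

def select_patches(frame_patches: np.ndarray, threshold: float = 0.5) -> list[int]:
    """Greedily pick patches until reconstruction error drops below `threshold`.

    frame_patches: (N, D) array of flattened patch features.
    Returns the indices of the kept patches.
    """
    # Crude "redundancy" model: unselected patches are approximated by the mean.
    mean = frame_patches.mean(axis=0)
    recon = np.tile(mean, (len(frame_patches), 1))
    selected: list[int] = []

    def recon_error() -> float:
        return float(np.mean((frame_patches - recon) ** 2))

    while recon_error() > threshold and len(selected) < len(frame_patches):
        # Pick the patch the current reconstruction explains worst.
        residual = np.mean((frame_patches - recon) ** 2, axis=1)
        residual[selected] = -1.0          # never re-pick a kept patch
        i = int(residual.argmax())
        selected.append(i)
        recon[i] = frame_patches[i]        # keep this patch verbatim
    return selected

rng = np.random.default_rng(0)
patches = rng.normal(size=(64, 16))        # 64 toy patches, 16-dim features
kept = select_patches(patches, threshold=0.5)
print(f"kept {len(kept)} of {len(patches)} patches")
```

The key design point the toy code shares with the paper's description is that the *error threshold*, not a fixed token budget, decides how many patches survive: highly redundant frames compress to a handful of patches, while detailed frames keep more.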

Why it matters?

This work is important because it allows AI to process much longer and more detailed videos, opening up possibilities for applications like better video understanding, more accurate video question answering, and improved analysis of real-world footage. The researchers even created a new, challenging benchmark for testing these kinds of AI models using 4K, 5-minute videos, demonstrating that their method significantly outperforms existing approaches.

Abstract

Multi-modal large language models (MLLMs) have advanced general-purpose video understanding but struggle with long, high-resolution videos -- they process every pixel equally in their vision transformers (ViTs) or LLMs despite significant spatiotemporal redundancy. We introduce AutoGaze, a lightweight module that removes redundant patches before they are processed by a ViT or an MLLM. Trained with next-token prediction and reinforcement learning, AutoGaze autoregressively selects a minimal set of multi-scale patches that can reconstruct the video within a user-specified error threshold, eliminating redundancy while preserving information. Empirically, AutoGaze reduces visual tokens by 4x-100x and accelerates ViTs and MLLMs by up to 19x, enabling MLLMs to scale to 1K-frame 4K-resolution videos and achieving superior results on video benchmarks (e.g., 67.0% on VideoMME). Furthermore, we introduce HLVid: the first high-resolution, long-form video QA benchmark with 5-minute 4K-resolution videos, where an MLLM scaled with AutoGaze improves over the baseline by 10.1% and outperforms the previous best MLLM by 4.5%. Project page: https://autogaze.github.io/.