Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding

Jialuo Li, Bin Li, Jiahao Li, Yan Lu

2025-12-04

Summary

This paper investigates how to best feed long videos into artificial intelligence models that process both text and visual information, specifically focusing on making the process more efficient.

What's the problem?

Large AI models struggle with long videos because processing every single frame is too computationally expensive and they can only handle a limited amount of information at once. Current solutions try to intelligently pick the most important frames based on what the user is asking, but this frame selection process itself takes a lot of computing power.

What's the solution?

The researchers realized that not all questions require the same level of detailed frame selection. They categorized questions into two types: 'global' questions that need a general understanding of the whole video, and 'localized' questions that focus on specific moments. They developed a system called DIG that uses simple, efficient uniform frame sampling for global questions and only activates a more expensive query-aware frame selection pipeline when a localized question demands it. The system requires no additional training; it simply adapts its strategy based on the type of question asked.
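The routing idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: `is_localized` stands in for DIG's query-type decision with a toy keyword heuristic, and `relevance_fn` stands in for a real query-frame similarity model (e.g., a CLIP-style scorer).

```python
def uniform_sample(num_frames, budget):
    """Evenly spaced frame indices across the whole video."""
    if budget >= num_frames:
        return list(range(num_frames))
    step = num_frames / budget
    return [int(i * step) for i in range(budget)]

def is_localized(query):
    """Toy stand-in for DIG's query typing: localized queries point at
    specific moments; global queries ask about the video as a whole."""
    cues = ("when", "moment", "scene where", "at what point")
    return any(c in query.lower() for c in cues)

def select_frames(query, num_frames, budget, relevance_fn=None):
    """Divide (type the query), then ground (pick frames accordingly)."""
    if not is_localized(query) or relevance_fn is None:
        # Cheap path: global queries get plain uniform sampling.
        return uniform_sample(num_frames, budget)
    # Query-aware path: score each frame against the query,
    # keep the top-`budget` frames, and restore temporal order.
    scores = [(relevance_fn(query, i), i) for i in range(num_frames)]
    top = sorted(scores, reverse=True)[:budget]
    return sorted(i for _, i in top)
```

For example, a global query like "Summarize the plot" takes the uniform path, while "When does the dog jump?" triggers scoring and returns the frames the (assumed) relevance model rates highest.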

Why it matters?

This work is important because it offers a way to significantly improve the performance of AI models on long videos without requiring massive amounts of computing power. By smartly choosing when to use complex frame selection, it makes long-form video understanding more practical and accessible, allowing AI to better understand and respond to questions about videos.

Abstract

The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection, methods that often incur significant computational overhead. This paper challenges the assumption that such complex search mechanisms are universally necessary. We first identify and validate a query typology distinguishing between global queries and localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries indeed necessitate query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy based on the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.