
QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehension

Yongdong Luo, Wang Chen, Xiawu Zheng, Weizhong Huang, Shukang Yin, Haojia Lin, Chaoyou Fu, Jinfa Huang, Jiayi Ji, Jiebo Luo, Rongrong Ji

2025-03-12


Summary

This paper introduces QuoTA, a plug-and-play module that helps AI focus on the most important parts of long videos by allocating attention to key frames based on what the user is asking, making video understanding faster and more accurate.

What's the problem?

Current AI video tools waste computation on unimportant parts of a video because they spend visual tokens on every frame without first checking whether a frame is relevant to the user's question.

What's the solution?

QuoTA first breaks the user's question into simpler sub-questions using Chain-of-Thought reasoning, uses those to score how relevant each frame is to the query, and then assigns the visual token budget to frames according to those scores, all before the model begins its main processing.
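To make the token-assignment step concrete, here is a minimal sketch of allocating a fixed visual-token budget across frames in proportion to their relevance scores. The function name and inputs are illustrative assumptions, not the paper's actual code; in QuoTA the per-frame scores come from an LVLM prompted with the decoupled sub-queries.

```python
def allocate_tokens(relevance_scores, total_budget):
    """Split a fixed visual-token budget across frames in proportion
    to their query-relevance scores (higher score -> more tokens).

    Hypothetical helper; QuoTA's real scoring and assignment details
    are in the open-sourced repository."""
    total = sum(relevance_scores)
    if total == 0:
        # No frame looks relevant: fall back to a uniform split.
        base = total_budget // len(relevance_scores)
        return [base] * len(relevance_scores)
    raw = [s / total * total_budget for s in relevance_scores]
    alloc = [int(r) for r in raw]
    # Hand out leftover tokens to frames with the largest remainders,
    # so the allocation always sums exactly to the budget.
    leftover = total_budget - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# Example: 3 frames, the middle frame most relevant, a 100-token budget.
print(allocate_tokens([0.1, 0.8, 0.1], 100))  # -> [10, 80, 10]
```

The key property this illustrates is that the total token budget stays fixed (matching the paper's "identical visual token budget" claim); only its distribution across frames changes with the query.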

Why does it matter?

This makes AI video analysis quicker and more accurate for tasks like summarizing lectures or finding specific moments in surveillance footage, saving time and energy.

Abstract

Recent advances in long video understanding typically mitigate visual redundancy through visual token pruning based on attention distribution. However, while existing methods employ post-hoc low-response token pruning in decoder layers, they overlook the input-level semantic correlation between visual tokens and instructions (query). In this paper, we propose QuoTA, an ante-hoc training-free module that extends existing large video-language models (LVLMs) for visual token assignment based on query-oriented frame-level importance assessment. The query-oriented token selection is crucial as it aligns visual processing with task-specific requirements, optimizing token budget utilization while preserving semantically relevant content. Specifically, (i) QuoTA strategically allocates frame-level importance scores based on query relevance, enabling one-time visual token assignment before cross-modal interactions in decoder layers, (ii) we decouple the query through Chain-of-Thoughts reasoning to facilitate more precise LVLM-based frame importance scoring, and (iii) QuoTA offers plug-and-play functionality that extends to existing LVLMs. Extensive experimental results demonstrate that implementing QuoTA with LLaVA-Video-7B yields an average performance improvement of 3.2% across six benchmarks (including Video-MME and MLVU) while operating within an identical visual token budget as the baseline. Codes are open-sourced at https://github.com/MAC-AutoML/QuoTA.