BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
Md Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Gedas Bertasius, Lorenzo Torresani
2025-03-13
Summary
This paper introduces BIMBA, a smarter way for AI to answer questions about long videos: instead of processing every frame, it focuses on the important parts, saving time and compute.
What's the problem?
Current AI models struggle with long videos because analyzing every frame takes too much memory and compute, and naive shortcuts often miss key moments or details.
What's the solution?
BIMBA uses a selective scanning method to pick out the most important video moments, compressing them into a much shorter token sequence that keeps the key information needed to answer questions.
Why does it matter?
This makes AI video analysis faster and cheaper, helping tools like security systems or video tutors work better with long recordings without needing supercomputers.
Abstract
Video Question Answering (VQA) in long videos poses the key challenge of extracting relevant information and modeling long-range dependencies from many redundant frames. The self-attention mechanism provides a general solution for sequence modeling, but it has a prohibitive cost when applied to a massive number of spatiotemporal tokens in long videos. Most prior methods rely on compression strategies to lower the computational cost, such as reducing the input length via sparse frame sampling or compressing the output sequence passed to the large language model (LLM) via space-time pooling. However, these naive approaches over-represent redundant information and often miss salient events or fast-occurring space-time patterns. In this work, we introduce BIMBA, an efficient state-space model to handle long-form videos. Our model leverages the selective scan algorithm to learn to effectively select critical information from high-dimensional video and transform it into a reduced token sequence for efficient LLM processing. Extensive experiments demonstrate that BIMBA achieves state-of-the-art accuracy on multiple long-form VQA benchmarks, including PerceptionTest, NExT-QA, EgoSchema, VNBench, LongVideoBench, and Video-MME. Code and models are publicly available at https://sites.google.com/view/bimba-mllm.
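To make the core idea concrete, below is a minimal NumPy sketch of input-dependent (selective) scanning used to compress a long token sequence into a few summary tokens. This is not the authors' implementation: the gate projection `W_gate`, the scalar retention gate, and the evenly spaced readout positions are all simplified stand-ins for the learned components of an actual selective state-space model.

```python
import numpy as np

def selective_scan_compress(tokens, num_queries, rng):
    """Toy selective scan: an input-dependent recurrence over T token
    vectors, read out at `num_queries` positions to produce a short
    compressed sequence. Weights are random stand-ins for learned ones."""
    T, d = tokens.shape
    # Stand-in for a learned gate projection (hypothetical, for illustration).
    W_gate = rng.standard_normal(d) / np.sqrt(d)
    h = np.zeros(d)
    states = []
    for t in range(T):
        x = tokens[t]
        # Input-dependent retention gate in (0, 1): how much past state to keep.
        a = 1.0 / (1.0 + np.exp(-(x @ W_gate)))
        # Selective recurrence: the input itself decides what is retained.
        h = a * h + (1.0 - a) * x
        states.append(h.copy())
    # Read the running state at a few positions -> compressed token sequence.
    idx = np.linspace(0, T - 1, num_queries).astype(int)
    return np.stack([states[i] for i in idx])  # shape (num_queries, d)

rng = np.random.default_rng(0)
video_tokens = rng.standard_normal((256, 16))  # e.g. 256 spatiotemporal tokens
compressed = selective_scan_compress(video_tokens, num_queries=8, rng=rng)
print(compressed.shape)
```

The key property illustrated here is that the recurrence cost is linear in the number of input tokens, and only the short compressed sequence (8 tokens instead of 256) would be handed to the LLM.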