TimeViper: A Hybrid Mamba-Transformer Vision-Language Model for Efficient Long Video Understanding

Boshen Xu, Zihan Xiao, Jiaze Li, Jianzhong Ju, Zhenbo Luo, Jian Luan, Qin Jin

2025-11-21

Summary

This paper introduces TimeViper, a new artificial intelligence model designed to understand very long videos, like those lasting an hour or more. It combines two different types of neural network building blocks – Mamba and Transformer – to achieve this.

What's the problem?

Understanding long videos is really hard for computers. Traditional methods struggle because they either aren't efficient enough to process all the information, or they lose track of important details over time. The researchers also found that, inside these models, visual information progressively flows into the text tokens as it passes through the model's layers. By the deeper layers, the vision tokens themselves have become largely redundant, so the model wastes computation carrying around details it has already absorbed.

What's the solution?

To solve this, the researchers created a new component called TransV. TransV transfers and compresses the visual information into the language model's instruction (text) tokens, letting the text tokens absorb the visuals so the now-redundant vision tokens can be dropped. This keeps the token sequence short and allows TimeViper to handle hour-long videos with over 10,000 frames. They combined this with a hybrid Mamba-Transformer architecture, leveraging the efficiency of Mamba's state-space layers and the expressivity of Transformer attention to create a more efficient and powerful model.
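To give a rough intuition for the idea, here is a minimal numpy sketch of one way such a transfer step could work: instruction tokens attend to vision tokens (cross-attention) to absorb their information, after which the vision tokens are simply discarded, shrinking the sequence. This is an illustrative assumption, not the paper's actual TransV implementation; the function name and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transfer_and_drop(vision_tokens, instr_tokens):
    """Hypothetical sketch (not the paper's TransV): fold vision-token
    information into instruction tokens via cross-attention, then drop
    the vision tokens entirely."""
    d = instr_tokens.shape[-1]
    # Instruction tokens act as queries; vision tokens as keys/values.
    scores = instr_tokens @ vision_tokens.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    # Residual update: each instruction token absorbs a weighted summary
    # of the vision tokens.
    instr_updated = instr_tokens + attn @ vision_tokens
    return instr_updated  # vision tokens are discarded after this step

# Toy example: 10,000 "vision tokens" are compressed away,
# leaving only 16 instruction tokens to carry forward.
rng = np.random.default_rng(0)
vision = rng.standard_normal((10_000, 64))
instr = rng.standard_normal((16, 64))
out = transfer_and_drop(vision, instr)
print(out.shape)  # (16, 64)
```

The payoff is that later layers only process the short instruction sequence instead of tens of thousands of frame tokens, which is what makes hour-long inputs tractable.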

Why it matters?

This work is important because it represents a step forward in building AI that can truly understand videos like humans do. Being able to process hour-long videos opens up possibilities for applications like detailed video analysis, improved video search, and more sophisticated AI assistants that can understand and respond to visual content over extended periods. It also provides insights into how to best combine different types of neural network architectures for better performance and understanding.

Abstract

We introduce TimeViper, a hybrid vision-language model designed to tackle challenges of long video understanding. Processing long videos demands both an efficient model architecture and an effective mechanism for handling extended temporal contexts. To this end, TimeViper adopts a hybrid Mamba-Transformer backbone that combines the efficiency of state-space models with the expressivity of attention mechanisms. Through this hybrid design, we reveal the vision-to-text information aggregation phenomenon, where information progressively flows from vision tokens to text tokens across increasing LLM depth, resulting in severe vision token redundancy. Motivated by this observation, we propose TransV, a token information transfer module that transfers and compresses vision tokens into instruction tokens while maintaining multimodal understanding capabilities. This design enables TimeViper to process hour-long videos exceeding 10,000 frames. Extensive experiments across multiple benchmarks demonstrate that TimeViper competes with state-of-the-art models while extending frame numbers. We further analyze attention behaviors of both Mamba and Transformer layers, offering new insights into hybrid model interpretability. This work represents an initial step towards developing, interpreting, and compressing hybrid Mamba-Transformer architectures.