
VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos

Shehan Munasinghe, Hanan Gani, Wenqi Zhu, Jiale Cao, Eric Xing, Fahad Shahbaz Khan, Salman Khan

2024-11-08


Summary

This paper presents VideoGLaMM, a large multimodal model that links text and video at the pixel level: given a user's textual input, it can ground its responses in precise object masks within the video, making it easier to understand and describe video content.

What's the problem?

Understanding videos and connecting them to text prompts is hard because videos contain a huge amount of detail that changes over both space and time. Existing models can handle basic conversations about a video, but they struggle to link specific words in the text to the exact parts of the video they refer to, especially for fine details such as individual objects or the actions they are involved in.

What's the solution?

VideoGLaMM addresses this problem by connecting three main components: a Large Language Model (LLM) for processing text, a dual vision encoder that captures both spatial information (how things are arranged in each frame) and temporal information (how things change over time), and a spatio-temporal decoder that generates precise segmentation masks for objects in the video. Tunable Vision-to-Language (V-L) and Language-to-Vision (L-V) adapters tie these pieces together, so the model can follow instructions and produce detailed responses that are grounded in the visual content of the video. The researchers also built a large training and evaluation dataset of roughly 38k grounded video-QA triplets, covering 83k objects and 671k segmentation masks, using a semi-automatic annotation pipeline.
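To make that pipeline more concrete, here is a minimal, illustrative sketch of how the pieces could fit together in code. This is not the authors' implementation: every module below (the encoders, the adapters, the LLM stand-in, and the mask head) is a simplified placeholder, and the shapes are assumptions chosen only to show the data flow from video frames and text, through the V-L adapter into the language model, and back out through the L-V adapter to the spatio-temporal mask decoder.

```python
import torch
import torch.nn as nn

class VideoGLaMMSketch(nn.Module):
    """Illustrative sketch of the VideoGLaMM-style pipeline described above.

    All module names, layers, and shapes are placeholders, not the
    authors' actual architecture.
    """

    def __init__(self, d_vis=1024, d_llm=4096, d_dec=256, frame_dim=3 * 224 * 224):
        super().__init__()
        # Dual vision encoder: one branch for per-frame spatial detail,
        # one for temporal dynamics across frames (both stand-ins here).
        self.spatial_encoder = nn.Linear(frame_dim, d_vis)
        self.temporal_encoder = nn.Linear(frame_dim, d_vis)
        # Tunable V-L adapter: projects visual features into the LLM's token space.
        self.v_l_adapter = nn.Linear(d_vis, d_llm)
        # Stand-in for the LLM: consumes visual + text tokens, emits hidden states.
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_llm, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Tunable L-V adapter: maps the LLM's hidden states back into prompts
        # for the mask decoder.
        self.l_v_adapter = nn.Linear(d_llm, d_dec)
        # Spatio-temporal decoder stand-in: turns prompts into per-pixel masks.
        self.mask_decoder = nn.Linear(d_dec, 224 * 224)

    def forward(self, frames, text_embeds):
        # frames: (batch, time, frame_dim) flattened frames
        # text_embeds: (batch, n_text_tokens, d_llm) embedded instruction tokens
        spatial = self.spatial_encoder(frames)    # per-frame appearance features
        temporal = self.temporal_encoder(frames)  # motion / temporal features
        vis_tokens = self.v_l_adapter(spatial + temporal)          # (batch, time, d_llm)
        hidden = self.llm(torch.cat([vis_tokens, text_embeds], dim=1))
        # In the real model, special grounding tokens in the response would be
        # selected; here we simply use the last hidden state as the prompt.
        prompt = self.l_v_adapter(hidden[:, -1])
        masks = self.mask_decoder(prompt).view(-1, 224, 224)       # one mask per clip
        return hidden, masks
```

The design choice this sketch tries to convey is that grounding is driven by the language model itself: whatever the LLM decides to mention is translated, via the L-V adapter, into prompts that the spatio-temporal decoder turns into pixel-level masks.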

Why it matters?

This research matters because it tightens the link between natural language and video understanding: instead of only answering questions about a video, the model can point to the exact pixels its answer refers to. That capability could support applications such as creating educational content, improving accessibility for visually impaired users, or enhancing interactive entertainment experiences.

Abstract

Fine-grained alignment between videos and text is challenging due to complex spatial and temporal dynamics in videos. Existing video-based Large Multimodal Models (LMMs) handle basic conversations but struggle with precise pixel-level grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed for fine-grained pixel-level grounding in videos based on user-provided textual inputs. Our design seamlessly connects three key components: a Large Language Model, a dual vision encoder that emphasizes both spatial and temporal details, and a spatio-temporal decoder for accurate mask generation. This connection is facilitated via tunable V-L and L-V adapters that enable close Vision-Language (VL) alignment. The architecture is trained to synchronize both spatial and temporal elements of video content with textual instructions. To enable fine-grained grounding, we curate a multimodal dataset featuring detailed visually-grounded conversations using a semi-automatic annotation pipeline, resulting in a diverse set of 38k video-QA triplets along with 83k objects and 671k masks. We evaluate VideoGLaMM on three challenging tasks: Grounded Conversation Generation, Visual Grounding, and Referring Video Segmentation. Experimental results show that our model consistently outperforms existing approaches across all three tasks.
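As a rough illustration of what one of the 38k visually-grounded video-QA triplets could contain, here is a hypothetical data layout. The field names, file paths, and example values are invented for explanation only; the released dataset may use a different schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundedVideoQATriplet:
    """Hypothetical layout of one grounded video-QA sample (illustrative only)."""
    video_path: str            # source video clip
    question: str              # user query about the clip
    answer: str                # grounded response; object phrases link to masks
    object_phrases: List[str]  # phrases in the answer that refer to objects
    mask_paths: List[str]      # per-object segmentation masks across frames

# Example (all values are made up for illustration):
sample = GroundedVideoQATriplet(
    video_path="clip_0001.mp4",
    question="What is the person on the left holding?",
    answer="The person on the left is holding [a red umbrella].",
    object_phrases=["a red umbrella"],
    mask_paths=["clip_0001/umbrella_frame_000.png"],
)
```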