
Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting

Yunlong Tang, Jing Bi, Chao Huang, Susan Liang, Daiki Shimada, Hang Hua, Yunzhong Xiao, Yizhi Song, Pinxin Liu, Mingqian Feng, Junjia Guo, Zhuo Liu, Luchuan Song, Ali Vosoughi, Jinxi He, Liu He, Zeliang Zhang, Jiebo Luo, Chenliang Xu

2025-04-10


Summary

This paper introduces CAT-V, a tool that produces detailed descriptions of any object you select in a video, such as explaining what a specific person or thing is doing over time.

What's the problem?

Current video captioning tools either give vague, whole-video summaries or can't focus on a specific object, so they miss details like how that object moves or changes within a scene.

What's the solution?

CAT-V uses three parts that work together: a segmenter that tracks the chosen object anywhere in the video, a time analyzer that spots when events start and end, and a captioner that writes detailed descriptions using clues from both, all without needing any new training.
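To make the pipeline concrete, here is a minimal, hypothetical sketch of how the three stages could fit together. The stage roles mirror the paper (a SAMURAI-based segmenter, a TRACE-Uni-based temporal analyzer, an InternVL-2.5-based captioner), but every class, method, and signature below is a placeholder assumption, not the actual API of the CAT-V repository.

```python
# Hypothetical sketch of CAT-V's three-stage, training-free pipeline.
# All classes here are placeholders, not the real CAT-V or model APIs.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class VisualPrompt:
    kind: str          # "point", "box", or "region"
    coordinates: list  # e.g. [x, y] for a point, [x1, y1, x2, y2] for a box


@dataclass
class EventCaption:
    start_frame: int
    end_frame: int
    text: str


class Segmenter:
    """Placeholder for the SAMURAI-based tracker: visual prompt -> per-frame masks."""
    def track(self, frames: list, prompt: VisualPrompt) -> list:
        return [None] * len(frames)  # one (stubbed) object mask per frame


class TemporalAnalyzer:
    """Placeholder for TRACE-Uni: frames + masks -> event boundaries."""
    def detect_events(self, frames: list, masks: list) -> List[Tuple[int, int]]:
        return [(0, len(frames))]  # stub: a single event spanning the whole clip


class Captioner:
    """Placeholder for InternVL-2.5: spatiotemporal prompt -> object-centric text."""
    def describe(self, frames: list, masks: list, span: Tuple[int, int]) -> str:
        return "detailed object-centric caption for frames %d-%d" % span


def caption_anything(frames: list, prompt: VisualPrompt) -> List[EventCaption]:
    segmenter, analyzer, captioner = Segmenter(), TemporalAnalyzer(), Captioner()
    masks = segmenter.track(frames, prompt)         # 1) segment & track the object
    events = analyzer.detect_events(frames, masks)  # 2) find event boundaries
    return [                                        # 3) caption each event separately
        EventCaption(start, end, captioner.describe(frames, masks, (start, end)))
        for start, end in events
    ]
```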

Why it matters?

This helps create richer video descriptions for accessibility, lets people ask questions about specific objects in videos, and improves AI tools for editing or analyzing video content.

Abstract

We present CAT-V (Caption AnyThing in Video), a training-free framework for fine-grained object-centric video captioning that enables detailed descriptions of user-selected objects through time. CAT-V integrates three key components: a Segmenter based on SAMURAI for precise object segmentation across frames, a Temporal Analyzer powered by TRACE-Uni for accurate event boundary detection and temporal analysis, and a Captioner using InternVL-2.5 for generating detailed object-centric descriptions. Through spatiotemporal visual prompts and chain-of-thought reasoning, our framework generates detailed, temporally-aware descriptions of objects' attributes, actions, statuses, interactions, and environmental contexts without requiring additional training data. CAT-V supports flexible user interactions through various visual prompts (points, bounding boxes, and irregular regions) and maintains temporal sensitivity by tracking object states and interactions across different time segments. Our approach addresses limitations of existing video captioning methods, which either produce overly abstract descriptions or lack object-level precision, enabling fine-grained, object-specific descriptions while maintaining temporal coherence and spatial accuracy. The GitHub repository for this project is available at https://github.com/yunlong10/CAT-V
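As a usage illustration, the sketch above could be driven by any of the prompt types the abstract lists (points, bounding boxes, and irregular regions). The snippet below reuses the placeholder classes defined earlier and is purely illustrative; it does not reflect the interface of the linked repository.

```python
# Illustrative only: reuses the placeholder VisualPrompt and caption_anything
# defined in the sketch above; frame decoding is stubbed with None values.
frames = [None] * 120  # stand-in for 120 decoded video frames

prompts = [
    VisualPrompt(kind="point", coordinates=[320, 180]),
    VisualPrompt(kind="box", coordinates=[100, 50, 400, 300]),
    VisualPrompt(kind="region", coordinates=[[100, 50], [400, 60], [380, 290]]),
]

for prompt in prompts:
    for event in caption_anything(frames, prompt):
        print(event.start_frame, event.end_frame, event.text)
```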