TimeChat-Captioner: Scripting Multi-Scene Videos with Time-Aware and Structural Audio-Visual Captions

Linli Yao, Yuancheng Wei, Yaojie Zhang, Lei Li, Xinlong Chen, Feifan Song, Ziyue Wang, Kun Ouyang, Yuanxin Liu, Lingpeng Kong, Qi Liu, Pengfei Wan, Kun Gai, Yuanxing Zhang, Xu Sun

2026-02-12

Summary

This paper introduces a new way to automatically describe videos, going beyond simple captions to create detailed, script-like narratives that cover both what is seen and what is heard, with explicit timestamps marking when each event happens.

What's the problem?

Existing video descriptions are often too general and don't provide enough detail to really understand what's going on in a scene, or when things happen within the video. It's hard to get a complete picture of the events just from a typical caption, and there wasn't a good way to measure how well a description actually captured all the important details and timing.

What's the solution?

The researchers propose a new task called Omni Dense Captioning. It asks models to describe videos using a structured format with six key elements, like who is involved, what they're doing, and where it's happening, all tied to specific moments in the video. To support the task, they built a human-annotated benchmark, OmniDCBench, for evaluation, a training dataset, TimeChatCap-42K, and a new metric for judging the quality of these time-aware descriptions, called SodaM. Finally, they trained a model, TimeChat-Captioner-7B, using supervised fine-tuning followed by reinforcement learning (GRPO) with task-specific rewards; it performs really well at this task, even better than Google’s Gemini-2.5-Pro.
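To make the idea of a structured, time-aware caption concrete, here is a minimal sketch of what one scene entry could look like. The field names below are illustrative assumptions, not the paper's actual six-dimensional schema, which is not reproduced in this summary.

```python
# A minimal, hypothetical sketch of one "script-like" scene entry.
# The field names are illustrative assumptions; the paper's exact
# six-dimensional schema is not reproduced here.
scene_caption = {
    "start": 12.0,   # scene start time in seconds
    "end": 18.5,     # scene end time in seconds
    "subjects": "a chef in a white apron",
    "actions": "dices an onion, then slides it into a hot pan",
    "location": "a brightly lit home kitchen",
    "camera": "close-up panning from the cutting board to the stove",
    "audio": "rhythmic chopping followed by sizzling oil",
    "speech": "no dialogue; light background music",
}

def to_script_line(entry: dict) -> str:
    """Render one structured entry as a screenplay-style line."""
    return (f"[{entry['start']:.1f}s-{entry['end']:.1f}s] "
            f"{entry['location'].capitalize()}: {entry['subjects']} "
            f"{entry['actions']}. Camera: {entry['camera']}. "
            f"Audio: {entry['audio']}. Speech: {entry['speech']}.")

print(to_script_line(scene_caption))
```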

Why it matters?

This work is important because more detailed and accurate video descriptions can help computers better understand what's happening in videos. This has a lot of potential applications, like improving video search, helping people with visual impairments, and enabling robots to understand and interact with the world around them. The improved descriptions also make it easier for AI to reason about videos and understand events over time.

Abstract

This paper proposes Omni Dense Captioning, a novel task designed to generate continuous, fine-grained, and structured audio-visual narratives with explicit timestamps. To ensure dense semantic coverage, we introduce a six-dimensional structural schema to create "script-like" captions, enabling readers to vividly imagine the video content scene by scene, akin to a cinematographic screenplay. To facilitate research, we construct OmniDCBench, a high-quality, human-annotated benchmark, and propose SodaM, a unified metric that evaluates time-aware detailed descriptions while mitigating scene boundary ambiguity. Furthermore, we construct a training dataset, TimeChatCap-42K, and present TimeChat-Captioner-7B, a strong baseline trained via SFT and GRPO with task-specific rewards. Extensive experiments demonstrate that TimeChat-Captioner-7B achieves state-of-the-art performance, surpassing Gemini-2.5-Pro, while its generated dense descriptions significantly boost downstream capabilities in audio-visual reasoning (DailyOmni and WorldSense) and temporal grounding (Charades-STA). All datasets, models, and code will be made publicly available at https://github.com/yaolinli/TimeChat-Captioner.
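For readers curious how a time-aware captioning metric can cope with fuzzy scene boundaries, the sketch below scores each predicted scene by its best temporal-overlap-weighted caption match against the reference scenes. This is a loose illustration of the general idea, not the actual SodaM implementation; the token-level F1 used here is a stand-in for whatever caption-similarity measure the paper adopts.

```python
# A loose sketch of time-aware caption evaluation (NOT the actual SodaM metric):
# each predicted scene is matched to the reference scene with the highest
# temporal-IoU-weighted caption similarity.

def temporal_iou(a, b):
    """IoU of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def token_f1(pred: str, ref: str) -> float:
    """Stand-in caption similarity: token-level F1 overlap."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    if not p or not r:
        return 0.0
    overlap = len(p & r)
    prec, rec = overlap / len(p), overlap / len(r)
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0

def timed_caption_score(preds, refs):
    """preds/refs: lists of (start, end, caption) tuples."""
    total = 0.0
    for ps, pe, ptext in preds:
        best = max(
            (temporal_iou((ps, pe), (rs, re)) * token_f1(ptext, rtext)
             for rs, re, rtext in refs),
            default=0.0,
        )
        total += best
    return total / max(len(preds), 1)
```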