
VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning

Xinhao Li, Ziang Yan, Desen Meng, Lu Dong, Xiangyu Zeng, Yinan He, Yali Wang, Yu Qiao, Yi Wang, Limin Wang

2025-04-10


Summary

This paper introduces VideoChat-R1, an AI system that learns to understand videos better by practicing specific tasks and receiving feedback on its answers, like a student improving through trial and error.

What's the problem?

Current AI models for video analysis struggle to track moving objects and pinpoint when events happen in a video, and training methods that work well for text or images don't carry over cleanly to video.

What's the solution?

VideoChat-R1 uses a special training method where it practices tasks like spotting objects in motion or figuring out event timing, gets graded on its answers, and improves based on feedback, all while keeping its general chat skills intact.
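The "gets graded and improves based on feedback" loop is the core idea behind GRPO, the training method the paper builds on: for each question, the model samples a group of candidate answers, each is scored with a rule-based reward, and the scores are normalized within the group so above-average answers are reinforced and below-average ones discouraged. A minimal sketch of that group-relative normalization (the function name and shapes here are illustrative, not the paper's code):

```python
def group_relative_advantages(rewards):
    """Normalize rule-based rewards within one group of sampled answers.

    rewards: list of scores, one per candidate answer to the same question.
    Returns a per-answer advantage: positive for above-average answers,
    negative for below-average ones, zero-mean across the group.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0.0:
        # Every answer scored the same: no relative signal to learn from.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

For example, a group scored [1.0, 0.0, 0.5, 0.5] yields a positive advantage for the first answer and a negative one for the second, while the two average answers get zero; the policy is then nudged toward the positively-scored behaviors.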

Why does it matter?

This helps AI systems analyze videos more accurately for things like security footage review, sports analysis, or helping robots navigate, without needing tons of extra data or losing their ability to chat naturally.

Abstract

Recent advancements in reinforcement learning have significantly advanced the reasoning capabilities of multimodal large language models (MLLMs). While approaches such as Group Relative Policy Optimization (GRPO) and rule-based reward mechanisms demonstrate promise in text and image domains, their application to video understanding remains limited. This paper presents a systematic exploration of Reinforcement Fine-Tuning (RFT) with GRPO for video MLLMs, aiming to enhance spatio-temporal perception while maintaining general capabilities. Our experiments reveal that RFT is highly data-efficient for task-specific improvements. Through multi-task RFT on spatio-temporal perception objectives with limited samples, we develop VideoChat-R1, a powerful video MLLM that achieves state-of-the-art performance on spatio-temporal perception tasks without sacrificing chat ability, while exhibiting emerging spatio-temporal reasoning abilities. Compared to Qwen2.5-VL-7B, VideoChat-R1 boosts performance several-fold in tasks like temporal grounding (+31.8) and object tracking (+31.2). Additionally, it significantly improves on general QA benchmarks such as VideoMME (+0.9), MVBench (+1.0), and Perception Test (+0.9). Our findings underscore the potential of RFT for specialized task enhancement of Video MLLMs. We hope our work offers valuable insights for future RL research in video MLLMs.
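The abstract credits much of the gain to rule-based rewards on spatio-temporal tasks such as temporal grounding. The paper's exact reward formulas aren't reproduced here, but a common verifiable reward for temporal grounding is the intersection-over-union (IoU) between the predicted and ground-truth time segments; the sketch below assumes that formulation (function and argument names are illustrative):

```python
def temporal_iou_reward(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds.

    Returns a value in [0, 1]: 1.0 for a perfect match, 0.0 for
    disjoint segments. Such a reward is directly checkable against
    the annotation, which is what makes it "rule-based".
    """
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```

A prediction of (0, 10) s against a ground truth of (5, 15) s overlaps for 5 s out of a 15 s union, giving a reward of 1/3; because the score needs no learned judge, it can grade thousands of sampled answers cheaply during reinforcement fine-tuning.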