Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation

Hongcheng Gao, Jiashu Qu, Jingyi Tang, Baolong Bi, Yue Liu, Hongyu Chen, Li Liang, Li Su, Qingming Huang

2025-03-26

Summary

This paper looks at how AI models that understand both visual and text inputs sometimes make things up, or "hallucinate", when analyzing videos.

What's the problem?

AI models can give answers that seem correct but are actually wrong, a problem known as hallucination. This makes them unreliable, especially when dealing with videos, which are harder to analyze than static images or text.

What's the solution?

The researchers built a benchmark, called HAVEN, to test how often these AI models make things up when analyzing videos. They then used this benchmark to study what causes these errors and developed a training method that reduces them by improving the AI's reasoning skills.

Why it matters?

This work matters because it helps make AI models more trustworthy when analyzing videos, which is important for applications like video search and content moderation.

Abstract

The hallucination of large multimodal models (LMMs), providing responses that appear correct but are actually incorrect, limits their reliability and applicability. This paper aims to study the hallucination problem of LMMs in the video modality, which is dynamic and more challenging compared to static modalities like images and text. From this motivation, we first present a comprehensive benchmark termed HAVEN for evaluating hallucinations of LMMs in video understanding tasks. It is built upon three dimensions, i.e., hallucination causes, hallucination aspects, and question formats, resulting in 6K questions. Then, we quantitatively study 7 influential factors on hallucinations, e.g., video duration, model sizes, and model reasoning, via experiments of 16 LMMs on the presented benchmark. In addition, inspired by recent thinking models like OpenAI o1, we propose a video-thinking model to mitigate the hallucinations of LMMs via supervised reasoning fine-tuning (SRFT) and thinking-based direct preference optimization (TDPO), where SRFT enhances reasoning capabilities while TDPO reduces hallucinations in the thinking process. Extensive experiments and analyses demonstrate the effectiveness of this approach. Remarkably, it improves the baseline by 7.65% in accuracy on hallucination evaluation and reduces the bias score by 4.5%. The code and data are public at https://github.com/Hongcheng-Gao/HAVEN.
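The paper's TDPO stage builds on direct preference optimization, which trains a model to prefer a "chosen" response (e.g., a faithful reasoning trace) over a "rejected" one (e.g., a hallucinated trace). As a rough illustration, here is a minimal sketch of the standard DPO loss for a single preference pair; this is not the authors' exact TDPO objective, and the function name, inputs, and beta value are illustrative:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    logp_*      : summed log-probabilities of the chosen/rejected
                  responses under the policy being trained.
    ref_logp_*  : the same quantities under a frozen reference model.
    beta        : temperature controlling deviation from the reference.
    """
    # DPO margin: how much more the policy (vs. the reference)
    # prefers the chosen response over the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Loss is -log(sigmoid(margin)); computed as softplus(-margin),
    # with a guard against overflow for very negative margins.
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# Toy usage: the policy assigns a higher relative log-probability to
# the chosen (faithful) trace, so the loss falls below log(2).
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0, beta=0.1)
```

Minimizing this loss pushes the policy to widen the log-probability gap between faithful and hallucinated reasoning traces, without drifting too far from the reference model; in the paper's setting the pairs would come from the model's own thinking process.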