Video Reality Test: Can AI-Generated ASMR Videos fool VLMs and Humans?

Jiaqi Wang, Weijia Wu, Yi Zhan, Rui Zhao, Ming Hu, James Cheng, Wei Liu, Philip Torr, Kevin Qinghong Lin

2025-12-17

Summary

This paper investigates how realistic AI-generated videos are, specifically focusing on whether they can convincingly fool both humans and advanced AI systems called Visual Language Models (VLMs). It highlights a growing concern as video generation technology rapidly improves, making it harder to distinguish between real and fake content.

What's the problem?

Currently, most benchmarks for detecting AI-generated videos ignore sound, cover only broad video types, and treat detection as a simple real-vs-fake classification task. This research points out that we don't really know whether the newest AI video generators can create videos with both realistic visuals *and* sound that can truly trick people, or even sophisticated AI. The problem is that existing benchmarks aren't challenging enough to reveal the true capabilities of these models.

What's the solution?

The researchers created a new benchmark called 'Video Reality Test' using real ASMR videos – videos designed to create a tingling sensation through sound and visuals. They set up a competition where AI models tried to *create* fake videos that would fool human reviewers and other AI models (VLMs) acting as detectors. The VLMs and humans then had to decide which videos were real and which were AI-generated. They specifically tested how much adding sound helped with detection and if simple things like watermarks could throw the AI off.
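The creator-reviewer protocol described above can be sketched as a simple scoring loop: a pool of clips (half real, half generated) is shown to a reviewer, and the reviewer's accuracy is compared against the 50% random baseline. This is a minimal illustrative sketch; all names (`reviewer_accuracy`, the file names, the guessing reviewer) are hypothetical and not the paper's actual code or API.

```python
import random

# Hypothetical sketch of the adversarial creator-reviewer evaluation.
# Real clips and creator-generated fakes are mixed; a reviewer (a VLM
# or a human in the paper) must label each clip real or fake.

def reviewer_accuracy(videos, reviewer):
    """Fraction of real/fake verdicts the reviewer gets right."""
    correct = sum(reviewer(v["clip"]) == v["is_real"] for v in videos)
    return correct / len(videos)

# Toy pool: half real ASMR clips, half produced by a "creator" model.
pool = (
    [{"clip": f"real_{i}.mp4", "is_real": True} for i in range(8)]
    + [{"clip": f"fake_{i}.mp4", "is_real": False} for i in range(8)]
)

# A coin-flipping reviewer hovers near the 50% random baseline; for
# comparison, the paper reports 56% for the strongest VLM reviewer
# (Gemini 2.5-Pro) versus 81.25% for human experts.
random.seed(0)
guesser = lambda clip: random.random() < 0.5

acc = reviewer_accuracy(pool, guesser)
print(f"guessing reviewer accuracy: {acc:.2%}")
```

A real reviewer would replace `guesser` with a call that passes the clip (and its audio track) to a VLM and parses its real/fake verdict; the accuracy computation stays the same.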

Why it matters?

The findings show that even the best AI video generators can still fool many VLMs, and even humans aren't perfect at spotting fakes. This is important because it shows the current limits of AI video realism and highlights that VLMs aren't as good as humans at understanding the subtle details of audio-visual consistency. It emphasizes the need for better detection methods as AI-generated videos become more prevalent and potentially misleading.

Abstract

Recent advances in video generation have produced vivid content that is often indistinguishable from real videos, making AI-generated video detection an emerging societal challenge. Prior AIGC detection benchmarks mostly evaluate video without audio, target broad narrative domains, and focus solely on classification. Yet it remains unclear whether state-of-the-art video generation models can produce immersive, audio-paired videos that reliably deceive humans and VLMs. To this end, we introduce Video Reality Test, an ASMR-sourced video benchmark suite for testing perceptual realism under tight audio-visual coupling, featuring the following dimensions: (i) Immersive ASMR video-audio sources. Built on carefully curated real ASMR videos, the benchmark targets fine-grained action-object interactions with diversity across objects, actions, and backgrounds. (ii) Peer-review evaluation. An adversarial creator-reviewer protocol where video generation models act as creators aiming to fool reviewers, while VLMs serve as reviewers seeking to identify fakeness. Our experimental findings show that the best creator, Veo3.1-Fast, fools most VLMs: the strongest reviewer (Gemini 2.5-Pro) achieves only 56% accuracy (random baseline: 50%), far below that of human experts (81.25%). Adding audio improves real-fake discrimination, yet superficial cues such as watermarks can still significantly mislead models. These findings delineate the current boundary of video generation realism and expose limitations of VLMs in perceptual fidelity and audio-visual consistency. Our code is available at https://github.com/video-reality-test/video-reality-test.