LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models
Junyan Ye, Baichuan Zhou, Zilong Huang, Junan Zhang, Tianyi Bai, Hengrui Kang, Jun He, Honglin Lin, Zihao Wang, Tong Wu, Zhizheng Wu, Yiping Chen, Dahua Lin, Conghui He, Weijia Li
2024-10-15
Summary
This paper introduces LOKI, a new benchmark designed to evaluate how well large multimodal models (LMMs) can detect synthetic data across five modalities: video, image, 3D, text, and audio.
What's the problem?
As AI-generated content becomes more common, it is getting harder to tell the difference between real and fake data. This is a problem because fake data can mislead people and affect decision-making. Existing detection methods are not comprehensive: they typically cover only a single modality or task, and there is no systematic way to assess how well LMMs perform at this job across modalities.
What's the solution?
LOKI provides a structured way to test LMMs across a wide range of data types (video, image, 3D, text, and audio), with 18,000 curated questions spanning 26 subcategories and clear difficulty levels. Its tasks range from coarse-grained real-vs-fake judgment and multiple-choice questions to fine-grained anomaly selection and explanation, allowing a thorough evaluation of how well these models can identify synthetic data. The benchmark has been used to evaluate 22 open-source and 6 closed-source models to see how effectively they detect fake content.
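To make the benchmark's structure concrete, here is a minimal sketch of how such items and their scoring could be represented. Note that `LokiItem`, its field names, and `score_items` are hypothetical illustrations, not the benchmark's actual data format or official evaluation code.

```python
from dataclasses import dataclass, field

@dataclass
class LokiItem:
    # Hypothetical schema for a LOKI-style benchmark item.
    modality: str                  # "video", "image", "3d", "text", or "audio"
    task: str                      # "judgment", "multiple_choice",
                                   # "anomaly_selection", or "explanation"
    question: str
    options: list = field(default_factory=list)  # empty for open-ended tasks
    answer: str = ""

def score_items(items, predict):
    """Accuracy over closed-form tasks (judgment / multiple choice).

    `predict` is any callable mapping an item to the model's answer string.
    Open-ended explanation tasks would need separate grading (e.g. human
    or LLM-as-judge), so they are excluded here.
    """
    closed = [it for it in items if it.task in ("judgment", "multiple_choice")]
    if not closed:
        return 0.0
    correct = sum(
        predict(it).strip().lower() == it.answer.strip().lower()
        for it in closed
    )
    return correct / len(closed)

# Toy usage: a trivial baseline that always answers "synthetic".
items = [
    LokiItem("image", "judgment",
             "Is this image real or AI-generated?",
             ["real", "synthetic"], "synthetic"),
    LokiItem("audio", "judgment",
             "Is this audio clip real or AI-generated?",
             ["real", "synthetic"], "real"),
]
print(score_items(items, lambda it: "synthetic"))  # 0.5 on this toy set
```

Separating closed-form scoring from open-ended explanation grading mirrors the paper's split between coarse-grained judgment/multiple-choice tasks and fine-grained anomaly and explanation tasks.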
Why it matters?
This research is significant because it helps improve the tools available for detecting AI-generated content, which is crucial in maintaining trust in information online. By evaluating LMMs with LOKI, developers can better understand their strengths and weaknesses in identifying synthetic data, leading to advancements in AI safety and reliability.
Abstract
With the rapid development of AI-generated content, the future internet may be inundated with synthetic data, making the discrimination of authentic and credible multimodal data increasingly challenging. Synthetic data detection has thus garnered widespread attention, and the performance of large multimodal models (LMMs) in this task has attracted significant interest. LMMs can provide natural language explanations for their authenticity judgments, enhancing the explainability of synthetic content detection. Simultaneously, the task of distinguishing between real and synthetic data effectively tests the perception, knowledge, and reasoning capabilities of LMMs. In response, we introduce LOKI, a novel benchmark designed to evaluate the ability of LMMs to detect synthetic data across multiple modalities. LOKI encompasses video, image, 3D, text, and audio modalities, comprising 18K carefully curated questions across 26 subcategories with clear difficulty levels. The benchmark includes coarse-grained judgment and multiple-choice questions, as well as fine-grained anomaly selection and explanation tasks, allowing for a comprehensive analysis of LMMs. We evaluated 22 open-source LMMs and 6 closed-source models on LOKI, highlighting their potential as synthetic data detectors and also revealing some limitations in the development of LMM capabilities. More information about LOKI can be found at https://opendatalab.github.io/LOKI/