Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination
Dingjie Song, Sicheng Lai, Shunian Chen, Lichao Sun, Benyou Wang
2024-11-07

Summary
This paper introduces MM-Detect, a framework for identifying data contamination in multimodal large language models (MLLMs), a problem that can inflate benchmark scores and undermine the reliability of their evaluation.
What's the problem?
As multimodal large language models, which can process both text and images, become more widespread, concern about data contamination is growing. Contamination occurs when benchmark or test data leaks into a model's training data, inflating its scores and making evaluation results unreliable. Existing detection methods designed for text-only LLMs work poorly for MLLMs because these models handle multiple data modalities and go through several training phases.
What's the solution?
The researchers developed MM-Detect, a specialized framework that detects contamination in MLLMs by comparing how well a model performs on benchmark tasks before and after controlled perturbations of the data; a model whose performance drops sharply under such perturbations has likely seen the original data during training. Testing MM-Detect on a range of models, they found it could identify varying degrees of contamination. They also investigated where contamination might be introduced: during the pre-training of the underlying language model or during the multimodal fine-tuning stage.
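To make the perturbation idea concrete, here is a minimal sketch (not the authors' implementation) of one such test: comparing a model's accuracy on multiple-choice visual questions before and after shuffling the answer options. The `answer_question` callable and the item schema are hypothetical stand-ins introduced only for illustration.

```python
# Minimal sketch of perturbation-based contamination detection.
# Assumption: `answer_question(image, question, options)` wraps some MLLM and
# returns the option string the model picks; it is a hypothetical stand-in,
# not the MM-Detect API.

import random
from typing import Callable, Sequence


def contamination_score(
    dataset: Sequence[dict],
    answer_question: Callable[[str, str, Sequence[str]], str],
    seed: int = 0,
) -> dict:
    """Compare accuracy on original vs. option-shuffled multiple-choice items.

    Each item is assumed to look like:
        {"image": <path or array>, "question": str,
         "options": [str, ...], "answer": str}

    A large accuracy drop after shuffling the answer options suggests the
    model may have memorized the benchmark's surface form (a contamination
    signal), since genuine understanding should be largely insensitive to
    option order.
    """
    rng = random.Random(seed)
    correct_original = 0
    correct_shuffled = 0

    for item in dataset:
        # Accuracy on the benchmark as published.
        pred = answer_question(item["image"], item["question"], item["options"])
        correct_original += pred == item["answer"]

        # Accuracy after a label-preserving perturbation: shuffle option order.
        shuffled = list(item["options"])
        rng.shuffle(shuffled)
        pred = answer_question(item["image"], item["question"], shuffled)
        correct_shuffled += pred == item["answer"]

    n = len(dataset)
    acc_original = correct_original / n
    acc_shuffled = correct_shuffled / n
    return {
        "accuracy_original": acc_original,
        "accuracy_perturbed": acc_shuffled,
        # Positive gap = performance that evaporates under perturbation.
        "accuracy_drop": acc_original - acc_shuffled,
    }
```

In practice one would run this over an entire benchmark split and flag models whose `accuracy_drop` is statistically significant; the thresholds and the specific perturbations used by MM-Detect are described in the paper itself.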
Why it matters?
This research is significant because it helps ensure that multimodal language models are trained with clean and reliable data. By identifying and addressing data contamination, we can improve the accuracy and trustworthiness of these models, which are increasingly used in applications like image recognition, automated customer service, and more.
Abstract
The rapid progression of multimodal large language models (MLLMs) has demonstrated superior performance on various multimodal benchmarks. However, the issue of data contamination during training creates challenges in performance evaluation and comparison. While numerous methods exist for detecting dataset contamination in large language models (LLMs), they are less effective for MLLMs due to their various modalities and multiple training phases. In this study, we introduce a multimodal data contamination detection framework, MM-Detect, designed for MLLMs. Our experimental results indicate that MM-Detect is sensitive to varying degrees of contamination and can highlight significant performance improvements due to leakage of the training set of multimodal benchmarks. Furthermore, we explore the possibility of contamination originating from the pre-training phase of the LLMs used by MLLMs and from the fine-tuning phase of MLLMs, offering new insights into the stages at which contamination may be introduced.