Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study

Yichi Zhang, Yao Huang, Yitong Sun, Chang Liu, Zhe Zhao, Zhengwei Fang, Yifan Wang, Huanran Chen, Xiao Yang, Xingxing Wei, Hang Su, Yinpeng Dong, Jun Zhu

2024-07-19

Summary

This paper introduces MultiTrust, a new benchmark designed to evaluate the trustworthiness of Multimodal Large Language Models (MLLMs) across five key areas: truthfulness, safety, robustness, fairness, and privacy.

What's the problem?

Although MLLMs are powerful and can handle a wide range of tasks, they have significant trustworthiness issues. Existing research does not evaluate these models comprehensively, which makes it hard to pinpoint where they fail and how they can be improved. That gap becomes a real problem when the models are deployed in real-world applications.

What's the solution?

The authors built the MultiTrust benchmark to assess MLLMs more thoroughly. It spans 32 tasks with self-curated datasets and evaluates both multimodal risks and cross-modal impacts. In extensive experiments with 21 modern MLLMs, they uncovered previously unexplored trustworthiness issues: many models struggle to perceive visually confusing images, are vulnerable to multimodal jailbreaking and adversarial attacks, and are prone to disclosing private information and revealing ideological and cultural biases even when paired with irrelevant images.
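The summary does not spell out the released toolbox's interface, but the general shape of such a benchmark (per-aspect task collections, a model wrapper, and an evaluation loop that aggregates scores) can be sketched roughly. The names below (`Sample`, `TrustTask`, `MLLMClient`, `run_benchmark`) and the scoring scheme are illustrative assumptions, not the actual MultiTrust toolbox API.

```python
# Hypothetical sketch of a trustworthiness benchmark harness.
# Names and structure are illustrative assumptions, not the MultiTrust API.
from dataclasses import dataclass
from typing import Callable, Optional
from collections import defaultdict


@dataclass
class Sample:
    prompt: str                 # text part of the query
    image_path: Optional[str]   # image part; None for text-only controls


@dataclass
class TrustTask:
    name: str     # e.g. "visual-confusion-perception"
    aspect: str   # one of: truthfulness, safety, robustness, fairness, privacy
    samples: list[Sample]
    score_fn: Callable[[Sample, str], float]  # (sample, model response) -> score in [0, 1]


class MLLMClient:
    """Thin wrapper around a multimodal model; subclasses implement generate()."""
    def generate(self, prompt: str, image_path: Optional[str]) -> str:
        raise NotImplementedError


def run_benchmark(model: MLLMClient, tasks: list[TrustTask]) -> dict[str, float]:
    """Run every task and average the scores per trustworthiness aspect."""
    per_aspect = defaultdict(list)
    for task in tasks:
        for sample in task.samples:
            response = model.generate(sample.prompt, sample.image_path)
            per_aspect[task.aspect].append(task.score_fn(sample, response))
    return {aspect: sum(scores) / len(scores) for aspect, scores in per_aspect.items()}
```

In a full benchmark each aspect would contain several such tasks (32 in total here), with task-specific scoring functions, for example refusal detection for safety prompts or answer accuracy for perception tasks.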

Why it matters?

This research is crucial because it helps improve the reliability of AI systems that use MLLMs. By identifying trustworthiness issues and providing a framework for evaluation, this work lays the groundwork for developing safer and more effective AI technologies that can be trusted in sensitive areas like healthcare, finance, and education.

Abstract

Despite the superior capabilities of Multimodal Large Language Models (MLLMs) across diverse tasks, they still face significant trustworthiness challenges. Yet, current literature on the assessment of trustworthy MLLMs remains limited, lacking a holistic evaluation to offer thorough insights into future improvements. In this work, we establish MultiTrust, the first comprehensive and unified benchmark on the trustworthiness of MLLMs across five primary aspects: truthfulness, safety, robustness, fairness, and privacy. Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts, encompassing 32 diverse tasks with self-curated datasets. Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks, highlighting the complexities introduced by the multimodality and underscoring the necessity for advanced methodologies to enhance their reliability. For instance, typical proprietary models still struggle with the perception of visually confusing images and are vulnerable to multimodal jailbreaking and adversarial attacks; MLLMs are more inclined to disclose privacy in text and reveal ideological and cultural biases even when paired with irrelevant images in inference, indicating that the multimodality amplifies the internal risks from base LLMs. Additionally, we release a scalable toolbox for standardized trustworthiness research, aiming to facilitate future advancements in this important field. Code and resources are publicly available at: https://multi-trust.github.io/.
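The abstract's observation that irrelevant images can amplify risks inherited from the base LLM suggests a simple cross-modal comparison: query the model with a text-only prompt, then with the same prompt paired with an unrelated image, and compare a trust-related metric between the two. The sketch below uses privacy leakage as that metric; the `leakage_score` helper and the `model.generate(prompt, image)` interface are assumptions for illustration (matching the hypothetical client above), not code from the released toolbox.

```python
# Illustrative sketch of a cross-modal impact check: compare a model's behavior
# on a text-only prompt versus the same prompt paired with an irrelevant image.
# The leakage_score helper and the model interface are illustrative assumptions.

def leakage_score(response: str, private_tokens: list[str]) -> float:
    """Fraction of known private tokens that appear verbatim in the response."""
    if not private_tokens:
        return 0.0
    return sum(tok in response for tok in private_tokens) / len(private_tokens)


def cross_modal_delta(model, prompt: str, irrelevant_image: str,
                      private_tokens: list[str]) -> float:
    """Positive values mean the irrelevant image increased privacy leakage."""
    text_only = model.generate(prompt, None)
    with_image = model.generate(prompt, irrelevant_image)
    return leakage_score(with_image, private_tokens) - leakage_score(text_only, private_tokens)
```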