Multi-Dimensional Insights: Benchmarking Real-World Personalization in Large Multimodal Models
YiFan Zhang, Shanglin Lei, Runqi Qiao, Zhuoma GongQue, Xiaoshuai Song, Guanting Dong, Qiuna Tan, Zhe Wei, Peiqing Yang, Ye Tian, Yadong Xue, Xiaofei Wang, Honggang Zhang
2024-12-18

Summary
This paper introduces the Multi-Dimensional Insights (MDI) benchmark, a new tool designed to evaluate how well large multimodal models (LMMs) meet the diverse needs of people in real-life situations.
What's the problem?
As large multimodal models have become more capable, there is a growing need to assess their performance in practical scenarios. However, existing evaluation methods do not fully capture how these models perform across different tasks and for people of various ages. This gap makes it difficult to judge whether these models can truly help users in real-world applications.
What's the solution?
The MDI benchmark includes over 500 images representing six common life scenarios, each paired with two types of questions: simple questions that test basic understanding of the image, and complex questions that evaluate deeper reasoning. Additionally, the benchmark stratifies questions across three age groups (young, middle-aged, and older people) to assess how well the models cater to different perspectives and needs, as sketched in the example below. Together, these dimensions allow a more detailed evaluation of LMMs' capabilities.
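To make this structure concrete, here is a minimal sketch of how such a benchmark could be represented and scored. The `MDIItem` schema, its field names, and the `accuracy_by_group` helper are illustrative assumptions, not the paper's released data format or evaluation code.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MDIItem:
    # Hypothetical schema for one benchmark entry; the actual
    # released dataset's field names may differ.
    image_path: str
    scenario: str    # one of six life scenarios, e.g. "education"
    age_group: str   # "young" | "middle-aged" | "older"
    complexity: str  # "simple" (understanding) or "complex" (reasoning)
    question: str
    answer: str

def accuracy_by_group(items: list[MDIItem], predictions: dict[str, str]) -> dict:
    """Compute accuracy stratified by age group and question complexity.

    `predictions` maps each item's question to the model's answer string.
    """
    correct: dict = defaultdict(int)
    total: dict = defaultdict(int)
    for item in items:
        key = (item.age_group, item.complexity)
        total[key] += 1
        # Exact-match scoring is an assumption for illustration only.
        if predictions.get(item.question, "").strip() == item.answer.strip():
            correct[key] += 1
    return {key: correct[key] / total[key] for key in total}
```

Aggregating scores over (age group, complexity) pairs in this way mirrors the stratified reporting the benchmark is built for, such as the per-age-group accuracy figures cited in the abstract.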
Why it matters?
This research is important because it provides a structured way to evaluate advanced AI models in real-world contexts, ensuring they can effectively serve diverse populations. By identifying areas where these models need improvement, the MDI benchmark can help guide future developments in AI technology, making it more personalized and useful for everyone.
Abstract
The rapidly developing field of large multimodal models (LMMs) has led to the emergence of diverse models with remarkable capabilities. However, existing benchmarks fail to comprehensively, objectively, and accurately evaluate whether LMMs align with the diverse needs of humans in real-world scenarios. To bridge this gap, we propose the Multi-Dimensional Insights (MDI) benchmark, which includes over 500 images covering six common scenarios of human life. Notably, the MDI-Benchmark offers two significant advantages over existing evaluations: (1) Each image is accompanied by two types of questions: simple questions to assess the model's understanding of the image, and complex questions to evaluate the model's ability to analyze and reason beyond basic content. (2) Recognizing that people of different age groups have varying needs and perspectives when faced with the same scenario, our benchmark stratifies questions into three age categories: young people, middle-aged people, and older people. This design allows for a detailed assessment of LMMs' capabilities in meeting the preferences and needs of different age groups. On the MDI-Benchmark, even a strong model like GPT-4o achieves only 79% accuracy on age-related tasks, indicating that existing LMMs still have considerable room for improvement in addressing real-world applications. Looking ahead, we anticipate that the MDI-Benchmark will open new pathways for aligning real-world personalization in LMMs. The MDI-Benchmark data and evaluation code are available at https://mdi-benchmark.github.io/