Personalized Multimodal Large Language Models: A Survey
Junda Wu, Hanjia Lyu, Yu Xia, Zhehao Zhang, Joe Barrow, Ishita Kumar, Mehrnoosh Mirtaheri, Hongjie Chen, Ryan A. Rossi, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Jiuxiang Gu, Nesreen K. Ahmed, Yu Wang, Xiang Chen, Hanieh Deilamsalehy, Namyong Park, Sungchul Kim, Huanrui Yang, Subrata Mitra, Zhengmian Hu
2024-12-06

Summary
This paper surveys personalized multimodal large language models (MLLMs): advanced AI systems that can understand and generate information across different types of data, such as text, images, and audio, tailored to individual users.
What's the problem?
As MLLMs become more widely used, there is a growing need to personalize them for individual users. However, existing personalization methods often lack a clear organizing structure and can be inefficient, making it hard to meet users' specific needs across different applications.
What's the solution?
The authors provide a comprehensive survey of techniques for personalizing MLLMs. They organize these techniques into an intuitive taxonomy and discuss how they can be combined or adapted for better performance. The paper also summarizes the personalization tasks that have been studied, the evaluation metrics used to measure success, and the datasets available for benchmarking personalized MLLMs.
Why it matters?
This research matters because it helps researchers and developers understand how to build more effective personalized AI systems. By improving how MLLMs are tailored to individual users, their performance in real-world applications can be enhanced, making the technology more useful and accessible for everyone.
Abstract
Multimodal Large Language Models (MLLMs) have become increasingly important due to their state-of-the-art performance and ability to integrate multiple data modalities, such as text, images, and audio, to perform complex tasks with high accuracy. This paper presents a comprehensive survey on personalized multimodal large language models, focusing on their architecture, training methods, and applications. We propose an intuitive taxonomy for categorizing the techniques used to personalize MLLMs to individual users, and discuss the techniques accordingly. Furthermore, we discuss how such techniques can be combined or adapted when appropriate, highlighting their advantages and underlying rationale. We also provide a succinct summary of personalization tasks investigated in existing research, along with the evaluation metrics commonly used. Additionally, we summarize the datasets that are useful for benchmarking personalized MLLMs. Finally, we outline critical open challenges. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and advance the development of personalized multimodal large language models.