The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective

Zhen Qin, Daoyuan Chen, Wenhao Zhang, Liuyi Yao, Yilun Huang, Bolin Ding, Yaliang Li, Shuiguang Deng

2024-07-13

Summary

This paper explores the relationship between data and multi-modal large language models (MLLMs), highlighting how improvements in data quality and quantity enhance the capabilities of these models, and vice versa. It emphasizes that the development of models and data is interconnected.

What's the problem?

As large language models (LLMs) evolve, there is a growing need for high-quality data to train them effectively. However, the relationship between the data used and the resulting model performance is often poorly understood. Many researchers treat model development and data collection as separate processes, which can limit advancements in both areas.

What's the solution?

The authors propose a co-development approach, suggesting that improvements in MLLMs can lead to better data collection methods, while high-quality data can enhance model performance. They review existing research to identify how specific types of data can be used at different stages of model development to improve capabilities. Additionally, they outline how MLLMs can help in creating better datasets by leveraging their strengths in understanding and generating multimodal information.

Why it matters?

This research is important because it provides insights into how to effectively advance both AI models and the data they rely on. By understanding the synergy between data and MLLMs, researchers can create more powerful and versatile AI systems that can handle a broader range of tasks, ultimately leading to better applications in fields like healthcare, education, and entertainment.

Abstract

The rapid development of large language models (LLMs) has been witnessed in recent years. Based on powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from text to a broader spectrum of domains, attracting widespread attention due to the broader range of application scenarios. As LLMs and MLLMs rely on vast amounts of model parameters and data to achieve emergent capabilities, the importance of data is receiving increasingly widespread attention and recognition. Tracing and analyzing recent data-oriented works for MLLMs, we find that the development of models and data is not two separate paths but rather interconnected. On the one hand, vaster and higher-quality data contribute to better performance of MLLMs; on the other hand, MLLMs can facilitate the development of data. The co-development of multi-modal data and MLLMs requires a clear view of 1) at which development stage of MLLMs specific data-centric approaches can be employed to enhance which capabilities, and 2) by utilizing which capabilities and acting in which roles models can contribute to multi-modal data. To promote data-model co-development for the MLLM community, we systematically review existing works related to MLLMs from the data-model co-development perspective. A regularly maintained project associated with this survey is accessible at https://github.com/modelscope/data-juicer/blob/main/docs/awesome_llm_data.md.