MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens

Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Kumar Guha, Matt Jordan, Sheng Shen, Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, Ludwig Schmidt

2024-06-18

Summary

This paper introduces MINT-1T, a massive new open-source dataset for training large multimodal models that understand both images and text. It contains one trillion text tokens and three billion images, ten times more than previous open-source datasets.

What's the problem?

Training AI models that handle both images and text requires large datasets that combine the two, but large-scale, diverse open-source datasets of this kind are scarce. This scarcity limits the development of advanced AI systems that can understand and generate content grounded in multiple types of information.

What's the solution?

To address this, the authors built MINT-1T, the largest open-source multimodal interleaved dataset to date. It draws on a wide variety of sources, including HTML documents, PDFs, and research papers from ArXiv, the latter two being sources that previous datasets had largely left untapped. The paper also documents the engineering challenges of curating data at this scale and emphasizes the value of sharing both the dataset and its curation process with the research community.
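The "interleaved" format described here means each document is a sequence of text and image segments in their original order, rather than isolated image-caption pairs. A minimal sketch of what such a record might look like is below; the field names and structure are illustrative assumptions, not MINT-1T's actual schema:

```python
# Sketch of an interleaved multimodal document record.
# NOTE: field names ("segments", "type", "content") are illustrative
# assumptions, not the real MINT-1T schema.

def count_tokens_and_images(doc):
    """Tally text tokens (whitespace-split, for illustration) and images."""
    n_tokens = 0
    n_images = 0
    for segment in doc["segments"]:
        if segment["type"] == "text":
            n_tokens += len(segment["content"].split())
        elif segment["type"] == "image":
            n_images += 1
    return n_tokens, n_images

doc = {
    "source": "html",  # e.g. "html", "pdf", or "arxiv"
    "segments": [
        {"type": "text", "content": "Figure 1 shows the model architecture."},
        {"type": "image", "content": "https://example.com/fig1.png"},
        {"type": "text", "content": "The encoder processes patches."},
    ],
}

print(count_tokens_and_images(doc))  # (10, 1)
```

Preserving text and images in document order like this lets a multimodal model learn how the two modalities relate within real web pages and papers, which caption-only datasets cannot capture.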

Why it matters?

This research matters because it greatly expands the resources available for training AI models that process both text and images. By providing a larger and more diverse dataset, MINT-1T lets the community train multimodal models that rival those built on the previous leading dataset, OBELICS, which can lead to better applications in computer vision, natural language processing, and beyond. This advancement can improve how AI systems interact with mixed text-and-image information in real-world scenarios.

Abstract

Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, diverse open-source multimodal interleaved datasets. In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and three billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS. Our data and code will be released at https://github.com/mlfoundations/MINT-1T.