
Maya: An Instruction Finetuned Multilingual Multimodal Model

Nahid Alam, Karthik Reddy Kanjula, Surya Guthikonda, Timothy Chung, Bala Krishna S Vegesna, Abhipsha Das, Anthony Susevski, Ryan Sze-Yin Chan, S M Iftekhar Uddin, Shayekh Bin Islam, Roshan Santhosh, Snegha A, Drishti Sharma, Chen Liu, Isha Chaturvedi, Genta Indra Winata, Ashvanth. S, Snehanshu Mukherjee, Alham Fikri Aji

2024-12-10


Summary

This paper introduces Maya, a new open-source vision-language model that can understand images and text and respond in multiple languages, with a particular focus on low-resource languages and cultural contexts.

What's the problem?

Most existing vision-language models perform well in widely spoken languages like English but struggle with less common languages and cultural nuances. This is mainly because there is little high-quality training data that covers diverse languages and is free of harmful content. As a result, these models often fail to accurately interpret or respond appropriately in low-resource languages.

What's the solution?

To tackle these issues, the authors developed Maya with three key contributions: first, they created a multilingual image-text pretraining dataset covering eight languages; second, they analyzed the existing LLaVA pretraining data for harmful content and produced a toxicity-free version; and third, they trained a model that understands images and generates text in these languages, improving cultural sensitivity and comprehension in vision-language tasks.
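To make the second contribution concrete, here is a minimal sketch of what a caption-level toxicity pass over an image-text dataset could look like. The paper only states that the LLaVA data was analyzed for toxicity and a cleaned version was produced; the choice of the open-source Detoxify multilingual classifier, the record layout with a `caption` field, and the 0.5 score threshold below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of caption-level toxicity filtering for a multilingual image-text dataset.
# Assumptions (not from the paper): the Detoxify "multilingual" classifier, records shaped
# as {"image": ..., "caption": ...}, and a single 0.5 score threshold across all categories.
from detoxify import Detoxify

# Load the multilingual toxicity model once; predict() returns a dict of category scores.
classifier = Detoxify("multilingual")

def filter_toxic(records, threshold=0.5):
    """Keep only records whose caption scores below `threshold` in every toxicity category."""
    clean = []
    for record in records:
        scores = classifier.predict(record["caption"])  # e.g. {"toxicity": 0.01, "insult": 0.0, ...}
        if max(scores.values()) < threshold:
            clean.append(record)
    return clean

if __name__ == "__main__":
    sample = [
        {"image": "000001.jpg", "caption": "Un chien joue dans le parc."},
        {"image": "000002.jpg", "caption": "A harmless caption about a sunset over the sea."},
    ]
    print(f"{len(filter_toxic(sample))} of {len(sample)} records kept")
```

In practice, such a filter could also apply per-category thresholds or route borderline captions to human review rather than using one global cutoff.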

Why it matters?

This research is important because it enhances the capabilities of AI systems to work across different languages and cultures. By improving how models handle low-resource languages, Maya can help make technology more accessible globally, allowing for better communication and understanding in diverse contexts.

Abstract

The rapid development of large Vision-Language Models (VLMs) has led to impressive results on academic benchmarks, primarily in widely spoken languages. However, significant gaps remain in the ability of current VLMs to handle low-resource languages and varied cultural contexts, largely due to a lack of high-quality, diverse, and safety-vetted data. Consequently, these models often struggle to understand low-resource languages and cultural nuances in a manner free from toxicity. To address these limitations, we introduce Maya, an open-source Multimodal Multilingual model. Our contributions are threefold: 1) a multilingual image-text pretraining dataset in eight languages, based on the LLaVA pretraining dataset; 2) a thorough analysis of toxicity within the LLaVA dataset, followed by the creation of a novel toxicity-free version across eight languages; and 3) a multilingual image-text model supporting these languages, enhancing cultural and linguistic comprehension in vision-language tasks. Code available at https://github.com/nahidalam/maya.