Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
Longxu Dou, Qian Liu, Fan Zhou, Changyu Chen, Zili Wang, Ziqi Jin, Zichen Liu, Tongyao Zhu, Cunxiao Du, Penghui Yang, Haonan Wang, Jiaheng Liu, Yongchi Zhao, Xiachong Feng, Xin Mao, Man Tsung Yeung, Kunat Pipatanakul, Fajri Koto, Min Si Thu, Hynek Kydlíček, Zeyi Liu, Qunshu Lin
2025-02-18
Summary
This paper introduces Sailor2, a new family of AI language models designed specifically for South-East Asian languages. It's like creating a super-smart digital assistant that can understand and communicate in many different languages from that region.
What's the problem?
Many AI language models are really good at understanding and generating text in English and a few other major languages, but they often struggle with less common languages, especially those from South-East Asia. This means that people who speak these languages might not be able to use AI tools as effectively as English speakers can.
What's the solution?
The researchers created Sailor2, which comes in three sizes (1B, 8B, and 20B parameters) to fit different needs. They took an existing model called Qwen2.5 and trained it further on a huge amount of text (500 billion tokens) in South-East Asian languages, mixing in some of the original English and Chinese data so the model doesn't forget what it already knows. They also wrote a detailed guide on how to create models like this, covering important steps like choosing the right data, training the model, and testing how well it works.
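To make that recipe concrete, here is a minimal sketch of continual pre-training in Python with Hugging Face transformers and datasets: start from a Qwen2.5 checkpoint and keep training on a mixture of SEA-language text plus "replay" data from the original distribution. The corpus files, mixing ratio, and hyperparameters below are illustrative assumptions, not the exact Sailor2 configuration.

```python
# A minimal continual pre-training sketch: resume training a Qwen2.5
# checkpoint on new SEA-language data mixed with replay data.
# Dataset files and the 80/20 mix are placeholders, loosely mirroring
# the 400B SEA / 100B replay token split reported in the abstract.
from datasets import load_dataset, interleave_datasets
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "Qwen/Qwen2.5-0.5B"  # small base model for a quick demonstration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpora: SEA-language text plus English/Chinese replay text.
sea = load_dataset("text", data_files="sea_corpus.txt", split="train")
replay = load_dataset("text", data_files="replay_corpus.txt", split="train")
mixture = interleave_datasets([sea, replay], probabilities=[0.8, 0.2], seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = mixture.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sailor2-cpt-demo",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=1e-5,  # conservative, to avoid overwriting base knowledge
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The replay data is the key design choice here: continuing to show the model a slice of its original English and Chinese training distribution is what lets it gain SEA-language ability without losing the proficiency it started with.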
Why it matters?
This matters because it could help make AI technology more accessible and useful for millions of people in South-East Asia who speak languages that are often overlooked. By creating a model that's really good at these languages and sharing how they did it, the researchers are helping to make AI more inclusive and could inspire others to create similar models for other underserved languages around the world.
Abstract
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models in an efficient manner, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 model (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages.
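Since the models are released under Apache 2.0, trying one out is straightforward. A quick usage sketch follows, assuming the released checkpoints use "sail/Sailor2-*" model ids on Hugging Face (verify the exact id on the hub before running):

```python
# Generation with a released Sailor2 checkpoint (model id assumed, see above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Sailor2-1B"  # 1B variant; 8B and 20B follow the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Prompt in Indonesian, one of the supported SEA languages.
prompt = "Apa ibu kota Indonesia?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```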