Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
Zihao Li, Shaoxiong Ji, Hengyu Luo, Jörg Tiedemann
2025-04-08
Summary
This paper explores how to improve AI language models' ability to work across many languages by carefully mixing different types of training data, such as adding code snippets or pairing languages together.
What's the problem?
Current AI models work much better for common languages like English than for rare ones, and adding new languages often causes problems such as mixing up words between languages.
What's the solution?
The researchers tested 36 different training recipes and found that mixing programming code with language data helps rare languages on understanding tasks, while pairing languages together also boosts understanding but can cause words from different languages to blend during text generation.
Why it matters?
This helps create fairer AI tools that work well for all languages, improving translation apps and chatbots for people who speak less common languages.
Abstract
Large Language Models (LLMs) exhibit significant disparities in performance across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, although the relative effectiveness of monolingual, bilingual, and code-augmented data strategies remains unclear. This study systematically evaluates 36 CPT configurations involving three multilingual base models, across 30+ languages categorized as altruistic, selfish, and stagnant, spanning various resource levels. Our findings reveal three major insights: (1) Bilingual CPT improves multilingual classification but often causes language mixing issues during generation. (2) Including programming code data during CPT consistently enhances multilingual classification accuracy, particularly benefiting low-resource languages, but introduces a trade-off by slightly degrading generation quality. (3) Contrary to prior work, we observe substantial deviations from language classifications according to their impact on cross-lingual transfer: Languages classified as altruistic often negatively affect related languages, selfish languages show conditional and configuration-dependent behavior, and stagnant languages demonstrate surprising adaptability under certain CPT conditions. These nuanced interactions emphasize the complexity of multilingual representation learning, underscoring the importance of systematic studies on generalizable language classification to inform future multilingual CPT strategies.
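To make the data-mixing idea concrete, here is a minimal Python sketch of how a continual-pretraining batch might be assembled from monolingual text, bilingual (translation-paired) text, and programming code under configurable mixing weights. The function names, toy corpora, and mixing ratios are illustrative assumptions for this sketch, not the actual configurations evaluated in the paper.

```python
import random

def make_bilingual_example(src_text: str, tgt_text: str) -> str:
    """Concatenate a translation pair into a single training sequence."""
    return f"{src_text}\n{tgt_text}"

def sample_cpt_batch(monolingual, bilingual_pairs, code, weights, batch_size, seed=0):
    """Sample a CPT training batch from three data sources.

    weights: dict with keys 'mono', 'bi', 'code' that sum to 1.0.
    Each draw picks a source according to the weights, then a random
    example from that source (a simplification of corpus-level sampling).
    """
    rng = random.Random(seed)
    sources = ["mono", "bi", "code"]
    probs = [weights["mono"], weights["bi"], weights["code"]]
    batch = []
    for _ in range(batch_size):
        source = rng.choices(sources, weights=probs, k=1)[0]
        if source == "mono":
            batch.append(rng.choice(monolingual))
        elif source == "bi":
            src, tgt = rng.choice(bilingual_pairs)
            batch.append(make_bilingual_example(src, tgt))
        else:
            batch.append(rng.choice(code))
    return batch

# Toy corpora standing in for real multilingual and code data.
monolingual = ["Tämä on suomenkielinen lause.", "Ito ay isang pangungusap sa Tagalog."]
bilingual_pairs = [("This is an English sentence.", "Tämä on englanninkielinen lause.")]
code = ["def add(a, b):\n    return a + b"]

# One hypothetical configuration: 60% monolingual, 25% bilingual, 15% code.
batch = sample_cpt_batch(monolingual, bilingual_pairs, code,
                         weights={"mono": 0.6, "bi": 0.25, "code": 0.15},
                         batch_size=4)
print(batch)
```

Varying the weights (and which languages appear in the bilingual pairs) is the kind of configuration space the study sweeps over when comparing monolingual, bilingual, and code-augmented CPT strategies.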