UnifiedCrawl: Aggregated Common Crawl for Affordable Adaptation of LLMs on Low-Resource Languages
Bethel Melesse Tessema, Akhil Kedia, Tae-Sun Chung
2024-11-22

Summary
This paper discusses UnifiedCrawl, a method for gathering text data from the Common Crawl corpus to improve large language models (LLMs) for low-resource languages, which often lack sufficient training data.
What's the problem?
Many low-resource languages do not have enough training data available for LLMs, which leads to poor performance when these models try to understand or generate text in those languages. This makes it challenging for speakers of these languages to benefit from advanced AI technologies.
What's the solution?
UnifiedCrawl addresses this issue by efficiently collecting and filtering text data from the entire Common Crawl corpus, resulting in much larger datasets for low-resource languages than previously available. The authors then fine-tune multilingual LLMs using a method called QLoRA, which allows them to adapt these models effectively while using less memory. This approach significantly improves the models' performance on low-resource languages, as shown by better scores in language modeling tasks.
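To make the data-collection step concrete, here is a minimal sketch of how the Common Crawl columnar URL index can be filtered by language. It is an illustrative example, not the authors' released pipeline: it assumes DuckDB with its httpfs extension, anonymous access to the public commoncrawl S3 bucket, and a placeholder crawl snapshot and language code (Amharic, "amh").

import duckdb

# Query the public Common Crawl columnar index (parquet files on S3) and keep
# only captures whose detected content language matches the target language.
con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")
con.execute("SET s3_region='us-east-1';")

query = """
SELECT url, warc_filename, warc_record_offset, warc_record_length
FROM read_parquet(
  's3://commoncrawl/cc-index/table/cc-main/warc/crawl=CC-MAIN-2023-50/subset=warc/*.parquet')
WHERE content_languages = 'amh'   -- pages detected as (only) Amharic
LIMIT 1000
"""
rows = con.execute(query).fetchall()
print(f"Found {len(rows)} candidate pages")

Each returned row points to the WARC file name, byte offset, and record length of a matching page, so the raw text can later be fetched with small ranged downloads instead of scanning whole crawl archives.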
Why it matters?
This research is important because it provides a practical and affordable way to enhance AI capabilities for low-resource languages, helping to bridge the gap between well-supported and less-supported languages. By making these improvements accessible with consumer hardware, UnifiedCrawl can empower more people to use AI tools in their native languages, promoting inclusivity and diversity in technology.
Abstract
Large language models (LLMs) under-perform on low-resource languages due to limited training data. We present a method to efficiently collect text data for low-resource languages from the entire Common Crawl corpus. Our approach, UnifiedCrawl, filters and extracts Common Crawl using minimal compute resources, yielding monolingual datasets much larger than previously available sources. We demonstrate that leveraging this data to fine-tune multilingual LLMs via efficient adapter methods (QLoRA) significantly boosts performance on the low-resource language, while minimizing VRAM usage. Our experiments show large improvements in language modeling perplexity and an increase in few-shot prompting scores. Our work and released source code provide an affordable approach to improve LLMs for low-resource languages using consumer hardware. Our source code is available at https://github.com/bethelmelesse/unifiedcrawl.
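As a rough illustration of the adapter-based fine-tuning described above, the following sketch loads a multilingual LLM in 4-bit precision and attaches LoRA adapters. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries; the model name, target modules, and hyperparameters are placeholders rather than the paper's exact configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "facebook/xglm-564M"  # example small multilingual LLM; swap for a larger one

# Load the base model in 4-bit (NF4) so it fits in consumer-GPU VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections (names for XGLM-style models).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

The quantized base weights stay frozen, so only the adapter matrices and their optimizer states occupy training memory; the wrapped model can then be trained with a standard causal language modeling loop on the collected monolingual text.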