Shiksha: A Technical Domain focused Translation Dataset and Model for Indian Languages
Advait Joglekar, Srinivasan Umesh
2024-12-13
Summary
This paper introduces Shiksha, a new dataset and model designed to improve translation between English and eight Indian languages, particularly in scientific and technical fields.
What's the problem?
Most translation models struggle with scientific and technical language, especially when translating into low-resource Indian languages. This is largely because high-quality parallel datasets focused on these domains are scarce, leaving models with too little in-domain data to learn from and perform well.
What's the solution?
To address this, the authors built a large parallel corpus of more than 2.8 million high-quality English-to-Indic and Indic-to-Indic translation pairs spanning eight Indian languages. They gathered the data by bitext mining human-translated transcripts of NPTEL video lectures, which cover a wide range of technical subjects. They then fine-tuned existing translation models on this corpus, surpassing other publicly available models on in-domain tasks and even improving out-of-domain performance on the Flores+ benchmark.
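The core idea of bitext mining is to embed sentences from both languages into a shared space and keep pairs whose similarity clears a threshold. The paper does not spell out its exact pipeline, so the sketch below is a generic illustration using cosine similarity over toy vectors; in a real system the vectors would come from a multilingual sentence encoder, and the greedy matching shown here is one simple alignment strategy among several.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mine_pairs(src_vecs, tgt_vecs, threshold=0.8):
    """Greedy bitext mining: for each source sentence embedding,
    keep the most similar target embedding if it clears the
    threshold. Each target sentence is used at most once."""
    used = set()
    pairs = []
    for i, s in enumerate(src_vecs):
        best_j, best_sim = None, threshold
        for j, t in enumerate(tgt_vecs):
            if j in used:
                continue
            sim = cosine(s, t)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j, best_sim))
    return pairs

# Toy embeddings standing in for multilingual encoder outputs:
# source sentence 0 should align with target sentence 1, and vice versa.
src = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]]
tgt = [[0.0, 0.9, 0.1], [0.9, 0.1, 0.1]]
print(mine_pairs(src, tgt))
```

The threshold trades recall for precision: raising it yields fewer but cleaner pairs, which matters when the mined corpus feeds directly into fine-tuning.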
Why it matters?
This research is important because it provides a valuable resource for translating complex scientific and technical content into Indian languages. By improving translation capabilities in these areas, Shiksha can help make educational materials more accessible to a wider audience, supporting students and professionals who may struggle with language barriers.
Abstract
Neural Machine Translation (NMT) models are typically trained on datasets with limited exposure to Scientific, Technical and Educational domains. Translation models thus, in general, struggle with tasks that involve scientific understanding or technical jargon. Their performance is found to be even worse for low-resource Indian languages. Finding a translation dataset that tends to these domains in particular poses a difficult challenge. In this paper, we address this by creating a multilingual parallel corpus containing more than 2.8 million rows of English-to-Indic and Indic-to-Indic high-quality translation pairs across 8 Indian languages. We achieve this by bitext mining human-translated transcriptions of NPTEL video lectures. We also finetune and evaluate NMT models using this corpus and surpass all other publicly available models at in-domain tasks. We also demonstrate the potential for generalizing to out-of-domain translation tasks by improving the baseline by over 2 BLEU on average for these Indian languages on the Flores+ benchmark. We are pleased to release our model and dataset via this link: https://huggingface.co/SPRINGLab.
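For context on the reported gains, BLEU scores such as those on Flores+ measure n-gram overlap between a model's output and a reference translation. The sketch below is a deliberately simplified sentence-level BLEU (whitespace tokenization, uniform 1- to 4-gram weights, brevity penalty, no smoothing); published scores use standardized implementations with tokenization and smoothing details this sketch omits.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (uniform weights up to max_n) times a
    brevity penalty. Returns a value in [0, 1]."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, r[g]) for g, c in h.items())
        total = sum(h.values())
        if total == 0 or overlap == 0:
            return 0.0  # no smoothing in this sketch
        log_prec += math.log(overlap / total) / max_n
    # Penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))
```

An exact match scores 1.0, so a "+2 BLEU" improvement refers to the conventional 0-100 scaling of this quantity averaged over a test set.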