Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages
Xabier de Zuazo, Eva Navas, Ibon Saratxaga, Inma Hernáez Rioja
2025-04-04
Summary
This paper is about making speech recognition systems more accurate for low-resource languages, that is, languages for which little training data is available.
What's the problem?
Speech recognition systems like Whisper perform well on widely spoken languages, but they often miss the linguistic distinctions of languages with limited training data.
What's the solution?
The researchers fine-tuned Whisper and combined it with external language models, from traditional statistical models to large language models, so the system better captures the nuances of each language. Word error rate improved by up to 51% on in-distribution datasets and up to 34% on out-of-distribution sentences with statistical language models, while large language models gave smaller but more consistent gains. A sketch of the rescoring idea follows below.
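A common way to combine an ASR model with an external language model is to rescore candidate transcriptions with a weighted language-model score (shallow fusion). The following is a minimal sketch of that idea, assuming n-best hypotheses with their Whisper log-probabilities are already available and a KenLM n-gram model file exists; the function name, weights, and file paths are illustrative, not the paper's exact setup.

```python
# Minimal shallow-fusion rescoring sketch (illustrative, not the paper's exact method).
import kenlm  # pip install kenlm

def rescore(hypotheses, lm_path, alpha=0.5, beta=1.0):
    """Pick the hypothesis maximizing:
        whisper_logprob + alpha * lm_logprob + beta * word_count

    hypotheses: list of (text, whisper_logprob) pairs, e.g. from beam search.
    alpha: language-model weight; beta: word-insertion bonus (hypothetical defaults).
    Note: KenLM returns log10 probabilities while Whisper uses natural logs;
    the base mismatch is absorbed by tuning alpha.
    """
    lm = kenlm.Model(lm_path)  # e.g. a 5-gram ARPA or binary model
    def fused(hyp):
        text, am_logprob = hyp
        lm_logprob = lm.score(text, bos=True, eos=True)
        return am_logprob + alpha * lm_logprob + beta * len(text.split())
    return max(hypotheses, key=fused)

# Usage with made-up Basque candidates and a hypothetical model file:
best_text, _ = rescore([("kaixo mundua", -3.2), ("kaixo munduak", -3.0)], "eu_5gram.bin")
print(best_text)
```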
Why it matters?
This work matters because it makes speech recognition technology more inclusive and effective for a wider range of languages.
Abstract
Automatic speech recognition systems have undoubtedly advanced with the integration of multilingual and multitask models such as Whisper, which have shown a promising ability to understand and process speech across a wide range of languages. Despite their robustness, these models often fall short in handling the linguistic distinctions of minority languages. This study addresses this gap by integrating traditional and novel language models with fine-tuned Whisper models to improve their performance in less commonly studied languages. Through rigorous fine-tuning and evaluation across multiple datasets, we demonstrate substantial improvements in word error rate, particularly in low-resource scenarios. Our approach not only takes advantage of the extensive data Whisper was pre-trained on, but also complements its linguistic adaptability by incorporating language models. We obtained improvements of up to 51% for in-distribution datasets and up to 34% for out-of-distribution sentences using statistical language models, while large language models provided moderate but consistently robust improvements across diverse linguistic contexts. The findings reveal that, while the integration reliably benefits all model sizes, the extent of improvement varies, highlighting the importance of optimized language model parameters. Finally, we emphasize the importance of selecting appropriate evaluation parameters when reporting results with transformer-based ASR models. In summary, this research paves the way for more inclusive ASR technologies that perform better across languages by enriching their linguistic knowledge. For further implementation details of this study, the technical documentation and source code are available at http://www.github.com/hitz-zentroa/whisper-lm.
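Since the abstract stresses that the gains depend on optimized language-model parameters, one simple way to set those parameters is a grid search of the fusion weights on a development set. The sketch below assumes the rescore() function from the earlier example and measures word error rate with the jiwer package; the weight grid and variable names are illustrative assumptions, not the paper's actual tuning procedure.

```python
# Hedged sketch: tune fusion weights (alpha, beta) by minimizing dev-set WER.
import itertools
from jiwer import wer  # pip install jiwer

def tune_weights(dev_nbest, dev_refs, lm_path):
    """dev_nbest: list of n-best lists, one per utterance, each a list of
    (text, whisper_logprob) pairs; dev_refs: reference transcripts.
    Returns ((alpha, beta), wer) for the best grid point; grid is illustrative.
    """
    best = (None, float("inf"))
    for alpha, beta in itertools.product([0.1, 0.3, 0.5, 0.7], [0.0, 0.5, 1.0]):
        hyps = [rescore(nbest, lm_path, alpha, beta)[0] for nbest in dev_nbest]
        score = wer(dev_refs, hyps)
        if score < best[1]:
            best = ((alpha, beta), score)
    return best
```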