Crosslingual Reasoning through Test-Time Scaling
Zheng-Xin Yong, M. Farid Adilazuarda, Jonibek Mansurov, Ruochen Zhang, Niklas Muennighoff, Carsten Eickhoff, Genta Indra Winata, Julia Kreutzer, Stephen H. Bach, Alham Fikri Aji
2025-05-09
Summary
This paper studies how AI models that were trained mostly in English can solve math problems in many languages. Instead of retraining the models or making them bigger, the idea is test-time scaling: giving the models more compute when they answer, so they can think through problems with longer step-by-step reasoning.
What's the problem?
The problem is that most powerful AI models are trained mainly in English, so they struggle when asked to reason or solve problems in other languages, especially in languages with less data or in topics they haven't seen before.
What's the solution?
The researchers scaled up how much reasoning the models do at answer time, letting them generate longer chains of thought, and also controlled which language that step-by-step reasoning is written in. This helped the models solve math problems better across multiple languages. However, challenges remain for languages with little training data and for problems very different from what the models have seen.
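To make "letting the model think longer" concrete, here is a minimal toy sketch of one common test-time scaling trick (budget forcing, as popularized by the s1 line of work this paper builds on): if the model tries to end its reasoning before a token budget is met, its stop marker is replaced with a continuation word so it keeps going. The `generate` callable, the `</think>` marker, and the word "Wait" are illustrative assumptions, not this paper's exact implementation.

```python
# Toy sketch of budget forcing: extend chain-of-thought until a minimum
# "thinking token" budget is reached. `generate` is a hypothetical
# stand-in for a real language model's next-chunk generation.

def scale_thinking(generate, prompt, min_thinking_tokens=256):
    """Keep the model reasoning until the token budget is met.

    If the model emits its end-of-thinking marker too early, swap it
    for a continuation word ("Wait") so generation continues.
    """
    thoughts = []
    n_tokens = 0
    while n_tokens < min_thinking_tokens:
        chunk = generate(prompt + " " + " ".join(thoughts))
        if chunk.strip() == "</think>":   # model tried to stop early
            chunk = "Wait"                # nudge it to keep reasoning
        thoughts.append(chunk)
        n_tokens += len(chunk.split())    # crude whitespace token count
    return " ".join(thoughts)


# Toy stand-in model: tries to stop after every short reasoning step.
steps = iter(["Step 1: convert the units.", "</think>",
              "Step 2: solve the equation.", "</think>",
              "Step 3: check the answer."] * 100)

def toy_generate(_prompt):
    return next(steps)

trace = scale_thinking(toy_generate, "Solve: 2 + 2", min_thinking_tokens=12)
print(trace)
```

In a real setup the same loop would wrap an actual model's decoding, and the paper additionally steers which language the reasoning trace is written in.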
Why it matters?
This matters because as AI becomes more global, it needs to work well in many languages, not just English. Improving crosslingual reasoning helps make AI tools more useful and fair for people all over the world, even in languages that aren't widely spoken.
Abstract
Scaling English-centric reasoning models and controlling chain-of-thought language improves multilingual mathematical reasoning but has limitations in low-resource languages and out-of-domain contexts.