Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj, Rui Hou, Nayan Singhal, Hongjiang Lv, Bing Liu
2024-10-04

Summary
This paper introduces a method called layer swapping that improves how large language models (LLMs) can be adapted to tasks in non-English languages, particularly when task-specific training data in those languages is scarce.
What's the problem?
Fine-tuning LLMs for specific tasks in non-English languages is difficult because task-specific data in those languages is often scarce or unavailable. Without it, models struggle to learn the target task well, especially in areas like mathematical reasoning where multilingual training data is particularly limited.
What's the solution?
To solve this problem, the authors fine-tune two separate 'experts' from the same pretrained model: one on math instruction data in English and one on generic instruction data in the target language. They then replace the top and bottom transformer layers of the math expert with the corresponding layers from the language expert. This layer swapping combines the strengths of both experts and improves math performance in the target language: the merged models outperform the individual experts and other merging methods by 10% on the MGSM benchmark across four languages where math instruction data is scarce.
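To make the mechanics concrete, here is a minimal sketch of how such a swap could be implemented with PyTorch and Hugging Face Transformers. The checkpoint paths, the choice of four layers at each end of the stack, and the Llama-style parameter naming are assumptions for illustration, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder paths; both experts are assumed to be fine-tuned from the same
# pretrained base, so their architectures and parameter names match exactly.
MATH_EXPERT = "path/to/math-expert"       # fine-tuned on English math instruction data
LANG_EXPERT = "path/to/language-expert"   # fine-tuned on target-language instruction data
N_SWAP = 4                                # layers replaced at each end of the stack (assumed)

math_model = AutoModelForCausalLM.from_pretrained(MATH_EXPERT, torch_dtype=torch.bfloat16)
lang_model = AutoModelForCausalLM.from_pretrained(LANG_EXPERT, torch_dtype=torch.bfloat16)

math_sd = math_model.state_dict()
lang_sd = lang_model.state_dict()
num_layers = math_model.config.num_hidden_layers

# The bottom N_SWAP and top N_SWAP transformer layers come from the language expert.
swap_indices = set(range(N_SWAP)) | set(range(num_layers - N_SWAP, num_layers))

def layer_index(param_name):
    # Decoder-block parameters are named like "model.layers.<idx>.<...>" in Llama-style models.
    parts = param_name.split(".")
    if len(parts) > 2 and parts[0] == "model" and parts[1] == "layers" and parts[2].isdigit():
        return int(parts[2])
    return None

merged_sd = {}
for name, tensor in math_sd.items():
    idx = layer_index(name)
    merged_sd[name] = lang_sd[name] if idx in swap_indices else tensor

math_model.load_state_dict(merged_sd)
math_model.save_pretrained("layer-swapped-math-model")
```

Because both experts start from the same pretrained model and share an architecture, the transplanted layers are dimensionally compatible, which is what makes this post-hoc recomposition possible without any further training.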
Why it matters?
This research is important because it provides a simple and effective way to enhance the capabilities of LLMs across different languages without needing extensive additional training. By enabling better cross-lingual transfer of knowledge, this method can help create more versatile AI systems that can tackle a wider range of tasks in various languages.
Abstract
Model merging, such as model souping, is the practice of combining different models with the same architecture together without further training. In this work, we present a model merging methodology that addresses the difficulty of fine-tuning Large Language Models (LLMs) for target tasks in non-English languages, where task-specific data is often unavailable. We focus on mathematical reasoning and, without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities. Starting from the same pretrained model, we fine-tune separate "experts" on math instruction data in English and on generic instruction data in the target language. We then replace the top and bottom transformer layers of the math expert directly with layers from the language expert, which consequently enhances math performance in the target language. The resulting merged models outperform the individual experts and other merging methods on the math benchmark, MGSM, by 10% across four major languages where math instruction data is scarce. In addition, this layer swapping is simple, inexpensive, and intuitive, as it is based on an interpretative analysis of the most important parameter changes during the fine-tuning of each expert. The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages, all post hoc.
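The abstract notes that the choice of layers is guided by an analysis of which parameters change most during the fine-tuning of each expert. The sketch below is one assumed way such an analysis could look (not the authors' released code): it compares an expert against the shared pretrained base and reports the per-layer magnitude of the parameter deltas.

```python
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM

def per_layer_delta_norms(base_name, expert_name):
    """Return {layer_index: L2 norm of (expert - base)} for each decoder block."""
    base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.float32)
    expert = AutoModelForCausalLM.from_pretrained(expert_name, torch_dtype=torch.float32)
    base_sd, expert_sd = base.state_dict(), expert.state_dict()

    squared = defaultdict(float)
    for name, base_tensor in base_sd.items():
        parts = name.split(".")
        # Decoder-block parameters are named like "model.layers.<idx>.<...>".
        if len(parts) > 2 and parts[1] == "layers" and parts[2].isdigit():
            delta = expert_sd[name].float() - base_tensor.float()
            squared[int(parts[2])] += float(torch.sum(delta * delta))
    return {idx: sq ** 0.5 for idx, sq in sorted(squared.items())}

# Example with placeholder paths: layers where the language expert moved the most
# (often the very bottom and top of the stack) are candidates for swapping.
# print(per_layer_delta_norms("path/to/pretrained-base", "path/to/language-expert"))
```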