Slamming: Training a Speech Language Model on One GPU in a Day
Gallil Maimon, Avishai Elmakies, Yossi Adi
2025-02-25
Summary
This paper introduces Slam, a recipe for training speech language models (SLMs) to high quality on a single, widely available GPU in just one day, rather than the weeks of training on large clusters of expensive hardware that these models usually require.
What's the problem?
Training capable speech language models normally demands large amounts of compute and time, which puts this work out of reach for many academic labs and smaller companies. That limits who can participate in developing better spoken language technology.
What's the solution?
The researchers developed Slam, a training recipe for SLMs that combines a careful choice of model initialisation and architecture, a mix of real and synthetic speech data, preference optimisation using synthetic data, and tuning of the remaining training components. Through extensive experiments they identified a combination that works well even with very limited compute.
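To make the overall shape of such a recipe concrete, here is a minimal sketch (not the authors' released code) of one way the pieces could fit together: initialise a decoder-only language model from a pre-trained text LM, repurpose its vocabulary for discrete speech units, and train it as a next-unit predictor on a mix of real and synthetic unit sequences. The model name, vocabulary size, placeholder data, and hyperparameters below are illustrative assumptions, not the paper's exact settings.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM

N_SPEECH_UNITS = 500           # assumed size of the discrete speech-unit vocabulary
TEXT_LM = "Qwen/Qwen2.5-0.5B"  # any small pre-trained text LM can serve as the initialiser

# 1) Initialise from a text LM, then re-purpose the embedding/output layers
#    for speech units instead of text tokens.
model = AutoModelForCausalLM.from_pretrained(TEXT_LM)
model.resize_token_embeddings(N_SPEECH_UNITS)

# 2) Real and synthetic speech, both already tokenised into discrete unit
#    sequences by a speech tokeniser (not shown); random placeholders here.
real_units = torch.randint(0, N_SPEECH_UNITS, (256, 512))
synthetic_units = torch.randint(0, N_SPEECH_UNITS, (256, 512))
loader = DataLoader(TensorDataset(torch.cat([real_units, synthetic_units])),
                    batch_size=8, shuffle=True)

# 3) Standard next-unit prediction training loop (AdamW + cosine schedule).
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=len(loader))

model.train()
for (units,) in loader:
    out = model(input_ids=units, labels=units)  # causal LM loss over speech units
    out.loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

The point of the sketch is the structure, not the numbers: the paper's contribution is the empirical study of which choices at each of these stages matter most under a fixed one-GPU, one-day budget.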
Why it matters?
This matters because it makes speech AI research far more accessible to people without expensive hardware. It could bring more diverse contributors into speech technology development, potentially improving spoken language systems for more languages and accents. The results also challenge assumptions about what is possible with limited resources, suggesting that strong speech AI systems can be built more quickly and cheaply than previously expected.
Abstract
We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and tweaking of all other components. We empirically demonstrate that this training recipe also scales well with more compute, getting results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform predicted compute-optimal performance, giving an optimistic view of SLM feasibility. See code, data, models, and samples at https://pages.cs.huji.ac.il/adiyoss-lab/slamming .
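The abstract's "preference optimisation with synthetic data" step can be illustrated with a generic direct preference optimisation (DPO) objective; the sketch below is a standard PyTorch formulation of that loss, not the paper's exact procedure. Here `policy` and `ref` are assumed to be causal LMs over speech units, and the preferred and dispreferred continuations are assumed to come from synthetically generated speech ranked by some quality signal (the ranking itself is not shown).

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, units: torch.Tensor) -> torch.Tensor:
    """Per-example sum of log-probabilities the model assigns to a unit sequence."""
    logits = model(input_ids=units).logits[:, :-1]   # predict unit t+1 from its prefix
    targets = units[:, 1:]
    logps = torch.log_softmax(logits, dim=-1)
    return logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1).sum(-1)

def dpo_loss(policy, ref, prompt, chosen, rejected, beta: float = 0.1):
    """DPO objective: push the policy toward the preferred speech continuation.

    Both continuations share the same prompt, so the prompt's log-probability
    terms cancel in the margin below.
    """
    chosen_seq = torch.cat([prompt, chosen], dim=1)
    rejected_seq = torch.cat([prompt, rejected], dim=1)
    with torch.no_grad():  # the reference model stays frozen
        ref_chosen = sequence_logprob(ref, chosen_seq)
        ref_rejected = sequence_logprob(ref, rejected_seq)
    pol_chosen = sequence_logprob(policy, chosen_seq)
    pol_rejected = sequence_logprob(policy, rejected_seq)
    margin = (pol_chosen - ref_chosen) - (pol_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()
```

In this framing, the only speech-specific ingredient is that sequences are discrete speech units rather than text tokens; everything else is the usual preference-optimisation machinery.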