Scaling Analysis of Interleaved Speech-Text Language Models

Gallil Maimon, Michael Hassid, Amit Roth, Yossi Adi

2025-04-04

Summary

This paper asks whether AI models that understand both speech and text can be trained efficiently by building on existing text-based models and interleaving speech and text data during training. It challenges the idea that speech-focused models need huge amounts of computing power and data to work well.

What's the problem?

Earlier scaling studies suggest that training AI models to handle speech requires far more computing power and data than training text-based models, leading many to think high-quality speech models are out of reach. But those estimates don't account for models that start from an existing text model and mix speech and text together during training.

What's the solution?

The researchers trained several dozen models that interleave speech and text data, each starting from an existing text model, and analyzed how performance scales with compute. They found that these interleaved models use computing power more efficiently and need less data than speech-only models. They also discovered that spending the compute budget on making the model bigger, rather than on training it on more data, works better for speech-text models.
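
To make the idea of interleaving concrete, here is a minimal sketch of how speech and text might be merged into a single training sequence. It assumes speech has already been converted into discrete "speech units" and uses made-up marker tokens (<speech>, <text>); the paper's actual tokenisation and data pipeline may differ.

```python
# Minimal sketch of speech-text interleaving for a decoder-only language model.
# Assumptions (not from the paper): speech is pre-tokenised into discrete units,
# text into subwords, and hypothetical boundary tokens mark each modality.

from dataclasses import dataclass
from typing import List

SPEECH_START = "<speech>"
TEXT_START = "<text>"

@dataclass
class Segment:
    modality: str      # "speech" or "text"
    tokens: List[str]  # discrete speech units or text subwords

def interleave(segments: List[Segment]) -> List[str]:
    """Flatten aligned speech/text segments into one sequence, prefixing each
    span with its modality marker so a single LM can model both streams."""
    sequence: List[str] = []
    for seg in segments:
        marker = SPEECH_START if seg.modality == "speech" else TEXT_START
        sequence.append(marker)
        sequence.extend(seg.tokens)
    return sequence

if __name__ == "__main__":
    utterance = [
        Segment("text", ["the", "cat", "sat"]),
        Segment("speech", ["u_102", "u_87", "u_87", "u_31"]),
        Segment("text", ["on", "the", "mat"]),
    ]
    print(interleave(utterance))
```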

Why it matters?

This matters because it shows we can build strong speech models faster and more cheaply by reusing text models and mixing in speech data. That helps create AI that understands both speech and writing, which is useful for things like voice assistants and automatic transcription tools.

Abstract

Existing Speech Language Model (SLM) scaling analyses paint a bleak picture: they predict that SLMs require much more compute and data than text models, leading some to question the feasibility of training high-quality SLMs. However, modern SLMs are often initialised from pre-trained TextLMs using speech-text interleaving to allow knowledge transfer. This raises the question: do interleaved SLMs scale more efficiently than textless SLMs? In this paper we answer with a resounding yes! We conduct a scaling analysis of interleaved SLMs by training several dozen of them and analysing the scaling trends. We find that under this setup SLMs scale more efficiently with compute. Additionally, our results indicate that the scaling dynamics differ significantly from those of textless SLMs, suggesting one should allocate notably more of the compute budget to increasing model size over training tokens. We also study the role of synthetic data and TextLM model families in unlocking this potential. Results suggest that our scaled-up model achieves performance comparable to leading models on speech semantic metrics while using less compute and data than other approaches. We open-source models, samples, and data: https://pages.cs.huji.ac.il/adiyoss-lab/sims.
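
For readers who want the shape of the underlying analysis: scaling studies of this kind typically fit a parametric loss in model size and training tokens, then split a fixed compute budget between the two. The sketch below uses the standard Chinchilla-style form purely as an illustration; the paper's exact parameterisation and fitted constants are not given here.

```latex
% Illustrative Chinchilla-style scaling fit (assumed form, not necessarily the paper's):
% loss as a function of parameters N and training tokens D, with compute C \approx 6ND.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad C \approx 6ND.
% Minimising L under the compute budget gives compute-optimal allocations
% N^{*} \propto C^{a}, \quad D^{*} \propto C^{b}, \quad a + b = 1.
% The finding that interleaved SLMs should spend more of the budget on model size
% corresponds, in this notation, to a larger exponent a than for textless SLMs.
```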