Investigating Decoder-only Large Language Models for Speech-to-text Translation
Chao-Wei Huang, Hui Lu, Hongyu Gong, Hirofumi Inaguma, Ilia Kulikov, Ruslan Mavlyutov, Sravya Popuri
2024-07-04

Summary
This paper investigates how decoder-only large language models (LLMs) can be applied to speech-to-text translation (S2TT). The authors propose an architecture in which the LLM directly consumes encoded speech representations and generates the text translation, and they study how best to fine-tune the model and formulate the task.
What's the problem?
Decoder-only LLMs have shown strong reasoning, generalization, and fluency on text, which makes them a promising foundation for speech-related tasks. However, speech-to-text translation requires the model to understand spoken input in one language and produce written text in another, and it is not obvious how to connect a text-only, decoder-only LLM to speech representations in a way that is both effective and efficient to train.
What's the solution?
The authors propose a decoder-only architecture in which encoded speech representations are fed directly into the LLM, which then generates the text translation. They also investigate different parameter-efficient fine-tuning techniques and task formulations for adapting the LLM to this setting. Trained without proprietary data, the resulting model achieves state-of-the-art performance on the CoVoST 2 and FLEURS benchmarks among comparable systems, and the authors run analyses to validate their design choices. A minimal sketch of the overall idea follows.
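The PyTorch sketch below illustrates the general approach described above, assuming a common integration pattern: a pretrained speech encoder produces features, a small adapter projects them into the LLM's embedding space, and the decoder-only LLM consumes them together with the text prompt. All module names, dimensions, and the inputs_embeds interface are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn

    class SpeechToTextLLM(nn.Module):
        """Sketch of a decoder-only S2TT model: speech embeddings are prepended to text embeddings."""

        def __init__(self, speech_encoder, llm, speech_dim=1024, llm_dim=4096):
            super().__init__()
            self.speech_encoder = speech_encoder           # pretrained audio encoder (assumed)
            self.adapter = nn.Linear(speech_dim, llm_dim)  # projects speech features into the LLM embedding space
            self.llm = llm                                 # decoder-only language model

        def forward(self, speech_input, prompt_embeds, target_embeds):
            # Encode the speech and map it into the LLM's embedding space.
            speech_embeds = self.adapter(self.speech_encoder(speech_input))
            # The LLM sees [speech ; prompt ; target] as one sequence and is trained
            # with next-token prediction on the target translation.
            inputs = torch.cat([speech_embeds, prompt_embeds, target_embeds], dim=1)
            return self.llm(inputs_embeds=inputs)

In this pattern, only the adapter (and possibly a small fraction of the LLM's parameters) needs to be trained, which is what makes parameter-efficient fine-tuning attractive here.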
Why it matters?
This research is important because it shows that decoder-only LLMs, which already excel at text tasks, can be extended to speech-to-text translation and reach state-of-the-art results without relying on proprietary data. The accompanying analyses and design insights can guide future work on integrating LLMs into speech-related tasks more broadly.
Abstract
Large language models (LLMs), known for their exceptional reasoning capabilities, generalizability, and fluency across diverse domains, present a promising avenue for enhancing speech-related tasks. In this paper, we focus on integrating decoder-only LLMs into the task of speech-to-text translation (S2TT). We propose a decoder-only architecture that enables the LLM to directly consume the encoded speech representation and generate the text translation. Additionally, we investigate the effects of different parameter-efficient fine-tuning techniques and task formulations. Our model achieves state-of-the-art performance on CoVoST 2 and FLEURS among models trained without proprietary data. We also conduct analyses to validate the design choices of our proposed model and offer insights into the integration of LLMs into S2TT.
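To make the "parameter-efficient fine-tuning" part of the abstract concrete, the snippet below applies LoRA adapters, one common PEFT technique, to a decoder-only LLM using the Hugging Face peft library. The checkpoint name and hyperparameters are placeholders for illustration and are not necessarily the configuration the paper uses.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Placeholder checkpoint; the paper's base LLM and hyperparameters may differ.
    base_llm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    lora_config = LoraConfig(
        r=16,                                 # rank of the low-rank adapter matrices
        lora_alpha=32,                        # scaling factor for the adapter updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    peft_llm = get_peft_model(base_llm, lora_config)
    peft_llm.print_trainable_parameters()     # only the small adapter weights are trainable

With adapters like these, the frozen LLM keeps its pretrained text abilities while a small number of new parameters learn to handle the translation task.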