Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
Zhifei Xie, Changqiao Wu
2024-09-03

Summary
This paper introduces Mini-Omni, a model that lets a language model both understand speech input and generate speech output in real time, making conversations with computers feel more natural.
What's the problem?
Current language models can hold conversations, but they typically rely on separate text-to-speech (TTS) systems to produce audio, which adds latency and makes interactions feel less fluid. This lag can be frustrating for users who want a seamless experience when talking to AI.
What's the solution?
Mini-Omni is an end-to-end model that handles both audio understanding and speech generation within a single network, without relying on external recognition or synthesis systems. It uses a text-instructed speech generation method together with batch-parallel inference strategies to improve performance; a sketch of the parallel text-and-audio decoding idea is given below. The accompanying training recipe, called 'Any Model Can Talk', preserves the original model's language abilities while adding real-time speech interaction. Additionally, the authors created a new dataset, VoiceAssistant-400K, to fine-tune the model for better speech output.
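To make the text-instructed, streaming idea concrete, here is a minimal sketch of a decoder loop that emits one text token plus several audio-codec tokens at each step, so audio can be streamed out while the textual "thought" is still being produced. It is an illustration under assumptions, not the authors' implementation: the function names, vocabulary sizes, and the number of codebook layers are all hypothetical placeholders.

import random
random.seed(0)

TEXT_VOCAB, AUDIO_VOCAB = 32000, 4096   # assumed vocabulary sizes
NUM_AUDIO_LAYERS = 7                    # assumed number of audio-codec codebook layers
EOS_TEXT = 0                            # assumed end-of-text token id

def decode_step(history):
    """Stand-in for the model: returns (text_token, [one audio token per layer]).
    A real model would condition the audio tokens on the text token produced at
    the same step, so the speech follows the textual reasoning."""
    text_tok = random.randrange(TEXT_VOCAB)
    audio_toks = [random.randrange(AUDIO_VOCAB) for _ in range(NUM_AUDIO_LAYERS)]
    return text_tok, audio_toks

def generate(max_steps=10):
    history, text_stream, audio_stream = [], [], []
    for _ in range(max_steps):
        text_tok, audio_toks = decode_step(history)
        history.append((text_tok, audio_toks))
        text_stream.append(text_tok)
        audio_stream.append(audio_toks)  # each step's audio tokens can be sent to an audio decoder immediately
        if text_tok == EOS_TEXT:
            break
    return text_stream, audio_stream

text, audio = generate()
print(len(text), "steps;", len(audio[0]), "audio layers per step")

Because audio tokens are produced alongside text tokens rather than after a full text response, playback can begin within the first few decoding steps instead of waiting for a separate TTS pass.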
Why it matters?
This research is important because it represents a significant step forward in human-computer interaction. By allowing AI to communicate more naturally and efficiently, Mini-Omni can enhance applications like virtual assistants, customer service bots, and educational tools, making technology more accessible and user-friendly.
Abstract
Recent advances in language models have been substantial. GPT-4o, as a new milestone, has enabled real-time conversations with humans, demonstrating near-human natural fluency. Such human-computer interaction requires models capable of reasoning directly over the audio modality and generating output in a streaming fashion. However, this remains beyond the reach of current academic models, as they typically depend on extra TTS systems for speech synthesis, resulting in undesirable latency. This paper introduces Mini-Omni, an audio-based end-to-end conversational model capable of real-time speech interaction. To achieve this capability, we propose a text-instructed speech generation method, along with batch-parallel strategies during inference to further boost performance. Our method also helps retain the original model's language capabilities with minimal degradation, enabling other works to establish real-time interaction capabilities. We call this training method "Any Model Can Talk". We also introduce the VoiceAssistant-400K dataset to fine-tune models optimized for speech output. To the best of our knowledge, Mini-Omni is the first fully end-to-end, open-source model for real-time speech interaction, offering valuable potential for future research.
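The abstract's "batch-parallel strategies" can be read as running two copies of the same prompt in one inference batch, letting one copy generate text only and reusing its text tokens to drive the audio-generating copy. The sketch below shows this interpretation only; it is not the authors' exact procedure, and model_step, the token values, and the prompt format are all hypothetical.

def model_step(batch_inputs):
    """Hypothetical single decoding step over a batch; returns one token per batch item."""
    return [hash(str(x)) % 100 for x in batch_inputs]  # stand-in for real logits and sampling

def batch_parallel_generate(prompt, steps=5):
    text_only, text_plus_audio = list(prompt), list(prompt)
    text_out, audio_out = [], []
    for _ in range(steps):
        text_tok, audio_tok = model_step([text_only, text_plus_audio])
        # Copy the text token from the text-only stream into the audio stream,
        # so the spoken output follows the higher-quality textual reasoning.
        text_only.append(text_tok)
        text_plus_audio.append(text_tok)
        text_out.append(text_tok)
        audio_out.append(audio_tok)
    return text_out, audio_out

print(batch_parallel_generate(["<user audio>"]))

The design intuition is that text-only decoding tends to be stronger than simultaneous text-and-audio decoding, so pairing the two in one batch lets the spoken response inherit the quality of the text stream at little extra cost.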