
Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant

Alan Dao, Dinh Bach Vu, Huy Hoang Ha

2024-10-22


Summary

This paper introduces Ichigo, a mixed-modal real-time voice assistant that processes speech and text together in a single model, enabling faster and more natural interactions with users.

What's the problem?

Large Language Models (LLMs) have made great strides in understanding and generating text, but they struggle with tasks that involve both speech and text. Many existing voice systems run speech recognition and language modeling as separate stages, which adds latency and makes conversations feel less natural, and integrating the two modalities into a single model remains technically challenging.

What's the solution?

To solve this problem, the authors developed Ichigo, which uses an early-fusion approach to process speech and text together. Ichigo quantizes spoken audio into discrete tokens, much as text is broken into subword tokens, and feeds both kinds of tokens through a single transformer, so it can understand and generate responses quickly and naturally without a separate speech adapter. The authors also present a comprehensive training approach that includes pre-training on multilingual speech recognition datasets and fine-tuning on a curated instruction dataset. Ichigo has shown strong performance in answering questions based on speech input while keeping response times low. A rough sketch of the early-fusion idea is given below.
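The following is a minimal, hypothetical sketch of tokenized early fusion, not the paper's actual implementation: speech frames are quantized against a codebook, the resulting IDs are offset into the same vocabulary as the text tokens, and the interleaved sequence could then be fed to one transformer. All names, sizes, and the nearest-neighbor quantizer are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes (assumptions, not values from the paper)
TEXT_VOCAB_SIZE = 32_000   # assumed size of the base LLM's text vocabulary
CODEBOOK_SIZE = 512        # assumed number of speech codebook entries
FEATURE_DIM = 80           # assumed dimensionality of each speech frame

rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, FEATURE_DIM))  # stand-in for a trained codebook


def quantize_speech(frames: np.ndarray) -> list[int]:
    """Map each speech frame to its nearest codebook entry, then offset the
    resulting IDs so they live in the shared vocabulary after the text tokens."""
    distances = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    codes = distances.argmin(axis=1)
    return [TEXT_VOCAB_SIZE + int(c) for c in codes]


def build_interleaved_sequence(text_ids: list[int], speech_frames: np.ndarray) -> list[int]:
    """Concatenate text token IDs with quantized speech token IDs into a single
    sequence that one transformer could process (the early-fusion idea)."""
    return text_ids + quantize_speech(speech_frames)


if __name__ == "__main__":
    prompt_ids = [101, 2054, 2003]               # toy text token IDs
    audio = rng.normal(size=(50, FEATURE_DIM))   # 50 frames of toy speech features
    sequence = build_interleaved_sequence(prompt_ids, audio)
    print(len(sequence), sequence[:8])
```

Because speech and text share one token space, the model can reason over and generate either modality with the same weights, rather than routing audio through a separate adapter or a cascaded recognition stage.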

Why it matters?

This research is significant because it advances the capabilities of voice assistants, making them more efficient and capable of handling complex interactions. By improving how AI processes mixed types of data, Ichigo can lead to better user experiences in applications like customer service, education, and personal assistance, where clear communication is essential.

Abstract

Large Language Models (LLMs) have revolutionized natural language processing, but their application to speech-based tasks remains challenging due to the complexities of integrating audio and text modalities. This paper introduces Ichigo, a mixed-modal model that seamlessly processes interleaved sequences of speech and text. Utilizing a tokenized early-fusion approach, Ichigo quantizes speech into discrete tokens and employs a uniform transformer-based architecture for both speech and text modalities. This method enables joint reasoning and generation across modalities without the need for separate adapters. We present a comprehensive training methodology, including pre-training on multilingual speech recognition datasets and fine-tuning on a curated instruction dataset. Ichigo demonstrates state-of-the-art performance on speech question-answering benchmarks, outperforming existing open-source speech language models and achieving comparable results to cascaded systems. Notably, Ichigo exhibits a latency of just 111 ms to first token generation, significantly lower than current models. Our approach not only advances the field of multimodal AI but also provides a framework for smaller research teams to contribute effectively to open-source speech-language models.
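The 111 ms figure in the abstract refers to time-to-first-token latency. Below is a minimal sketch, under assumptions, of how such a latency could be measured against any streaming generation interface; the `fake_stream` generator is a hypothetical stand-in, not Ichigo's API.

```python
import time
from typing import Iterator


def time_to_first_token(token_stream: Iterator[str]) -> float:
    """Return seconds elapsed from the moment generation is requested
    until the first token arrives from the stream."""
    start = time.perf_counter()
    next(token_stream)  # block until the model emits its first token
    return time.perf_counter() - start


if __name__ == "__main__":
    def fake_stream() -> Iterator[str]:
        # Toy stand-in: pretend the model needs ~111 ms before its first token.
        time.sleep(0.111)
        yield "hello"
        yield "world"

    latency = time_to_first_token(fake_stream())
    print(f"time to first token: {latency * 1000:.0f} ms")
```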