Language Model Can Listen While Speaking
Ziyang Ma, Yakun Song, Chenpeng Du, Jian Cong, Zhuo Chen, Yuping Wang, Yuxuan Wang, Xie Chen
2024-08-06

Summary
This paper introduces a new model called the Listening-while-Speaking Language Model (LSLM), which allows AI systems to listen and speak at the same time. This makes conversations with computers more natural and interactive.
What's the problem?
Most current speech language models can only respond after a person finishes speaking, which makes interactions feel awkward and slow. This turn-based design limits how effectively people can communicate with AI, especially when they want to interrupt or ask follow-up questions in real time.
What's the solution?
The authors developed LSLM, which uses full duplex modeling (FDM) to listen and speak simultaneously. The model has two channels: a token-based, decoder-only text-to-speech (TTS) component that generates speech, and a streaming self-supervised learning (SSL) encoder that processes incoming audio in real time. The authors tested three ways of fusing these channels (early fusion, middle fusion, and late fusion) and found that middle fusion struck the best balance between speech-generation quality and real-time interaction. The resulting model can detect and handle interruptions and remains robust in noisy environments.
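To make the middle-fusion idea more concrete, here is a minimal PyTorch sketch of one decoder layer in which projected listening features are added to the speaking channel's hidden states before self-attention. The class name, layer sizes, additive merge, and the assumption that the two streams are time-aligned are illustrative choices for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class MiddleFusionBlock(nn.Module):
    """Hypothetical middle-fusion decoder layer: listening features are
    merged into the speaking channel inside the layer, rather than at the
    input (early fusion) or at the output logits (late fusion)."""

    def __init__(self, d_model: int, n_heads: int, d_listen: int):
        super().__init__()
        self.listen_proj = nn.Linear(d_listen, d_model)  # map SSL features to decoder width
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, listen_feat, attn_mask=None):
        # Fuse the listening channel into the speaking channel. This assumes
        # one listening frame per speech token (time-aligned streams).
        x = x + self.listen_proj(listen_feat)
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + a                        # residual self-attention
        x = x + self.ffn(self.norm2(x))  # residual feed-forward
        return x

# Toy usage (a causal attn_mask would be needed for real autoregressive decoding):
block = MiddleFusionBlock(d_model=512, n_heads=8, d_listen=256)
x = torch.randn(2, 50, 512)       # speaking-channel hidden states (batch, steps, dim)
listen = torch.randn(2, 50, 256)  # time-aligned listening features
out = block(x, listen)            # fused hidden states, shape (2, 50, 512)
```

Early fusion would instead concatenate or add the two streams once at the input embeddings, and late fusion would combine them only at the output logits; merging inside each layer is what lets the listening signal steer generation at every step.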
Why it matters?
This research is significant because it enhances human-computer interaction by making conversations with AI more fluid and responsive. By allowing real-time interruptions and interactions, LSLM can improve applications like virtual assistants, customer service bots, and other voice-activated technologies, making them more user-friendly and effective.
Abstract
Dialogue serves as the most natural manner of human-computer interaction (HCI). Recent advancements in speech language models (SLM) have significantly enhanced speech-based conversational AI. However, these models are limited to turn-based conversation, lacking the ability to interact with humans in real-time spoken scenarios, for example, being interrupted when the generated content is not satisfactory. To address these limitations, we explore full duplex modeling (FDM) in interactive speech language models (iSLM), focusing on enhancing real-time interaction and, more explicitly, exploring the quintessential ability of interruption. We introduce a novel model design, namely listening-while-speaking language model (LSLM), an end-to-end system equipped with both listening and speaking channels. Our LSLM employs a token-based decoder-only TTS for speech generation and a streaming self-supervised learning (SSL) encoder for real-time audio input. LSLM fuses both channels for autoregressive generation and detects turn-taking in real time. Three fusion strategies -- early fusion, middle fusion, and late fusion -- are explored, with middle fusion achieving an optimal balance between speech generation and real-time interaction. Two experimental settings, command-based FDM and voice-based FDM, demonstrate LSLM's robustness to noise and sensitivity to diverse instructions. Our results highlight LSLM's capability to achieve duplex communication with minimal impact on existing systems. This study aims to advance the development of interactive speech dialogue systems, enhancing their applicability in real-world contexts.
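As a rough illustration of the duplex decoding loop the abstract describes, the sketch below consumes one streaming audio frame per step while emitting speech tokens, and stops speaking when the model predicts an interrupt token. The model and ssl_encoder interfaces, the IRQ_TOKEN id, and the greedy decoding are all assumptions made for this sketch, not the paper's API.

```python
import torch

IRQ_TOKEN = 1024  # hypothetical id for the turn-taking (interrupt) token

@torch.no_grad()
def duplex_generate(model, ssl_encoder, prompt, mic_stream, max_steps=500):
    """Sketch of listening-while-speaking decoding: at every step the model
    sees one newly listened frame plus all speech tokens produced so far,
    and either emits the next speech token or signals a turn-taking."""
    tokens = list(prompt)                    # speaking channel (discrete tokens)
    listened = []                            # listening channel (SSL features)
    for frame in mic_stream:                 # streaming microphone input
        listened.append(ssl_encoder(frame))  # real-time feature, shape (1, d_listen)
        listen_feats = torch.stack(listened, dim=1)       # (1, t, d_listen)
        logits = model(torch.tensor([tokens]), listen_feats)
        next_tok = int(logits[0, -1].argmax())
        if next_tok == IRQ_TOKEN:            # turn-taking detected: yield the floor
            break
        tokens.append(next_tok)
        if len(tokens) >= max_steps:
            break
    return tokens
```

This detection step is what the command-based and voice-based FDM settings probe: the model should stop speaking when a genuine interruption arrives in the listening channel, while continuing to speak through background noise.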