SHANKS: Simultaneous Hearing and Thinking for Spoken Language Models
Cheng-Han Chiang, Xiaofei Wang, Linjie Li, Chung-Ching Lin, Kevin Lin, Shujie Liu, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
2025-10-09
Summary
This paper introduces SHANKS, a framework that lets speech-based AI models think while they listen, so they can respond more like humans do in conversation.
What's the problem?
Current AI models wait until you *finish* speaking before they start to process what you said and formulate a response. This delay, or latency, makes real-time spoken conversation feel slow and unnatural. It's like trying to have a back-and-forth discussion with someone who needs a long pause after every sentence to figure out what to say next.
What's the solution?
The researchers noticed that people naturally start thinking *while* they're listening. So they created SHANKS, a system that lets the AI generate its own internal reasoning (a "chain of thought") as it receives speech in small chunks. As soon as a piece of speech comes in, SHANKS starts reasoning about it, and it can even decide to interrupt you if it thinks you've made a mistake or need help. It can also start calling tools to work on a problem before you finish explaining it, as the sketch below illustrates.
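To make the mechanics concrete, here is a minimal sketch of what such a "think while listening" loop could look like. Every name in it (the model's methods, the microphone interface, the chunk length) is a hypothetical stand-in based on the description above, not the authors' actual implementation.

```python
# Minimal sketch of a SHANKS-style "think while listening" loop.
# All names (slm, mic, CHUNK_SECONDS, etc.) are hypothetical
# illustrations, not the paper's real API.

CHUNK_SECONDS = 2.0  # fixed-duration speech chunks (assumed value)

def think_while_listening(slm, mic):
    speech_so_far = []      # all audio chunks received so far
    reasoning_so_far = []   # unspoken chain of thought, never said aloud

    while mic.user_is_speaking():
        chunk = mic.get_next_chunk(CHUNK_SECONDS)
        speech_so_far.append(chunk)

        # Generate hidden reasoning conditioned on ALL prior speech
        # and ALL prior reasoning, while the user keeps talking.
        thought = slm.generate_reasoning(speech_so_far, reasoning_so_far)
        reasoning_so_far.append(thought)

        # The reasoning may trigger tool calls before the turn ends...
        for call in thought.tool_calls:
            reasoning_so_far.append(call.execute())

        # ...or a decision to interrupt (e.g., the user made a mistake).
        if thought.should_interrupt:
            return slm.speak(speech_so_far, reasoning_so_far)

    # User finished the turn: respond using reasoning already done.
    return slm.speak(speech_so_far, reasoning_so_far)
```

The key point of this design is that reasoning accumulates chunk by chunk, so by the time the user stops speaking (or is interrupted), most of the thinking, and possibly some tool calls, are already complete; that is where the latency savings come from.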
Why it matters?
SHANKS is a step towards creating AI assistants that feel more responsive and natural to interact with. By 'thinking while listening,' the AI can provide help and complete tasks much faster, making conversations flow more smoothly and efficiently. This is especially important for voice-based interactions where quick responses are key.
Abstract
Current large language models (LLMs) and spoken language models (SLMs) begin thinking and taking actions only after the user has finished their turn. This prevents the model from interacting during the user's turn and can lead to high response latency while it waits to think. Consequently, thinking after receiving the full input is not suitable for speech-to-speech interaction, where real-time, low-latency exchange is important. We address this by noting that humans naturally "think while listening." In this paper, we propose SHANKS, a general inference framework that enables SLMs to generate unspoken chain-of-thought reasoning while listening to the user input. SHANKS streams the input speech in fixed-duration chunks and, as soon as a chunk is received, generates unspoken reasoning based on all previous speech and reasoning, while the user continues speaking. SHANKS uses this unspoken reasoning to decide whether to interrupt the user and to make tool calls to complete the task. We demonstrate that SHANKS enhances real-time user-SLM interaction in two scenarios: (1) when the user is presenting a step-by-step solution to a math problem, SHANKS can listen, reason, and interrupt when the user makes a mistake, achieving 37.1% higher interruption accuracy than a baseline that interrupts without thinking; and (2) in a tool-augmented dialogue, SHANKS can complete 56.9% of the tool calls before the user finishes their turn. Overall, SHANKS moves toward models that keep thinking throughout the conversation, not only after a turn ends. Animated illustrations of SHANKS can be found at https://d223302.github.io/SHANKS/