S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information

Feng Jiang, Zhiyu Lin, Fan Bu, Yuhao Du, Benyou Wang, Haizhou Li

2025-03-10

Summary

This paper introduces S2S-Arena, a new benchmark for testing how well AI speech models can understand and use paralinguistic information: the tone, emotion, and emphasis carried in speech.

What's the problem?

Current methods for testing speech-to-speech AI models don't account for paralinguistic information, which is essential for natural-sounding speech. As a result, we don't know how well these models actually understand and reproduce these important aspects of human speech.

What's the solution?

The researchers created S2S-Arena, a benchmark of 154 speech samples covering 21 tasks across four everyday domains. The samples mix synthesized (text-to-speech) and live-recorded human speech. They then used this benchmark to evaluate popular speech models, comparing them head to head on how well they understand and generate speech with paralinguistic information.
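In an arena-style evaluation, human judges compare two models' responses to the same prompt and vote for the better one; the votes are then aggregated into a ranking. The paper does not spell out its aggregation formula, but a common choice for arena benchmarks is an Elo-style rating. A minimal sketch of that idea, with hypothetical model names and votes:

```python
# Sketch of arena-style pairwise evaluation: aggregate human votes
# between pairs of models into Elo ratings. Model names and votes
# below are illustrative placeholders, not data from the paper.

def update_elo(r_a, r_b, score_a, k=32):
    """Standard Elo update; score_a is 1.0 (A wins), 0.0 (B wins), 0.5 (tie)."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

def rank_models(votes, base=1000):
    """votes: list of (model_a, model_b, score_a) pairwise judgments."""
    ratings = {}
    for a, b, score_a in votes:
        ra = ratings.setdefault(a, base)
        rb = ratings.setdefault(b, base)
        ratings[a], ratings[b] = update_elo(ra, rb, score_a)
    # Sort models by rating, best first.
    return dict(sorted(ratings.items(), key=lambda kv: -kv[1]))

votes = [
    ("model_x", "model_y", 1.0),  # judge preferred model_x
    ("model_y", "model_z", 0.5),  # tie
    ("model_x", "model_z", 1.0),  # judge preferred model_x
]
print(rank_models(votes))
```

Because each comparison is relative, this scheme sidesteps the need for an absolute scoring rubric, which is why arena-style setups are popular for subjective qualities like paralinguistic expressiveness.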

Why it matters?

This matters because it helps us understand how well AI speech models can truly communicate like humans. By testing for paralinguistic information, we can build AI assistants that sound more natural and grasp the subtleties of human speech. This could lead to more effective and user-friendly voice-based technologies in fields like education, healthcare, and customer service.

Abstract

The rapid development of large language models (LLMs) has brought significant attention to speech models, particularly recent progress in speech2speech protocols supporting speech input and output. However, existing benchmarks, which adopt automatic text-based evaluators to assess the instruction-following ability of these models, lack consideration for paralinguistic information in both speech understanding and generation. To address these issues, we introduce S2S-Arena, a novel arena-style S2S benchmark that evaluates instruction-following capabilities with paralinguistic information in both speech-in and speech-out across real-world tasks. We design 154 samples that fuse TTS and live recordings across four domains with 21 tasks, and manually evaluate existing popular speech models in an arena-style manner. The experimental results show that: (1) in addition to the superior performance of GPT-4o, the cascaded ASR, LLM, and TTS pipeline outperforms the jointly trained model after text-speech alignment in speech2speech protocols; (2) considering paralinguistic information, the knowledgeability of a speech model mainly depends on its LLM backbone, while its multilingual support is limited by its speech module; (3) excellent speech models can already understand the paralinguistic information in speech input, but generating appropriate audio with paralinguistic information remains a challenge.