POWSM: A Phonetic Open Whisper-Style Speech Foundation Model

Chin-Jou Li, Kalvin Chang, Shikhar Bharadwaj, Eunjung Yeo, Kwanghee Choi, Jian Zhu, David Mortensen, Shinji Watanabe

2025-10-31

Summary

This paper introduces a new model called POWSM that can handle multiple speech-related tasks at once: converting speech to written text, recognizing the individual phonetic sounds in speech, turning written text into phonetic symbols, and turning phonetic symbols back into written text.

What's the problem?

Traditionally, closely related speech tasks, such as automatic speech recognition, recognizing the individual sounds (phones) in speech, and converting between written text and phonetic symbols, have each been treated as a separate problem requiring its own specialized architecture and data. This is inefficient and prevents knowledge from being shared between these related areas.

What's the solution?

The researchers created POWSM, a single model that performs all of these phonetic tasks together. It is designed to move seamlessly between audio, written text, and the individual sounds (phones) that make up speech. They showed that POWSM matches or outperforms similarly sized models that were built for just one of these tasks.
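In Whisper-style models, a single decoder is typically steered toward one task by special tokens placed at the start of the prompt. The sketch below illustrates that general idea for POWSM's four tasks; the token names are illustrative assumptions, not POWSM's actual vocabulary.

```python
# Illustrative sketch of Whisper-style multitask prompting.
# Token strings here are hypothetical, not POWSM's real special tokens.

def build_prompt(task: str, lang: str = "<|en|>") -> list[str]:
    """Build a Whisper-style decoder prefix selecting one phonetic task."""
    tasks = {
        "asr": "<|transcribe|>",  # audio -> graphemes (written text)
        "pr":  "<|phone|>",       # audio -> phones
        "g2p": "<|g2p|>",         # graphemes -> phones
        "p2g": "<|p2g|>",         # phones -> graphemes
    }
    if task not in tasks:
        raise ValueError(f"unknown task: {task}")
    # The same model weights serve every task; only this prefix changes.
    return ["<|startoftranscript|>", lang, tasks[task]]

print(build_prompt("pr"))  # ['<|startoftranscript|>', '<|en|>', '<|phone|>']
```

Because only the task token differs between prompts, all four tasks share one set of model weights, which is what lets knowledge transfer between them.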

Why it matters?

This is a step towards more versatile and efficient speech processing systems. A unified model like POWSM is especially helpful for low-resource languages with little existing data, because it can transfer what it learns across tasks. The researchers also released their training data, code, and models publicly, encouraging further research in this area.

Abstract

Recent advances in spoken language processing have led to substantial progress in phonetic tasks such as automatic speech recognition (ASR), phone recognition (PR), grapheme-to-phoneme conversion (G2P), and phoneme-to-grapheme conversion (P2G). Despite their conceptual similarity, these tasks have largely been studied in isolation, each relying on task-specific architectures and datasets. In this paper, we introduce POWSM (Phonetic Open Whisper-style Speech Model), the first unified framework capable of jointly performing multiple phone-related tasks. POWSM enables seamless conversion between audio, text (graphemes), and phones, opening up new possibilities for universal and low-resource speech processing. Our model outperforms or matches specialized PR models of similar size (Wav2Vec2Phoneme and ZIPA) while jointly supporting G2P, P2G, and ASR. Our training data, code and models are released to foster open science.