MMAU-Pro: A Challenging and Comprehensive Benchmark for Holistic Evaluation of Audio General Intelligence

Sonal Kumar, Šimon Sedláček, Vaibhavi Lokegaonkar, Fernando López, Wenyi Yu, Nishit Anand, Hyeonggon Ryu, Lichang Chen, Maxim Plička, Miroslav Hlaváček, William Fineas Ellingwood, Sathvik Udupa, Siyuan Hou, Allison Ferner, Sara Barahona, Cecilia Bolaños, Satish Rahi, Laura Herrera-Alarcón, Satvik Dixit, Siddhi Patil, Soham Deshmukh, Lasha Koroshinadze

2025-08-20

Summary

This paper introduces MMAU-Pro, a new, comprehensive set of tests to measure how well AI can understand sounds and speech, including complex scenarios like music and sounds from real-world environments. It also tests how well AI can reason about where sounds come from in space and understand multiple sounds at once. The authors tested 22 AI models and found that even the best ones struggle, often performing no better than random guessing on certain tasks. This research aims to help AI developers build smarter systems that can truly 'hear' and understand the world the way humans do.

What's the problem?

It's currently difficult to properly test how well AI systems understand all types of audio, like spoken words, everyday sounds, and music. Existing tests don't cover a wide enough range of skills or complex audio situations, making it hard to know how good AI really is at 'listening'. We need a better way to evaluate this crucial ability for AI to become truly intelligent.

What's the solution?

The researchers created MMAU-Pro, which is a large collection of audio examples with carefully designed questions and answers. This benchmark covers 49 different audio understanding skills, including understanding long audio clips, sounds coming from different directions, and dealing with multiple sounds happening together. The questions require AI to think step-by-step and can be multiple-choice or open-ended. The audio used is taken from real-life situations, not just artificial examples, to make the testing more realistic. They then used MMAU-Pro to test 22 different AI models, highlighting their weaknesses.
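The kind of evaluation described above, scoring each model's answers per skill category and checking whether accuracy is meaningfully above a random-guessing baseline, can be sketched roughly as follows. This is an illustrative sketch, not the paper's actual evaluation code; the function and field names are hypothetical.

```python
from collections import defaultdict

def score_benchmark(instances, chance=0.25, margin=0.05):
    """Compute per-category accuracy and flag categories near random chance.

    `instances` is a list of dicts with hypothetical fields:
    "category" (the skill being tested), "model_answer", and "gold_answer".
    `chance` is the random-guessing baseline (25% for 4-way multiple choice).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for inst in instances:
        cat = inst["category"]
        total[cat] += 1
        if inst["model_answer"] == inst["gold_answer"]:
            correct[cat] += 1

    report = {}
    for cat in total:
        acc = correct[cat] / total[cat]
        # A model "approaches random performance" when its accuracy
        # sits within a small margin of the guessing baseline.
        report[cat] = {"accuracy": acc,
                       "near_random": abs(acc - chance) < margin}
    return report

# Toy example with two made-up skill categories
instances = [
    {"category": "spatial", "model_answer": "A", "gold_answer": "B"},
    {"category": "spatial", "model_answer": "C", "gold_answer": "C"},
    {"category": "music",   "model_answer": "D", "gold_answer": "D"},
    {"category": "music",   "model_answer": "A", "gold_answer": "A"},
]
print(score_benchmark(instances))
```

With 49 skill categories, a per-category breakdown like this is what lets the authors pinpoint exactly where models collapse to chance-level accuracy rather than reporting a single averaged score.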

Why it matters?

Understanding audio is a fundamental part of human intelligence, and for AI to reach human-level intelligence, it needs to be able to comprehend audio just as well. This benchmark, MMAU-Pro, is important because it provides a much more thorough and realistic way to measure AI's auditory skills. By identifying specific areas where current AI models fall short, this work gives developers clear goals and directions for improving AI so it can better understand and interact with the sound-filled world around us.

Abstract

Audio comprehension, including speech, non-speech sounds, and music, is essential for achieving human-level intelligence. Consequently, AI agents must demonstrate holistic audio understanding to qualify as generally intelligent. However, evaluating auditory intelligence comprehensively remains challenging. To address this gap, we introduce MMAU-Pro, the most comprehensive and rigorously curated benchmark for assessing audio intelligence in AI systems. MMAU-Pro contains 5,305 instances, where each instance has one or more audio clips paired with human expert-generated question-answer pairs, spanning speech, sound, music, and their combinations. Unlike existing benchmarks, MMAU-Pro evaluates auditory intelligence across 49 unique skills and multiple complex dimensions, including long-form audio comprehension, spatial audio reasoning, and multi-audio understanding. All questions are meticulously designed to require deliberate multi-hop reasoning, including both multiple-choice and open-ended response formats. Importantly, audio data is sourced directly "from the wild" rather than from existing datasets with known distributions. We evaluate 22 leading open-source and proprietary multimodal AI models, revealing significant limitations: even state-of-the-art models such as Gemini 2.5 Flash and Audio Flamingo 3 achieve only 59.2% and 51.7% accuracy, respectively, approaching random performance in multiple categories. Our extensive analysis highlights specific shortcomings and provides novel insights, offering actionable perspectives for the community to enhance future AI systems' progression toward audio general intelligence. The benchmark and code are available at https://sonalkum.github.io/mmau-pro.