AHELM: A Holistic Evaluation of Audio-Language Models
Tony Lee, Haoqin Tu, Chi Heem Wong, Zijun Wang, Siwei Yang, Yifan Mai, Yuyin Zhou, Cihang Xie, Percy Liang
2025-09-01
Summary
This paper introduces AHELM, a new way to test audio-language models, which are AI systems that understand both sound and text. It's designed to give a complete picture of how well these models work, going beyond simple tasks to look at things like fairness and safety.
What's the problem?
Currently, testing these audio-language models is a mess. There aren't consistent standards for what to test or how to test it, making it hard to compare different models. Existing tests usually focus on just a few skills and often ignore important issues like whether the model is biased or safe to use. Plus, each test uses different methods, so it's like comparing apples and oranges.
What's the solution?
The researchers created AHELM, a comprehensive benchmark that combines many different datasets, including two new ones called PARADE and CoRe-Bench. PARADE checks whether the model avoids stereotypes, while CoRe-Bench tests its ability to reason over multi-turn conversations. AHELM evaluates models on ten key areas: understanding audio, general knowledge, reasoning, emotion detection, bias, fairness, handling multiple languages, robustness, avoiding toxic outputs, and overall safety. They also made sure every model is tested with the same prompts and inference settings, so the results are truly comparable. They tested 14 different models, including some of the most advanced ones.
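The core idea of the standardization step, same prompts, same inference parameters, and per-aspect score aggregation for every model, can be sketched in plain Python. This is an illustrative sketch only: the ten aspect names come from the paper, but the `Model` interface, parameter values, graders, and aggregation shown here are hypothetical, not AHELM's actual implementation (which, per the abstract, aggregates with mean win rate across models).

```python
# Illustrative sketch of a standardized evaluation harness in the spirit of
# AHELM: every model receives the same prompts and inference parameters, and
# scores are averaged per aspect so rankings are directly comparable.

# The ten aspects named in the paper.
ASPECTS = [
    "audio perception", "knowledge", "reasoning", "emotion detection",
    "bias", "fairness", "multilinguality", "robustness", "toxicity", "safety",
]

# Fixed inference parameters shared by every model (values are illustrative).
INFERENCE_PARAMS = {"temperature": 0.0, "max_tokens": 256}

def evaluate(model_fn, instances):
    """Score one model on a list of (aspect, prompt, grader) instances.

    model_fn(prompt, **params) -> output text
    grader(output) -> score in [0, 1]
    Returns the mean score per aspect.
    """
    totals, counts = {}, {}
    for aspect, prompt, grader in instances:
        output = model_fn(prompt, **INFERENCE_PARAMS)
        totals[aspect] = totals.get(aspect, 0.0) + grader(output)
        counts[aspect] = counts.get(aspect, 0) + 1
    return {aspect: totals[aspect] / counts[aspect] for aspect in totals}

def overall_score(per_aspect_scores):
    """Collapse per-aspect scores into one number (a simple mean here)."""
    return sum(per_aspect_scores.values()) / len(per_aspect_scores)
```

Because `INFERENCE_PARAMS` is fixed outside `evaluate`, no model can gain an edge from a favorable temperature or output-length setting, which is the point of the standardization the paper describes.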
Why it matters?
This work is important because it provides a standardized and thorough way to evaluate audio-language models, which will help developers build better, fairer, and safer AI systems. The findings, such as a top-ranked model showing statistically significant unfairness in speech recognition across speaker groups, highlight where improvement is needed. By making the testing process transparent and open to updates, AHELM will continue to be a valuable tool for the AI community.
Abstract
Evaluations of audio-language models (ALMs) -- multimodal models that take interleaved audio and text as input and output text -- are hindered by the lack of standardized benchmarks; most benchmarks measure only one or two capabilities and omit evaluative aspects such as fairness or safety. Furthermore, comparison across models is difficult as separate evaluations test a limited number of models and use different prompting methods and inference parameters. To address these shortfalls, we introduce AHELM, a benchmark that aggregates various datasets -- including 2 new synthetic audio-text datasets called PARADE, which evaluates the ALMs on avoiding stereotypes, and CoRe-Bench, which measures reasoning over conversational audio through inferential multi-turn question answering -- to holistically measure the performance of ALMs across 10 aspects we have identified as important to the development and usage of ALMs: audio perception, knowledge, reasoning, emotion detection, bias, fairness, multilinguality, robustness, toxicity, and safety. We also standardize the prompts, inference parameters, and evaluation metrics to ensure equitable comparisons across models. We test 14 open-weight and closed-API ALMs from 3 developers and 3 additional simple baseline systems each consisting of an automatic speech recognizer and a language model. Our results show that while Gemini 2.5 Pro ranks top in 5 out of 10 aspects, it exhibits group unfairness (p=0.01) on ASR tasks whereas most of the other models do not. We also find that the baseline systems perform reasonably well on AHELM, with one ranking 5th overall despite having only speech-to-text capabilities. For transparency, all raw prompts, model generations, and outputs are available on our website at https://crfm.stanford.edu/helm/audio/v1.0.0. AHELM is intended to be a living benchmark and new datasets and models will be added over time.
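The abstract's group-unfairness finding on ASR (p=0.01) can be illustrated with a simple statistical check: compare word error rates (WER) between two speaker groups and ask how often a gap that large would arise by chance. The permutation test below is a generic sketch of that kind of check, not the paper's actual methodology, and the group labels and WER values are hypothetical.

```python
import random

def permutation_test(wer_group_a, wer_group_b, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in mean WER between two
    speaker groups. Returns the estimated p-value: the fraction of random
    relabelings whose mean gap is at least as large as the observed one.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed_gap = abs(mean(wer_group_a) - mean(wer_group_b))
    pooled = list(wer_group_a) + list(wer_group_b)
    n_a = len(wer_group_a)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # randomly reassign utterances to groups
        gap = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if gap >= observed_gap:
            hits += 1
    return hits / n_resamples
```

Under this kind of test, a p-value below a threshold such as 0.05 (as with the paper's reported p=0.01 for Gemini 2.5 Pro) indicates the WER gap between groups is unlikely to be explained by chance alone.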