T2AV-Compass: Towards Unified Evaluation for Text-to-Audio-Video Generation
Zhe Cao, Tao Wang, Jiaming Wang, Yanghai Wang, Yuanxing Zhang, Jialu Chen, Miao Deng, Jiahao Wang, Yubin Guo, Chenxi Liao, Yize Zhang, Zhaoxiang Zhang, Jiaheng Liu
2025-12-25
Summary
This paper introduces a new benchmark for thoroughly testing systems that create both video and audio from text descriptions alone.
What's the problem?
Evaluating how well these text-to-audio-video systems work is currently messy and incomplete. Existing tests either focus on just the video *or* just the audio, or they don't really check whether the video and audio actually make sense together and follow the instructions given in the text. As a result, it's hard to get a good overall picture of how realistic and accurate these systems are, especially when they're given complicated instructions.
What's the solution?
The researchers created a benchmark called T2AV-Compass, which includes 500 detailed and varied text prompts designed to be challenging. They also developed a two-part evaluation process: first, objective algorithms measure the technical quality of the video and the audio, and how well the two match up; second, a powerful multimodal AI model acts as a judge, assessing whether the output actually follows the instructions and looks and sounds realistic to a human. They then tested 11 different text-to-audio-video systems with this benchmark, combining scores from both parts (a rough sketch of that bookkeeping follows).
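To make the two-part idea concrete, here is a minimal sketch of how per-prompt scores from both levels might be collected and averaged. It is purely illustrative: the score axes mirror the ones named above (video quality, audio quality, cross-modal alignment, instruction following, realism), but the class and function names, the 0-1 scale, and the equal-weight averaging are assumptions for this sketch, not details taken from the paper.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-prompt scores. In a real harness, the first three would
# come from objective signal-level metrics and the last two from an
# MLLM-as-a-Judge rating; here they are just placeholder floats in [0, 1].
@dataclass
class SampleScores:
    video_quality: float          # objective, signal-level
    audio_quality: float          # objective, signal-level
    av_alignment: float           # objective, cross-modal
    instruction_following: float  # subjective, judge-rated
    realism: float                # subjective, judge-rated

def aggregate(samples: list[SampleScores]) -> dict[str, float]:
    """Average each axis over all prompts, then summarize the two levels."""
    axes = ["video_quality", "audio_quality", "av_alignment",
            "instruction_following", "realism"]
    report = {a: mean(getattr(s, a) for s in samples) for a in axes}
    report["objective_level"] = mean(report[a] for a in axes[:3])
    report["subjective_level"] = mean(report[a] for a in axes[3:])
    return report

if __name__ == "__main__":
    demo = [SampleScores(0.71, 0.58, 0.63, 0.66, 0.52),
            SampleScores(0.69, 0.61, 0.60, 0.70, 0.55)]
    for axis, score in aggregate(demo).items():
        print(f"{axis}: {score:.3f}")
```

The point of the dual-level split is diagnostic: a model can score well on signal-level quality while its judge-rated instruction following lags, and reporting the two levels separately makes that gap visible.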
Why it matters?
The results showed that even the best systems still have a lot of room for improvement, particularly in making the audio sound real and ensuring the video and audio are perfectly synchronized. T2AV-Compass provides a much better way to test and improve these systems, helping researchers push the boundaries of what's possible in creating audio and video from text.
Abstract
Text-to-Audio-Video (T2AV) generation aims to synthesize temporally coherent video and semantically synchronized audio from natural language, yet its evaluation remains fragmented, often relying on unimodal metrics or narrowly scoped benchmarks that fail to capture cross-modal alignment, instruction following, and perceptual realism under complex prompts. To address this limitation, we present T2AV-Compass, a unified benchmark for comprehensive evaluation of T2AV systems, consisting of 500 diverse and complex prompts constructed via a taxonomy-driven pipeline to ensure semantic richness and physical plausibility. In addition, T2AV-Compass introduces a dual-level evaluation framework that integrates objective signal-level metrics for video quality, audio quality, and cross-modal alignment with a subjective MLLM-as-a-Judge protocol for instruction following and realism assessment. Extensive evaluation of 11 representative T2AV systems reveals that even the strongest models fall substantially short of human-level realism and cross-modal consistency, with persistent failures in audio realism, fine-grained synchronization, and instruction following. These results indicate substantial room for improvement in future models and highlight the value of T2AV-Compass as a challenging and diagnostic testbed for advancing text-to-audio-video generation.