AI Navigate

DEAF: A Benchmark for Diagnostic Evaluation of Acoustic Faithfulness in Audio Language Models

arXiv cs.AI / 3/20/2026


Key Points

  • The paper introduces DEAF, a benchmark for diagnostic evaluation of acoustic faithfulness in Audio MLLMs, featuring over 2,700 conflict stimuli across emotional prosody, background sounds, and speaker identity.
  • It presents a controlled multi-level evaluation framework that progressively increases textual influence to separate content-driven bias from prompt-induced sycophancy.
  • It defines diagnostic metrics that quantify how much a model relies on textual cues versus acoustic signals; a minimal sketch of one such metric follows this list.
  • Evaluations of seven Audio MLLMs show a pattern of text dominance: models are sensitive to acoustic variations but predictions are mainly driven by textual inputs, signaling a gap between benchmark performance and true acoustic understanding.
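
The summary names diagnostic metrics for text reliance but does not spell out their formulas. As a rough illustration only, the sketch below computes one plausible such quantity: among conflict stimuli where the model commits to either cue, the fraction of answers that follow the misleading textual cue rather than the acoustic ground truth. The function name, dictionary keys, and the rule for ignoring undecided answers are assumptions, not the paper's definitions.

```python
from typing import Iterable


def text_reliance_rate(examples: Iterable[dict]) -> float:
    """Fraction of conflict stimuli resolved toward the textual cue.

    Each example dict is assumed to carry:
      - 'prediction':     the model's label on the conflict stimulus
      - 'acoustic_label': ground-truth label implied by the audio signal
      - 'text_label':     contradictory label implied by the text content or prompt
    """
    followed_text = 0
    decided = 0
    for ex in examples:
        pred = ex["prediction"]
        if pred == ex["text_label"]:
            followed_text += 1
            decided += 1
        elif pred == ex["acoustic_label"]:
            decided += 1
        # answers matching neither cue are ignored in this simple variant
    return followed_text / decided if decided else 0.0


# Toy usage: one answer follows the text cue, one follows the audio -> 0.5
examples = [
    {"prediction": "happy", "acoustic_label": "sad", "text_label": "happy"},
    {"prediction": "sad",   "acoustic_label": "sad", "text_label": "happy"},
]
print(text_reliance_rate(examples))  # 0.5
```

A score near 1.0 under this definition would indicate the text-dominance pattern the paper reports, while a score near 0.0 would indicate faithful use of the acoustic signal.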

Abstract

Recent Audio Multimodal Large Language Models (Audio MLLMs) demonstrate impressive performance on speech benchmarks, yet it remains unclear whether these models genuinely process acoustic signals or rely on text-based semantic inference. To study this question systematically, we introduce DEAF (Diagnostic Evaluation of Acoustic Faithfulness), a benchmark of over 2,700 conflict stimuli spanning three acoustic dimensions: emotional prosody, background sounds, and speaker identity. We then design a controlled multi-level evaluation framework that progressively increases textual influence, ranging from semantic conflicts in the content to misleading prompts and their combination, allowing us to disentangle content-driven bias from prompt-induced sycophancy. We further introduce diagnostic metrics to quantify model reliance on textual cues over acoustic signals. Our evaluation of seven Audio MLLMs reveals a consistent pattern of text dominance: models are sensitive to acoustic variations, yet their predictions are predominantly driven by textual inputs, exposing a gap between high performance on standard speech benchmarks and genuine acoustic understanding.
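
The abstract describes three progressively stronger levels of textual influence: conflicting spoken content, a misleading prompt, and their combination. The sketch below only illustrates how such trial variants might be assembled for the emotional-prosody dimension; the names ConflictTrial and build_levels, the example question, and the pairing of audio variants are illustrative assumptions, not the authors' stimulus-construction pipeline.

```python
from dataclasses import dataclass


@dataclass
class ConflictTrial:
    audio_path: str             # audio whose acoustic cue (e.g. sad prosody) defines the ground truth
    has_content_conflict: bool  # spoken words semantically contradict the acoustic cue
    prompt: str                 # question shown to the model, possibly with a misleading assertion


def build_levels(audio_neutral: str, audio_conflicting: str, misleading_label: str) -> list[ConflictTrial]:
    """Assemble the three levels of increasing textual pressure for one stimulus."""
    question = "What emotion does the speaker's tone convey?"
    misleading_prompt = f"The speaker sounds {misleading_label}. {question}"
    return [
        # Level 1: conflict lives only in the spoken content of the audio
        ConflictTrial(audio_conflicting, True, question),
        # Level 2: neutral content, but the prompt asserts the wrong answer (sycophancy probe)
        ConflictTrial(audio_neutral, False, misleading_prompt),
        # Level 3: conflicting content plus the misleading prompt, combined
        ConflictTrial(audio_conflicting, True, misleading_prompt),
    ]


trials = build_levels("sad_neutral_words.wav", "sad_happy_words.wav", "happy")
for t in trials:
    print(t.audio_path, t.has_content_conflict, "|", t.prompt)
```

Comparing a model's answers across these three levels against the acoustic ground truth is what lets the benchmark separate content-driven bias (level 1) from prompt-induced sycophancy (level 2) and their compounding effect (level 3).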