Beyond Transcription: Unified Audio Schema for Perception-Aware AudioLLMs

arXiv cs.CL / 4/15/2026

Key Points

  • The paper argues that many AudioLLMs underperform on fine-grained acoustic perception because ASR-centric training encourages suppression of paralinguistic and non-linguistic acoustic cues as “noise.”
  • It introduces the Unified Audio Schema (UAS), a structured supervision framework that decomposes audio supervision into Transcription, Paralinguistics, and Non-linguistic Events using a unified JSON format.
  • The approach is designed to improve acoustic coverage while maintaining the audio-text alignment needed for strong reasoning in AudioLLMs.
  • Experiments on discrete and continuous AudioLLM architectures show consistent gains, including a 10.9% improvement in fine-grained perception on MMSU compared with same-size state-of-the-art baselines.
  • The authors report that reasoning capabilities remain robust and provide public code/models via the linked GitHub repository.
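To make the schema idea concrete, here is a minimal sketch of what a UAS-style training record could look like, with the three components the paper names serialized into one JSON target. The field names, label values, and `flatten_to_target` helper are illustrative assumptions, not the paper's actual format:

```python
import json

# Hypothetical UAS-style record: the three components from the paper
# (Transcription, Paralinguistics, Non-linguistic Events) in one structure.
# Exact keys and label vocabularies here are assumptions for illustration.
record = {
    "transcription": "I can't believe we won!",
    "paralinguistics": {
        "emotion": "excited",
        "speaking_rate": "fast",
    },
    "non_linguistic_events": [
        {"event": "crowd_cheering", "start_sec": 2.1, "end_sec": 5.4},
        {"event": "applause", "start_sec": 3.0, "end_sec": 6.2},
    ],
}

def flatten_to_target(rec: dict) -> str:
    """Serialize the structured record into a single JSON string,
    suitable as a text target for an AudioLLM training example."""
    return json.dumps(rec, ensure_ascii=False, sort_keys=True)

target = flatten_to_target(record)
parsed = json.loads(target)
assert set(parsed) == {"transcription", "paralinguistics", "non_linguistic_events"}
```

A single serialized target like this lets one training objective cover linguistic content, speaker/delivery attributes, and acoustic events together, rather than treating everything outside the transcript as noise.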

Abstract

Recent Audio Large Language Models (AudioLLMs) exhibit a striking performance inversion: while excelling at complex reasoning tasks, they consistently underperform on fine-grained acoustic perception. We attribute this gap to a fundamental limitation of ASR-centric training, which provides precise linguistic targets but implicitly teaches models to suppress paralinguistic cues and acoustic events as noise. To address this, we propose Unified Audio Schema (UAS), a holistic and structured supervision framework that organizes audio information into three explicit components -- Transcription, Paralinguistics, and Non-linguistic Events -- within a unified JSON format. This design achieves comprehensive acoustic coverage without sacrificing the tight audio-text alignment that enables reasoning. We validate the effectiveness of this supervision strategy by applying it to both discrete and continuous AudioLLM architectures. Extensive experiments on MMSU, MMAR, and MMAU demonstrate that UAS-Audio yields consistent improvements, boosting fine-grained perception by 10.9% on MMSU over the same-size state-of-the-art models while preserving robust reasoning capabilities. Our code and model are publicly available at https://github.com/Tencent/Unified_Audio_Schema.