Resurfacing Paralinguistic Awareness in Large Audio Language Models

arXiv cs.CL / 3/13/2026

Key Points

  • Large Audio Language Models typically neglect paralinguistic cues due to a content-centered paradigm, which this work aims to address.
  • The authors introduce five diverse layer-wise analyses to jointly identify paralinguistic layers and semantic understanding layers within LALMs.
  • They propose a paralinguistic-enhanced fine-tuning (PE-FT) protocol, including selective-layer fine-tuning and an auxiliary dual-level classification head.
  • Experiments demonstrate that PE-FT efficiently resurfaces paralinguistic awareness and can surpass the performance of all-layer fine-tuning.
  • The findings suggest potential improvements in human-LALM interaction by leveraging paralinguistic cues to enrich model understanding and responses.

Abstract

Large Audio Language Models (LALMs) have extended human-model interaction to the speech modality, which carries great interactive potential because paralinguistic cues implicitly indicate the user's context. However, built on the current content-centered paradigm, LALMs usually neglect such paralinguistic cues and respond solely based on query content. In this work, to resurface paralinguistic awareness in LALMs, we introduce five diverse layer-wise analyses that jointly identify paralinguistic layers and semantic understanding layers. Based on these insights, we propose a paralinguistic-enhanced fine-tuning (PE-FT) protocol to equip LALMs with paralinguistic-aware capabilities, comprising (1) selective-layer fine-tuning and (2) an auxiliary dual-level classification head. Our experiments demonstrate that the PE-FT protocol efficiently and effectively resurfaces paralinguistic awareness, even surpassing the performance of the all-layer fine-tuning strategy.
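To make the two PE-FT ingredients concrete, here is a minimal PyTorch sketch of selective-layer fine-tuning combined with an auxiliary dual-level classification head. Everything here is an illustrative assumption, not the paper's implementation: the toy model, the choice of which layers to unfreeze (indices 2-4 stand in for the "paralinguistic layers" the analyses would identify), and the two label spaces predicted by the dual-level head are all hypothetical.

```python
import torch
from torch import nn

class ToyLALM(nn.Module):
    """Hypothetical stand-in for an LALM backbone: a stack of transformer blocks
    plus an auxiliary dual-level head (coarse and fine paralinguistic labels)."""

    def __init__(self, n_layers=8, d_model=32, n_coarse=4, n_fine=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        ])
        # Dual-level classification head (sketch): two linear probes over the
        # pooled hidden state, one per label granularity.
        self.coarse_head = nn.Linear(d_model, n_coarse)
        self.fine_head = nn.Linear(d_model, n_fine)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        pooled = x.mean(dim=1)  # mean-pool over the sequence dimension
        return self.coarse_head(pooled), self.fine_head(pooled)

def selective_layer_finetune(model, trainable_layers):
    """Freeze every transformer block except the selected indices;
    the auxiliary heads are left trainable."""
    for i, layer in enumerate(model.layers):
        for p in layer.parameters():
            p.requires_grad = i in trainable_layers

model = ToyLALM()
# Unfreeze only the (assumed) paralinguistic layers.
selective_layer_finetune(model, trainable_layers={2, 3, 4})
```

During fine-tuning, the optimizer would then be built over `filter(lambda p: p.requires_grad, model.parameters())`, so gradient updates touch only the selected layers and the dual-level head.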