SpeechParaling-Bench: A Comprehensive Benchmark for Paralinguistic-Aware Speech Generation

arXiv cs.CL / 4/23/2026

📰 News · Models & Research

Key Points

  • The paper introduces SpeechParaling-Bench, a new benchmark for evaluating paralinguistic-aware speech generation in large audio-language models (LALMs), targeting two weaknesses of prior evaluations: coarse feature coverage and subjective assessment.
  • It expands paralinguistic feature coverage from under 50 to over 100 fine-grained features and provides 1,000+ English–Chinese parallel speech queries organized into three escalating tasks (fine-grained control, intra-utterance variation, and context-aware adaptation).
  • For more reliable assessment, the authors build a pairwise comparison pipeline in which an LALM-based judge compares each candidate response against a fixed baseline, using relative preference rather than absolute scoring to reduce subjectivity and human annotation cost (a minimal sketch follows this list).
  • Experiments show that current LALMs have major weaknesses: even strong proprietary models struggle with comprehensive static control and dynamic modulation of paralinguistic features, and misinterpreting paralinguistic cues explains 43.3% of errors in situational dialogue.
  • The results highlight the need for more robust paralinguistic modeling to build voice assistants that better align with human communication behavior.
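
To make the pairwise pipeline in the third point concrete, here is a minimal sketch of what such a judging loop could look like. All names here (`SpeechQuery`, `pairwise_win_rate`, `llm_judge_stub`) are illustrative assumptions, not from the paper, and the stub stands in for an actual LALM-based judge call.

```python
from dataclasses import dataclass
from typing import Callable, Literal

# The judge returns which response it prefers for a given query.
Verdict = Literal["candidate", "baseline", "tie"]

@dataclass
class SpeechQuery:
    """One benchmark item (illustrative schema, not the paper's)."""
    query_id: str
    audio_prompt: bytes          # the input speech query (English or Chinese)
    target_features: list[str]   # paralinguistic features the response should realize

def pairwise_win_rate(
    queries: list[SpeechQuery],
    candidate: dict[str, bytes],   # query_id -> candidate model's speech response
    baseline: dict[str, bytes],    # query_id -> fixed baseline's speech response
    judge: Callable[[SpeechQuery, bytes, bytes], Verdict],
) -> dict[str, float]:
    """Aggregate relative preference of a candidate over a fixed baseline."""
    tally = {"candidate": 0, "baseline": 0, "tie": 0}
    for q in queries:
        tally[judge(q, candidate[q.query_id], baseline[q.query_id])] += 1
    return {k: v / len(queries) for k, v in tally.items()}

def llm_judge_stub(query: SpeechQuery, cand: bytes, base: bytes) -> Verdict:
    # Placeholder: a real judge would prompt an LALM with both audio
    # responses plus the target features and ask which realizes them better.
    return "tie"

if __name__ == "__main__":
    qs = [SpeechQuery("q1", b"<audio>", ["whisper", "hesitant"])]
    print(pairwise_win_rate(qs, {"q1": b"<audio>"}, {"q1": b"<audio>"}, llm_judge_stub))
```

Because every candidate is measured against the same fixed baseline, the reported number is a relative preference (a win/tie/loss rate) rather than an absolute quality score, which is exactly the subjectivity reduction the authors describe.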

Abstract

Paralinguistic cues are essential for natural human-computer interaction, yet their evaluation in Large Audio-Language Models (LALMs) remains limited by coarse feature coverage and the inherent subjectivity of assessment. To address these challenges, we introduce SpeechParaling-Bench, a comprehensive benchmark for paralinguistic-aware speech generation. It expands existing coverage from fewer than 50 to over 100 fine-grained features, supported by more than 1,000 English-Chinese parallel speech queries, and is organized into three progressively challenging tasks: fine-grained control, intra-utterance variation, and context-aware adaptation. To enable reliable evaluation, we further develop a pairwise comparison pipeline, in which candidate responses are evaluated against a fixed baseline by an LALM-based judge. By framing evaluation as relative preference rather than absolute scoring, this approach mitigates subjectivity and yields more stable and scalable assessments without costly human annotation. Extensive experiments reveal substantial limitations in current LALMs. Even leading proprietary models struggle with comprehensive static control and dynamic modulation of paralinguistic features, while failure to correctly interpret paralinguistic cues accounts for 43.3% of errors in situational dialogue. These findings underscore the need for more robust paralinguistic modeling toward human-aligned voice assistants.