
Calibration-Reasoning Framework for Descriptive Speech Quality Assessment

arXiv cs.CL / 3/12/2026


Key Points

  • The paper presents a calibration stage that tunes an audio foundation model to predict predefined perceptual dimensions for descriptive speech quality assessment.
  • It introduces a reinforcement learning stage using Group Relative Policy Optimization (GRPO) with dimension-specific rewards to improve the accuracy of descriptions and the temporal localization of quality issues.
  • The approach achieves state-of-the-art results, including 0.71 mean PCC on QualiSpeech and a 13% MOS prediction improvement driven by RL-based reasoning.
  • The method enables finer-grained detection and time-localization of audio artifacts, advancing explainable speech quality assessment.
  • This work demonstrates how calibration and RL-based reasoning can adapt large language models for audio quality analysis.
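The core of GRPO is that no value network is needed: each prompt is answered by a group of rollouts, and each rollout's advantage is its reward normalized against the group's statistics. The sketch below illustrates that idea with dimension-specific rewards combined into a scalar; the reward dimensions, weights, and values are hypothetical placeholders, not the paper's actual reward design.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: normalize each rollout's reward by
    the mean and standard deviation of its group of rollouts."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def combined_reward(dim_scores, weights):
    """Combine dimension-specific rewards (e.g. description accuracy,
    temporal localization) into one scalar via a weighted sum.
    The weighting scheme here is an assumption for illustration."""
    return sum(w * s for w, s in zip(weights, dim_scores))

# Four rollouts for one prompt, each scored on two hypothetical
# dimensions: (description accuracy, localization accuracy).
rollout_dims = [(0.8, 0.6), (0.5, 0.9), (0.2, 0.3), (0.9, 0.7)]
weights = (0.5, 0.5)

rewards = [combined_reward(d, weights) for d in rollout_dims]
advantages = grpo_advantages(rewards)
# Advantages are zero-mean across the group, so rollouts that beat
# their group are reinforced and the rest are pushed down.
```

Because the baseline comes from the group itself, a rollout is rewarded only for outperforming its siblings on the same audio clip, which is what lets dimension-specific rewards sharpen both descriptions and time localization.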

Abstract

Explainable speech quality assessment requires moving beyond Mean Opinion Scores (MOS) to analyze the underlying perceptual dimensions. To address this, we introduce a novel post-training method that tailors a foundational Audio Large Language Model for multidimensional reasoning, detection, and classification of audio artifacts. First, a calibration stage aligns the model to predict predefined perceptual dimensions. Second, a reinforcement learning stage leverages Group Relative Policy Optimization (GRPO) with dimension-specific rewards to substantially enhance the accuracy of descriptions and the temporal localization of quality issues. With this approach we reach state-of-the-art results: a 0.71 mean PCC on the multidimensional QualiSpeech benchmark and a 13% improvement in MOS prediction driven by RL-based reasoning. Furthermore, our fine-grained GRPO rewards substantially advance the model's ability to pinpoint and classify audio artifacts in time.
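The headline metric, mean PCC, averages the Pearson correlation between predicted and human scores over each perceptual dimension. A minimal sketch of that evaluation, with made-up dimension names and scores purely for illustration (not data from the paper):

```python
import numpy as np

def pearson_cc(pred, target):
    """Pearson correlation coefficient between predicted and human scores."""
    p = np.asarray(pred, dtype=float)
    t = np.asarray(target, dtype=float)
    p = p - p.mean()
    t = t - t.mean()
    return float((p @ t) / (np.sqrt((p @ p) * (t @ t)) + 1e-12))

# Hypothetical per-dimension scores for four test clips.
preds = {"noisiness":  [3.1, 2.0, 4.5, 1.2],
         "coloration": [2.2, 3.8, 4.0, 1.5]}
humans = {"noisiness":  [3.0, 2.2, 4.4, 1.0],
          "coloration": [2.0, 4.0, 3.9, 1.8]}

# Mean PCC: average the per-dimension correlations.
mean_pcc = float(np.mean([pearson_cc(preds[d], humans[d]) for d in preds]))
```

A mean PCC of 0.71 under this scheme means the model's per-dimension scores track human ratings with strong, though not perfect, linear agreement averaged across dimensions.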