Goodness-of-pronunciation without phoneme time alignment

arXiv cs.LG / 3/27/2026


Key Points

  • The paper addresses a challenge in speech evaluation for low-resource languages, where ASR systems typically rely on phoneme timing/alignments that are hard to obtain reliably.
  • It proposes computing phoneme posteriors by mapping ASR hypotheses into a phoneme confusion network, enabling phoneme-related features even when the ASR model is frame-asynchronous and weakly supervised.
  • Instead of requiring phoneme-level time alignment, the method uses word-level speaking rate/duration features and combines phoneme and frame-level representations via a cross-attention architecture.
  • Experiments show performance comparable to standard frame-synchronous feature extraction on the English speechocean762 dataset and on a low-resource Tamil dataset, supporting easier multilingual expansion of speech evaluation.
  • The work is aimed at compatibility between weakly supervised/open-source multilingual ASR models and downstream speech evaluation pipelines where phoneme alignment is otherwise a bottleneck.
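The cross-attention fusion mentioned above can be illustrated with a minimal sketch: phoneme-level vectors act as queries and frame-level acoustic vectors as keys/values, so each phoneme softly gathers evidence from all frames without an explicit phoneme-to-frame time alignment. This is a toy NumPy illustration of the general mechanism, not the paper's architecture; all names and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(phoneme_feats, frame_feats):
    """Attend from phoneme-level queries (P, D) to frame-level
    keys/values (T, D). The softmax weights act as a soft alignment,
    replacing hard phoneme time boundaries."""
    d = frame_feats.shape[-1]
    scores = phoneme_feats @ frame_feats.T / np.sqrt(d)  # (P, T)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ frame_feats                         # (P, D)

rng = np.random.default_rng(0)
P, T, D = 4, 10, 8                    # phonemes, frames, dim (illustrative)
phonemes = rng.normal(size=(P, D))
frames = rng.normal(size=(T, D))
fused = cross_attention(phonemes, frames)
print(fused.shape)  # (4, 8)
```

The key property is that the output is one vector per phoneme regardless of how many frames the utterance has, which is what lets a frame-asynchronous ASR front end feed phoneme-level scoring.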

Abstract

In speech evaluation, an Automatic Speech Recognition (ASR) model often computes time boundaries and phoneme posteriors for input features. However, limited data for ASR training hinders expansion of speech evaluation to low-resource languages. Open-source weakly-supervised models are capable of ASR over many languages, but they are frame-asynchronous and not phonemic, hindering feature extraction for speech evaluation. This paper proposes to overcome incompatibilities for feature extraction with weakly-supervised models, easing expansion of speech evaluation to low-resource languages. Phoneme posteriors are computed by mapping ASR hypotheses to a phoneme confusion network. Word instead of phoneme-level speaking rate and duration are used. Phoneme and frame-level features are combined using a cross-attention architecture, obviating phoneme time alignment. This performs comparably with standard frame-synchronous features on English speechocean762 and low-resource Tamil datasets.
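As a rough illustration of mapping ASR hypotheses to phoneme posteriors, the sketch below weights N-best word hypotheses by their probabilities, expands words to phonemes, and accumulates a distribution over competing phonemes per position. This is a crude stand-in for a confusion network under strong simplifying assumptions (a toy lexicon, hypotheses that expand to equal-length phoneme sequences); the paper's actual construction is more involved.

```python
import math
from collections import defaultdict

# Hypothetical toy lexicon; the real system's word-to-phoneme
# mapping is an assumption here, not taken from the paper.
LEXICON = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"]}

def phoneme_posteriors(nbest):
    """nbest: list of (word_sequence, log_prob) ASR hypotheses.
    Returns, per phoneme position, a posterior over the competing
    phonemes contributed by the weighted hypotheses."""
    total = sum(math.exp(lp) for _, lp in nbest)
    bins = defaultdict(lambda: defaultdict(float))
    for words, lp in nbest:
        phones = [p for w in words for p in LEXICON[w]]
        weight = math.exp(lp) / total
        for i, p in enumerate(phones):
            bins[i][p] += weight
    return {i: dict(d) for i, d in bins.items()}

post = phoneme_posteriors([(["cat"], math.log(0.7)),
                           (["cap"], math.log(0.3))])
print(post[2])  # roughly {'t': 0.7, 'p': 0.3}
```

Positions where all hypotheses agree get a posterior of 1.0 for that phoneme, while disagreements (here "t" vs. "p") yield the graded scores that downstream pronunciation features can consume.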