Qualitative Evaluation of Language Model Rescoring in Automatic Speech Recognition

arXiv cs.CL / 5/1/2026

Key Points

  • The paper argues that evaluating automatic speech recognition (ASR) systems solely with word error rate (WER) is insufficient for understanding transcription errors in depth.
  • It proposes studying the effect of language-model rescoring in ASR by adding NLP-style metrics beyond WER.
  • It introduces two targeted measures: POSER (Part-of-speech Error Rate), which highlights morpho-syntactic (grammatical) error patterns, and EmbER (Embedding Error Rate), which weights errors by the semantic distance between incorrect and intended words.
  • The approach aims to reveal how language models contribute linguistically when applied in a posterior rescoring step over ASR hypotheses (a minimal sketch of such rescoring follows this list).
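
To make the rescoring step concrete, here is a minimal sketch of N-best hypothesis rescoring with an external language model, assuming hypotheses arrive as (text, first-pass score) pairs; the interpolation weight and the toy LM scores are illustrative assumptions, not the paper's setup.

```python
def rescore(nbest, lm_logprob, lm_weight=0.5):
    """Rerank N-best ASR hypotheses by interpolating each hypothesis's
    first-pass (acoustic/decoder) score with an external LM log-probability.
    `nbest` is a list of (hypothesis_text, first_pass_score) pairs."""
    return max(nbest, key=lambda item: item[1] + lm_weight * lm_logprob(item[0]))

# Toy example: the first pass slightly prefers the ungrammatical hypothesis,
# but a stub LM that scores fluency flips the ranking after interpolation.
nbest = [("the cat sit on mat", -12.0), ("the cat sat on the mat", -12.5)]
toy_lm = {"the cat sit on mat": -9.0, "the cat sat on the mat": -6.0}.get
best, _ = rescore(nbest, toy_lm)
print(best)  # "the cat sat on the mat": -12.5 + 0.5*(-6.0) beats -12.0 + 0.5*(-9.0)
```

In a real pipeline the stub would be an n-gram or neural LM; the paper's question is what such reranking changes linguistically, beyond the raw WER delta.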

Abstract

Evaluating automatic speech recognition (ASR) systems is a classical but difficult and still-open problem; in practice, evaluation often boils down to the word error rate (WER) alone. However, this metric suffers from many limitations and does not allow an in-depth analysis of automatic transcription errors. In this paper, we propose to study and understand the impact of language-model rescoring in ASR systems by means of several metrics commonly used in other natural language processing (NLP) tasks, in addition to the WER. In particular, we introduce two measures related to the morpho-syntactic and semantic aspects of transcribed words: 1) POSER (Part-of-speech Error Rate), which highlights grammatical error patterns, and 2) EmbER (Embedding Error Rate), which modifies the WER by weighting each wrongly transcribed word according to its semantic distance from the intended one. These metrics illustrate the linguistic contributions of language models applied in a posterior rescoring step over transcription hypotheses.
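
To make the two proposed measures concrete, here is a hedged sketch: POSER applies the standard WER edit-distance alignment to part-of-speech tag sequences instead of words, and EmbER reduces the cost of substitutions between semantically close words. The spaCy tagger/vectors and the threshold and weight values below are illustrative assumptions; the paper's exact tooling and parameters may differ.

```python
import spacy

nlp = spacy.load("en_core_web_md")  # medium English model: POS tagger + word vectors

def edit_distance(ref, hyp, sub_cost=lambda r, h: 0.0 if r == h else 1.0):
    """Levenshtein distance over token lists with a pluggable substitution cost."""
    d = [[0.0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = float(i)
    for j in range(len(hyp) + 1):
        d[0][j] = float(j)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                            # deletion
                          d[i][j - 1] + 1.0,                            # insertion
                          d[i - 1][j - 1] + sub_cost(ref[i - 1], hyp[j - 1]))
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    """Standard word error rate from the edit-distance alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def poser(reference, hypothesis):
    """Part-of-speech Error Rate: WER computed over POS tag sequences,
    so substitutions that preserve the grammatical category cost nothing here."""
    ref_tags = [t.pos_ for t in nlp(reference)]
    hyp_tags = [t.pos_ for t in nlp(hypothesis)]
    return edit_distance(ref_tags, hyp_tags) / len(ref_tags)

def ember(reference, hypothesis, threshold=0.7, weight=0.4):
    """Embedding Error Rate: a substitution between words whose vector
    similarity exceeds `threshold` counts as only a fraction (`weight`)
    of an error. Both values are illustrative, not the paper's."""
    def sub_cost(r, h):
        if r == h:
            return 0.0
        sim = nlp.vocab[r].similarity(nlp.vocab[h])  # cosine similarity of word vectors
        return weight if sim >= threshold else 1.0
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp, sub_cost) / len(ref)
```

Under this reading, two hypotheses with identical WER can diverge sharply on POSER and EmbER, which is exactly the gap these metrics exploit to characterize what language-model rescoring actually fixes.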