HATS: An Open Dataset Integrating Human Perception Applied to the Evaluation of Automatic Speech Recognition Metrics

arXiv cs.CL / 5/1/2026


Key Points

  • The paper argues that standard ASR evaluation, especially word error rate (WER), is too limited to fully reflect how human users perceive transcription quality (a minimal WER sketch follows this list).
  • It introduces HATS, a newly released, manually annotated French dataset capturing human perception of transcription errors across outputs from multiple ASR systems.
  • To build the dataset, 143 human annotators chose which of two ASR hypotheses they preferred, enabling study of how human judgments align with different metric types.
  • The research analyzes relationships between human preferences and both lexical and embedding-based metrics (including methods like BERTScore and semantic distance), focusing on which metrics best correlate with perceived quality.
  • Overall, the work provides data and analysis to move ASR metric evaluation closer to human perception rather than purely transcript- or system-oriented scoring.
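To ground the first point, here is a minimal sketch of the conventional word-level WER computation the paper critiques. The function name and example sentences are illustrative only; nothing here is taken from HATS.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[-1][-1] / max(len(ref), 1)

# Two errors with the same WER cost can matter very differently to a reader,
# which is the limitation the paper targets:
ref = "le chat dort sur le canapé"
print(wer(ref, "le chat dort sur le canapé"))   # 0.0
print(wer(ref, "le chien dort sur le canapé"))  # ~0.17 (one substitution)
```

Because WER counts every word-level edit equally, it assigns the same penalty to a harmless variant and to a meaning-destroying substitution, which motivates the human-perception study summarized above.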

Abstract

Conventionally, Automatic Speech Recognition (ASR) systems are evaluated on their ability to correctly recognize each word contained in a speech signal. In this context, the word error rate (WER) metric is the reference for evaluating speech transcripts. Several studies have shown that this measure is too limited to correctly evaluate an ASR system, which has led to the proposal of other metric variants (weighted WER, BERTScore, semantic distance, etc.). However, these remain system-oriented, even when the transcripts are intended for humans. In this paper, we first present Human Assessed Transcription Side-by-side (HATS), an original French dataset manually annotated in terms of human perception of transcription errors produced by various ASR systems. 143 humans were asked to choose the better automatic transcription out of two hypotheses. We then investigate the relationship between human preferences and various ASR evaluation metrics, including lexical and embedding-based ones, the latter being those that supposedly correlate best with human perception.
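
One way to relate side-by-side human choices to a metric, as the abstract describes, is to measure how often the metric prefers the same hypothesis as the annotators. The sketch below assumes a hypothetical record layout (it is not the published HATS schema) and reuses the wer() function sketched above; any metric can be plugged in, with lower_is_better=False for scores where higher means better (e.g., a BERTScore F1).

```python
from typing import Callable, Dict, List

def agreement_rate(items: List[Dict], metric: Callable[[str, str], float],
                   lower_is_better: bool = True) -> float:
    """Fraction of pairs where the metric prefers the same hypothesis as the humans.
    Each item holds a reference, two ASR hypotheses, and the human choice ('A' or 'B').
    Ties arbitrarily go to 'B' in this sketch."""
    hits = 0
    for item in items:
        score_a = metric(item["reference"], item["hyp_a"])
        score_b = metric(item["reference"], item["hyp_b"])
        if lower_is_better:
            metric_choice = "A" if score_a < score_b else "B"
        else:
            metric_choice = "A" if score_a > score_b else "B"
        hits += metric_choice == item["human_choice"]
    return hits / len(items)

# Usage with invented toy data, purely for illustration:
toy = [{"reference": "le chat dort", "hyp_a": "le chat dort",
        "hyp_b": "le chien dort", "human_choice": "A"}]
print(agreement_rate(toy, wer))  # 1.0 on this toy pair
```

Comparing such agreement rates across lexical and embedding-based metrics is one simple way to test which metric family tracks perceived transcription quality most closely.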