Identifying and typifying demographic unfairness in phoneme-level embeddings of self-supervised speech recognition models

arXiv cs.CL / 4/27/2026


Key Points

  • The paper argues that progress in fairer ASR requires a more detailed characterization of phoneme-level encoder errors, especially how embeddings differ across speaker groups (SGs) with different performance.
  • It proposes a framework that distinguishes two phoneme-embedding error types: random/high-variance embedding errors versus systematic embedding bias.
  • The authors find that training phoneme classification probes on a single (often disadvantaged) SG can improve that SG’s performance, indicating SG-level bias in phoneme embeddings.
  • They also show that worse phoneme prediction accuracy correlates with higher phoneme variance, suggesting random error is a key contributor to unfairness.
  • Finally, they report that fairness-oriented fine-tuning using domain enhancing and adversarial training neither reduces random embedding error nor changes the observed probe-training benefits.
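The probe-and-variance analysis in the points above can be illustrated with a minimal sketch. The paper's actual encoders, datasets, and probe architecture are not specified here; the code below uses synthetic stand-in embeddings and a linear probe, and every name in it (`make_group`, `within_class_variance`, the group labels) is a hypothetical construction for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for frame-level encoder embeddings: two speaker
# groups whose class means are slightly shifted relative to each other
# (systematic error / embedding bias) and whose within-class spread
# differs (random error / high variance).
def make_group(n, shift, spread):
    labels = rng.integers(0, 3, size=n)           # 3 toy "phoneme" classes
    means = np.eye(3)[labels] * 4.0 + shift       # class-dependent means
    return means + rng.normal(scale=spread, size=(n, 3)), labels

X_a, y_a = make_group(600, shift=0.0, spread=1.0)  # advantaged group
X_b, y_b = make_group(600, shift=0.8, spread=1.6)  # disadvantaged group

# Probe trained on pooled data vs. probe trained only on group B
# (the "in-domain" probe from the key points above).
pooled = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))
in_domain = LogisticRegression(max_iter=1000).fit(X_b, y_b)

print("group-B accuracy, pooled probe:   ", pooled.score(X_b, y_b))
print("group-B accuracy, in-domain probe:", in_domain.score(X_b, y_b))

# Per-group "phoneme variance": mean within-class variance of embeddings.
def within_class_variance(X, y):
    return np.mean([X[y == c].var(axis=0).mean() for c in np.unique(y)])

print("within-class variance, group A:", within_class_variance(X_a, y_a))
print("within-class variance, group B:", within_class_variance(X_b, y_b))
```

Under these toy assumptions, group B shows both higher within-class variance and a benefit from in-domain probe training, mirroring the two error types the framework distinguishes.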

Abstract

Modern automatic speech recognition (ASR) systems have been observed to perform better for certain speaker groups (SGs) than others, despite recent gains in overall performance. One prerequisite for progress towards fairer ASR is a more nuanced understanding of the types of modeling errors that speech encoder models make, and in particular of how the structure of embeddings differs between high-performance and low-performance SGs. This paper proposes a framework typifying two kinds of error that can occur when ASR systems model phonemes: random error (high variance in phoneme embeddings) versus systematic error (embedding bias). We find that training phoneme classification probes on only a single, typically disadvantaged, SG sometimes improves performance for that SG, which is evidence of SG-level bias in phoneme embeddings. On the other hand, we find that the speakers and SGs with higher phoneme-embedding variance are the same ones with worse phoneme prediction accuracy. We conclude that both types of error are present in phoneme embeddings and that both are candidate causes of SG-level unfairness in ASR, though random error is likely a greater hindrance to fairness than systematic error. Furthermore, we find that fine-tuning encoder models with a fairness-enhancing algorithm (domain enhancing and adversarial training) changes neither the benefits of in-domain phoneme classification probe training nor the measured levels of random embedding error.