"This Wasn't Made for Me": Recentering User Experience and Emotional Impact in the Evaluation of ASR Bias
arXiv cs.CL / 4/24/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- Research on automatic speech recognition (ASR) bias has often centered on error rates for underrepresented dialects; this study instead examines the human and emotional consequences of those system failures.
- User experience studies conducted in four U.S. locations show that many participants feel ASR does not account for their cultural backgrounds and that they must continually adjust how they speak in order to use it.
- Although participants report frustration, annoyance, and sometimes a sense of personal inadequacy, they still hold high expectations for ASR and are willing to help improve the models.
- Qualitative findings highlight “invisible labor” such as code-switching, hyper-articulation, and emotional management; the authors argue that fairness evaluations based on accuracy alone (sketched below) miss harms such as emotional labor, cognitive burden, and psychological toll.
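For readers unfamiliar with the accuracy-only evaluations the paper critiques, here is a minimal sketch of a per-group word error rate (WER) comparison, the standard metric behind most ASR fairness benchmarks. The group labels, transcripts, and the `word_error_rate` helper are hypothetical illustrations, not the authors' code; the point is that a metric like this captures transcription errors but none of the adaptation labor or emotional cost described above.

```python
# Sketch of an "accuracy-only" fairness check: report word error rate (WER)
# per dialect group. All data below is hypothetical, for illustration only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via word-level Levenshtein distance (subs + insertions + deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation data: (reference transcript, ASR output) per group.
samples = {
    "Group A": [("turn on the kitchen lights", "turn on the kitchen lights")],
    "Group B": [("turn on the kitchen lights", "turn on the chicken lights")],
}

for group, pairs in samples.items():
    wers = [word_error_rate(ref, hyp) for ref, hyp in pairs]
    print(f"{group}: mean WER = {sum(wers) / len(wers):.2f}")
```

A gap in the printed numbers would flag disparate accuracy, but nothing in this evaluation registers the code-switching, hyper-articulation, or emotional management users perform to get the low-WER result in the first place.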