The Hrunting of AI: Where and How to Improve English Dialectal Fairness

arXiv cs.CL / 3/17/2026

Key Points

  • The paper shows that improving LLM performance on English dialects is hampered by data scarcity and by how human-model agreement affects evaluation results.
  • It evaluates four dialect groups (Yorkshire, Geordie, Cornish, and African-American Vernacular English) with West Frisian as a control to study data quality and availability effects.
  • The study finds that LLM-human agreement on generation quality mirrors human-human agreement patterns, influencing the reliability of LLM-as-a-judge metrics.
  • Fine-tuning does not eradicate this pattern and may even amplify dialect-related evaluation biases, though some models can still generate useful dialect-specific data to support scalability.
  • The authors call for careful data evaluation and the development of new tools to address scarcity and enable fair, inclusive improvement of LLMs for dialects.

Abstract

It is known that large language models (LLMs) underperform on English dialects, and that improving them is difficult due to data scarcity. In this work we investigate how data quality and availability affect the feasibility of improving LLMs in this context. To do so, we evaluate three rarely studied English dialects (Yorkshire, Geordie, and Cornish), plus African-American Vernacular English, with West Frisian as a control. We find that human-human agreement on the quality of LLM generations directly impacts LLM-as-a-judge performance: LLM-human agreement mimics the human-human agreement pattern, as do metrics such as accuracy. This is a problem because LLM-human agreement measures an LLM's alignment with the human consensus, and it therefore raises questions about the feasibility of improving LLM performance in locales where small populations induce low agreement. We also note that fine-tuning does not eradicate, and may even amplify, this pattern in English dialects. We nevertheless find encouraging signals, such as some LLMs' ability to generate high-quality data, which enables scalability. We argue that data must be carefully evaluated to ensure fair and inclusive LLM improvement and that, where scarcity persists, new tools are needed to handle the pattern we found.
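The abstract's core argument hinges on comparing two agreement statistics: how much humans agree with each other on generation quality, and how much an LLM judge agrees with those humans. As a minimal sketch of that comparison (the paper's exact agreement metric is not specified in this summary; Cohen's kappa is used here purely as an illustration, and the labels are hypothetical):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two label sequences, corrected for chance."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality judgments on ten LLM generations (1 = acceptable, 0 = not).
human_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
human_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
llm     = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]

hh = cohens_kappa(human_a, human_b)  # human-human agreement
lh = cohens_kappa(human_b, llm)      # LLM-human agreement
```

The paper's finding is that `lh` tends to track `hh`: where annotators from a small dialect community disagree among themselves, the LLM judge's agreement with any one of them drops too, which caps the reliability of LLM-as-a-judge evaluation in exactly those locales.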