Attribution Bias in Large Language Models

arXiv cs.AI / 4/8/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces AttriBench, a new quote-attribution benchmark dataset that is balanced across both author fame and demographics to study attribution fairness in a controlled way.
  • Evaluating 11 widely used LLMs under different prompting setups shows quote attribution remains difficult even for frontier models.
  • The study finds large, systematic attribution-accuracy disparities across race, gender, and intersectional demographic groups.
  • It identifies and analyzes “suppression,” a failure mode where models omit attribution entirely despite having authorship information, and shows suppression is common and uneven across demographic groups.
  • The authors propose quote attribution as a benchmark for representational fairness, highlighting gaps that standard accuracy metrics can miss.

Abstract

As Large Language Models (LLMs) are increasingly used to support search and information retrieval, it is critical that they accurately attribute content to its original authors. In this work, we introduce AttriBench, the first fame- and demographically balanced quote-attribution benchmark dataset. By explicitly balancing author fame and demographics, AttriBench enables controlled investigation of demographic bias in quote attribution. Using this dataset, we evaluate 11 widely used LLMs across different prompt settings and find that quote attribution remains a challenging task even for frontier models. We observe large and systematic disparities in attribution accuracy across race, gender, and intersectional groups. We further introduce and investigate suppression, a distinct failure mode in which models omit attribution entirely, even when the model has access to authorship information. We find that suppression is widespread and unevenly distributed across demographic groups, revealing systematic biases not captured by standard accuracy metrics. Our results position quote attribution as a benchmark for representational fairness in LLMs.
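The distinction the abstract draws between accuracy disparities and suppression can be made concrete with a small sketch. The record schema, group labels, and metric definitions below are illustrative assumptions for exposition, not the paper's actual AttriBench format or evaluation code; the key idea is simply that a suppressed response (no attribution at all) is tracked separately from an incorrect one, so per-group suppression rates surface gaps that a single accuracy number would hide.

```python
# Hypothetical sketch of per-group accuracy and suppression-rate metrics
# for a quote-attribution task. Schema and field names are assumptions.
from collections import defaultdict


def group_metrics(records):
    """records: iterable of dicts with keys 'group', 'gold', 'pred'.

    'pred' is None when the model declined to attribute the quote
    (the 'suppression' failure mode, counted separately from errors).
    """
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "suppressed": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        if r["pred"] is None:
            s["suppressed"] += 1
        elif r["pred"] == r["gold"]:
            s["correct"] += 1
    return {
        g: {
            "accuracy": s["correct"] / s["n"],
            "suppression_rate": s["suppressed"] / s["n"],
        }
        for g, s in stats.items()
    }


# Toy data: group A has one correct answer and one suppression;
# group B attributes both quotes correctly.
records = [
    {"group": "A", "gold": "author_x", "pred": "author_x"},
    {"group": "A", "gold": "author_y", "pred": None},
    {"group": "B", "gold": "author_x", "pred": "author_x"},
    {"group": "B", "gold": "author_y", "pred": "author_y"},
]
m = group_metrics(records)
```

On this toy data both groups make no outright attribution errors, yet group A's accuracy is dragged down entirely by suppression (accuracy 0.5, suppression rate 0.5 vs. group B's 1.0 and 0.0), illustrating why the two metrics are reported separately.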