From Ground Truth to Measurement: A Statistical Framework for Human Labeling

arXiv cs.CL / 4/10/2026


Key Points

  • The paper argues that supervised ML should treat human labels as a measurement process rather than assuming they are ground-truth, because labeling introduces systematic variation from ambiguity, interpretation differences, and mistakes.
  • It proposes a statistical framework that decomposes labeling outcomes into interpretable components: instance difficulty, annotator bias, situational noise, and relational alignment.
  • The framework extends classical measurement-error models to handle both shared and individualized notions of “truth,” allowing a diagnostic to determine which error regime best matches a given task.
  • Experiments on a multi-annotator natural language inference dataset find evidence for all four components and show the approach can improve understanding of what models actually learn.
  • The authors outline implications for data-centric ML and suggest the framework can support a more systematic “science of labeling.”

Abstract

Supervised machine learning assumes that labeled data provide accurate measurements of the concepts models are meant to learn. Yet in practice, human labeling introduces systematic variation arising from ambiguous items, divergent interpretations, and simple mistakes. Machine learning research commonly treats all disagreement as noise, which obscures these distinctions and limits our understanding of what models actually learn. This paper reframes annotation as a measurement process and introduces a statistical framework for decomposing labeling outcomes into interpretable sources of variation: instance difficulty, annotator bias, situational noise, and relational alignment. The framework extends classical measurement-error models to accommodate both shared and individualized notions of truth, reflecting both the traditional interpretation of error and the human-label-variation interpretation, and provides a diagnostic for assessing which regime better characterizes a given task. Applying the proposed model to a multi-annotator natural language inference dataset, we find empirical evidence for all four theorized components and demonstrate the effectiveness of our approach. We conclude with implications for data-centric machine learning and outline how this approach can guide the development of a more systematic science of labeling.
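To make the decomposition idea concrete, here is a minimal sketch of how labeling outcomes could be split into item, annotator, and residual variance components in a fully crossed design. This is an illustration under our own simplifying assumptions (continuous labels, one label per item–annotator pair, a simple ANOVA-style method-of-moments estimator), not the paper's actual model; with only one label per cell, the relational-alignment and situational-noise components are confounded and land together in the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annotators = 200, 30

# Hypothetical generative model (our illustration, not the paper's spec):
#   y[i, j] = mu + difficulty[i] + bias[j] + alignment[i, j] + noise[i, j]
sd_item, sd_ann, sd_align, sd_noise = 1.0, 0.5, 0.3, 0.2
difficulty = rng.normal(0, sd_item, n_items)          # instance difficulty
bias = rng.normal(0, sd_ann, n_annotators)            # annotator bias
alignment = rng.normal(0, sd_align, (n_items, n_annotators))  # relational alignment
noise = rng.normal(0, sd_noise, (n_items, n_annotators))      # situational noise
y = difficulty[:, None] + bias[None, :] + alignment + noise

# ANOVA-style variance decomposition for a crossed two-way design
grand = y.mean()
item_means = y.mean(axis=1)
ann_means = y.mean(axis=0)
resid = y - item_means[:, None] - ann_means[None, :] + grand

ms_item = n_annotators * ((item_means - grand) ** 2).sum() / (n_items - 1)
ms_ann = n_items * ((ann_means - grand) ** 2).sum() / (n_annotators - 1)
ms_resid = (resid ** 2).sum() / ((n_items - 1) * (n_annotators - 1))

# Method-of-moments estimates of the variance components
var_item = (ms_item - ms_resid) / n_annotators   # ~ sd_item**2
var_ann = (ms_ann - ms_resid) / n_items          # ~ sd_ann**2
var_resid = ms_resid                             # alignment + noise combined
```

Separating the residual into relational alignment and situational noise would require repeated labels from the same annotator on the same item, which is one reason multi-annotator datasets with replication are valuable for this kind of analysis.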