Quantifying and Predicting Disagreement in Graded Human Ratings

arXiv cs.CL / 5/5/2026

📰 News · Models & Research

Key Points

  • The paper analyzes how disagreement varies across items in graded human ratings of inappropriate language, such as perceptions of offensive, hateful, and toxic language.
  • It tests whether annotation disagreement levels can be predicted using textual features and proposes an “Opposition Index” to measure annotator perspective opposition per item.
  • The results show a moderate positive correlation between model-estimated variance and observed annotation variance, indicating that text-based signals partially capture human disagreement.
  • Two variance-prediction approaches—directly predicting variance and inferring it from predicted annotation distributions—achieve comparable performance.
  • For predicting opposing perspectives, items with high Opposition Index values are harder to predict and models tend to underestimate these disagreements.
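The second variance-prediction approach listed above, inferring variance from a predicted annotation distribution, can be sketched as follows. This is a generic illustration, not the paper's implementation; the 1–5 rating scale and the example probabilities are assumptions.

```python
# Variance of a graded rating implied by a predicted annotation distribution.
# The 5-point scale and the example probabilities are illustrative assumptions,
# not values from the paper.

def distribution_variance(levels, probs):
    """Mean and variance of a categorical distribution over rating levels."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    mean = sum(l * p for l, p in zip(levels, probs))
    var = sum(p * (l - mean) ** 2 for l, p in zip(levels, probs))
    return mean, var

# Example: a polarized predicted distribution on a hypothetical 1-5 scale.
levels = [1, 2, 3, 4, 5]
probs = [0.4, 0.1, 0.0, 0.1, 0.4]  # mass at both extremes -> high variance
mean, var = distribution_variance(levels, probs)
print(mean, var)  # -> 3.0 3.4
```

A polarized distribution like this one yields a variance near the scale's maximum, which is the kind of item where annotator opinions split.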

Abstract

It is increasingly recognized that human annotators do not always agree, and such disagreement is inherent in many annotation tasks. However, not all instances in a given task elicit the same degree of opinion divergence. In this paper, we investigate annotation variation patterns in graded human ratings of inappropriate language, including offensive language, hate speech, and toxic language perception. We examine whether the degree of annotation disagreement can be predicted from textual features. We further propose the Opposition Index, a metric that quantifies perspective opposition among annotators on a given item, and investigate the predictability of instances with potentially opposing human opinions. Our results show a moderate positive correlation between estimated and observed annotation variance. We find that two approaches achieve comparable performance in variance prediction: directly predicting the variance value and estimating it from predicted annotation distributions. Our results on opposing-perspective prediction show that items with high Opposition Index values are more difficult to predict and are often underestimated by models.