Beyond Black-Box Labels: Interpretable Criteria for Diagnosing Subjective NLP Tasks

arXiv cs.CL / April 21, 2026


Key Points

  • The paper addresses a core limitation of subjective NLP datasets: collapsing multiple annotator judgments into a single gold label can hide why disagreement occurs.
  • It introduces a schema-level diagnostic that evaluates expert-designed annotation schemas before committing to gold labels, using only multi-annotator criterion judgments.
  • It distinguishes two failure modes: unstable, hard-to-operationalize criteria versus systematic category overlap that blurs supposedly mutually exclusive labels (sketched in the code after this list).
  • In a persuasive value extraction task on commercial documents, disagreement concentrates in a small set of criteria, and nearly half of covered sentences trigger multiple categories.
  • The diagnostic provides evidence to help teams refine annotation guidelines, adjust the category structure, or even reconsider the overall annotation paradigm.
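
The paper does not ship an implementation, but the two signals it separates are easy to picture. Below is a minimal sketch, assuming binary per-criterion judgments stored as {sentence: {criterion: {annotator: 0/1}}} plus a criterion-to-category mapping; the data layout, the majority-vote activation rule, and all names are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict
from itertools import combinations

def criterion_instability(judgments):
    """Mean pairwise annotator disagreement per criterion.

    judgments: {sentence: {criterion: {annotator: 0 or 1}}} (assumed layout).
    High values flag unstable, hard-to-operationalize criteria.
    """
    per_criterion = defaultdict(list)
    for crits in judgments.values():
        for crit, votes in crits.items():
            pairs = list(combinations(votes.values(), 2))
            if pairs:
                # fraction of annotator pairs that disagree on this sentence
                per_criterion[crit].append(
                    sum(a != b for a, b in pairs) / len(pairs)
                )
    return {c: sum(r) / len(r) for c, r in per_criterion.items()}

def category_overlap_rate(judgments, criterion_to_category):
    """Share of covered sentences that activate two or more categories.

    A category counts as active when a majority of annotators affirm any
    of its criteria -- an assumed aggregation rule, not the paper's.
    """
    covered = multi = 0
    for crits in judgments.values():
        active = {
            criterion_to_category[crit]
            for crit, votes in crits.items()
            if 2 * sum(votes.values()) > len(votes)  # majority says yes
        }
        if active:
            covered += 1
            multi += len(active) >= 2
    return multi / covered if covered else 0.0
```

Pairwise disagreement is the simplest instability proxy here; a chance-corrected statistic such as per-criterion Krippendorff's alpha would serve the same diagnostic role.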

Abstract

Subjective NLP datasets typically aggregate annotator judgments into a single gold label, making it difficult to diagnose whether disagreement reflects unclear criteria, collapsed distinctions, or legitimate plurality. We propose a *schema-level diagnostic* for auditing expert-designed annotation schemas *prior to* gold-label commitment, using only multi-annotator criterion judgments. The diagnostic separates two failure modes: unstable criteria with hard-to-operationalize boundaries, and systematic overlap that blurs the boundaries between mutually exclusive categories. Applied to persuasive value extraction in commercial documents, we find that disagreement is not diffuse: instability concentrates in a few criteria, while nearly half of covered sentences activate multiple categories. These signals align with where domain experts disagree, yielding an evidence-based audit for tightening guidelines, revising category structure, or reconsidering the annotation paradigm.