Structured Disagreement in Health-Literacy Annotation: Epistemic Stability, Conceptual Difficulty, and Agreement-Stratified Inference
arXiv cs.CL / April 23, 2026
Key Points
- The paper argues that common NLP annotation pipelines assume a single ground-truth label and often treat disagreement as noise, whereas perspectivist approaches view disagreement as potentially informative.
- The study draws on 6,323 graded health-literacy annotations of open-ended COVID-19 responses from Ecuador and Peru; annotators assigned proportional correctness scores against normative public-health guidelines, capturing the full distribution of judgments rather than a single label.
- Variance decomposition shows that question-level conceptual difficulty explains substantially more disagreement than annotator identity, suggesting disagreement is driven by the task rather than individual raters.
- Agreement-stratified results indicate that effects such as country, education, and urban-rural differences can change magnitude and even reverse direction at different levels of inter-annotator agreement.
- The authors conclude that graded interpretive tasks contain both epistemically stable and unstable components, and that strong perspectivist modeling is statistically necessary to avoid misleading conclusions from aggregated labels.
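The variance-decomposition finding above can be illustrated with a minimal sketch. This is not the authors' code: it generates synthetic proportional-correctness scores in which question-level difficulty (by construction) varies more than annotator bias, then compares the between-question and between-annotator variance components via simple group means.

```python
import random
import statistics

# Illustrative sketch with synthetic data (not the paper's pipeline):
# decompose annotation variance into question-level and annotator-level
# components by comparing the spread of group means.
random.seed(0)

n_questions, n_annotators = 30, 10
# Assumption mirroring the paper's finding: question difficulty effects
# are larger than annotator identity effects.
q_effect = [random.gauss(0, 0.20) for _ in range(n_questions)]
a_effect = [random.gauss(0, 0.05) for _ in range(n_annotators)]

# Proportional correctness scores in [0, 1] for every (question, annotator) pair.
scores = {}
for q in range(n_questions):
    for a in range(n_annotators):
        s = 0.5 + q_effect[q] + a_effect[a] + random.gauss(0, 0.05)
        scores[(q, a)] = min(1.0, max(0.0, s))

grand = statistics.mean(scores.values())
q_means = [statistics.mean(scores[(q, a)] for a in range(n_annotators))
           for q in range(n_questions)]
a_means = [statistics.mean(scores[(q, a)] for q in range(n_questions))
           for a in range(n_annotators)]

# Between-group variance of each factor's means around the grand mean.
var_question = statistics.pvariance(q_means, mu=grand)
var_annotator = statistics.pvariance(a_means, mu=grand)

print(f"question-level variance:  {var_question:.4f}")
print(f"annotator-level variance: {var_annotator:.4f}")
```

Under these assumed effect sizes, the question-level component dominates, which is the pattern the paper reports: disagreement tracks what is being asked, not who is rating.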