Fine-Grained Perspectives: Modeling Explanations with Annotator-Specific Rationales

arXiv cs.CL / 4/24/2026


Key Points

  • The paper proposes a framework that jointly models annotator-specific label predictions and the explanations (rationales) those annotators provide, using them as fine-grained signals of individual perspectives.
  • It introduces a training and prediction setup that conditions on both annotator identity and demographic metadata via a representation-level “User Passport” mechanism, aiming to personalize model behavior (a minimal sketch of this conditioning follows the list).
  • Two explainer architectures are presented: a post-hoc prompt-based explainer and a prefixed bridge explainer that transfers annotator-conditioned classifier representations into a generative model.
  • Experiments on an NLI dataset with disaggregated annotations and annotator explanations show that explanation-aware modeling improves predictive performance. The prefixed bridge method produces more stable label alignment and higher semantic consistency, while the post-hoc method yields stronger lexical similarity.
  • Overall, the work advances perspectivist modeling by integrating annotator-specific rationales into both the predictive and generative parts of the system to represent disagreement more faithfully.
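The paper describes the User Passport only at the level of the summary above, so the following is a minimal sketch of one plausible realization, assuming a HuggingFace-style transformer encoder whose pooled representation is concatenated with learned annotator-identity and demographic embeddings before classification. All names here (`UserPassportClassifier`, `passport_dim`, `demog_cardinalities`) are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class UserPassportClassifier(nn.Module):
    """Hypothetical sketch: fuse annotator identity and demographic
    metadata with a pooled text representation before classification."""

    def __init__(self, encoder, hidden_dim, n_annotators,
                 demog_cardinalities, n_labels=3, passport_dim=64):
        super().__init__()
        self.encoder = encoder  # assumed: a pretrained HF-style encoder
        # One learned embedding per annotator identity.
        self.annotator_emb = nn.Embedding(n_annotators, passport_dim)
        # One embedding table per categorical demographic attribute.
        self.demog_embs = nn.ModuleList(
            [nn.Embedding(card, passport_dim) for card in demog_cardinalities])
        fused_dim = hidden_dim + passport_dim * (1 + len(demog_cardinalities))
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, n_labels))

    def forward(self, input_ids, attention_mask, annotator_id, demog_ids):
        # Pooled premise/hypothesis representation (here: the first token).
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        parts = [h, self.annotator_emb(annotator_id)]
        # demog_ids: (batch, n_attributes) integer codes, one per attribute.
        parts += [emb(demog_ids[:, i]) for i, emb in enumerate(self.demog_embs)]
        # Annotator-conditioned NLI logits for this (text, annotator) pair.
        return self.classifier(torch.cat(parts, dim=-1))
```

Concatenation is only one way to inject the passport; additive or FiLM-style conditioning would fit the same interface.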

Abstract

Beyond exploring disaggregated labels for modeling perspectives, annotator rationales provide fine-grained signals of individual perspectives. In this work, we propose a framework for jointly modeling annotator-specific label prediction and corresponding explanations, fine-tuned on annotator-provided rationales. Using a dataset with disaggregated natural language inference (NLI) annotations and annotator-provided explanations, we condition predictions on both annotator identity and demographic metadata through a representation-level User Passport mechanism. We further introduce two explainer architectures: a post-hoc prompt-based explainer and a prefixed bridge explainer that transfers annotator-conditioned classifier representations directly into a generative model. This design enables explanation generation aligned with individual annotator perspectives. Our results show that incorporating explanation modeling substantially improves predictive performance over a baseline annotator-aware classifier, with the prefixed bridge approach achieving more stable label alignment and higher semantic consistency, while the post-hoc approach yields stronger lexical similarity. These findings indicate that modeling explanations as expressions of fine-grained perspective provides a richer and more faithful representation of disagreement. The proposed approaches advance perspectivist modeling by integrating annotator-specific rationales into both predictive and generative components.
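The abstract describes the prefixed bridge explainer as transferring annotator-conditioned classifier representations directly into a generative model. Below is a minimal sketch of that bridge idea, assuming a learned linear projection from the fused classifier vector into a fixed number of soft prefix embeddings prepended to a HuggingFace-style causal LM; the projection, prefix length, and names (`PrefixBridgeExplainer`, `fused_repr`) are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PrefixBridgeExplainer(nn.Module):
    """Hypothetical sketch: map an annotator-conditioned classifier
    representation into soft prefix embeddings for a generative decoder."""

    def __init__(self, decoder, fused_dim, d_model, prefix_len=8):
        super().__init__()
        self.decoder = decoder  # assumed: a pretrained HF-style causal LM
        self.prefix_len = prefix_len
        # Project the fused classifier vector into prefix_len soft tokens.
        self.bridge = nn.Linear(fused_dim, prefix_len * d_model)

    def forward(self, fused_repr, explanation_ids, explanation_mask):
        b = fused_repr.size(0)
        prefix = self.bridge(fused_repr).view(b, self.prefix_len, -1)
        # Embed the gold explanation tokens and prepend the soft prefix.
        tok = self.decoder.get_input_embeddings()(explanation_ids)
        inputs_embeds = torch.cat([prefix, tok], dim=1)
        prefix_mask = torch.ones(b, self.prefix_len,
                                 device=explanation_mask.device,
                                 dtype=explanation_mask.dtype)
        mask = torch.cat([prefix_mask, explanation_mask], dim=1)
        # Prefix positions carry no LM loss; -100 is ignored by the loss.
        labels = torch.cat(
            [explanation_ids.new_full((b, self.prefix_len), -100),
             explanation_ids], dim=1)
        return self.decoder(inputs_embeds=inputs_embeds,
                            attention_mask=mask, labels=labels)
```

Under this reading, training the bridge end-to-end lets gradients from explanation generation flow back into the annotator-conditioned representation, which is one plausible reason for the more stable label alignment the summary reports; the post-hoc prompt-based explainer, by contrast, leaves the classifier untouched.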