How Annotation Trains Annotators: Competence Development in Social Influence Recognition

arXiv cs.CL / 4/6/2026


Key Points

  • The paper studies how annotators’ judgment quality changes over time in a subjective social-influence recognition task, treating competence development as a key lens rather than fixed “ground truth.”
  • Twenty-five annotators from five groups, spanning experts and non-experts, annotated 1,021 dialogues with 20 social influence techniques plus intentions, reactions, and consequences; an initial 150-text subset was annotated twice, before and after the main process, to enable comparison.
  • To measure competence shifts and their downstream effects, the study combines qualitative and quantitative analyses of the annotated data, semi-structured interviews with annotators, self-assessment surveys, and LLM training and evaluation on the comparison subset (a minimal sketch of one such before/after comparison follows this list).
  • Results show a significant increase in annotators' self-perceived competence and confidence, and the observed changes in data quality suggest that the annotation process itself enhances competence, with the effect more pronounced in expert groups.
  • The authors find that these competence-driven annotation changes meaningfully affect the performance of LLMs trained on the resulting labeled data.
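
As a rough illustration of how such a before/after comparison might be quantified, the sketch below computes mean pairwise Cohen's kappa among annotators for each annotation round of the re-annotated 150-text subset and reports the shift. The metric choice, data layout, and toy labels are assumptions for illustration, not the paper's exact protocol.

```python
# Hypothetical sketch: quantify a competence shift as the change in
# inter-annotator agreement on the re-annotated comparison subset.
# Cohen's kappa and the data layout below are assumptions, not the
# paper's documented procedure.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def mean_pairwise_kappa(labels_by_annotator):
    """labels_by_annotator: annotator id -> list of technique labels,
    one per text, in the same order for every annotator."""
    pairs = list(combinations(labels_by_annotator, 2))
    return sum(
        cohen_kappa_score(labels_by_annotator[a], labels_by_annotator[b])
        for a, b in pairs
    ) / len(pairs)

# The same texts labeled before (round_1) and after (round_2) the main
# annotation phase; toy data standing in for the 150-dialogue subset.
round_1 = {"ann1": ["reciprocity", "scarcity", "none"],
           "ann2": ["authority",   "scarcity", "none"]}
round_2 = {"ann1": ["reciprocity", "scarcity", "none"],
           "ann2": ["reciprocity", "scarcity", "none"]}

shift = mean_pairwise_kappa(round_2) - mean_pairwise_kappa(round_1)
print(f"agreement shift (kappa): {shift:+.3f}")  # positive => more consistent labels
```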

Abstract

Human data annotation, especially when involving experts, is often treated as an objective reference. However, many annotation tasks are inherently subjective, and annotators' judgments may evolve over time. This study investigates changes in the quality of annotators' work from a competence perspective during a process of social influence recognition. The study involved 25 annotators from five different groups, including both experts and non-experts, who annotated a dataset of 1,021 dialogues with 20 social influence techniques, along with intentions, reactions, and consequences. An initial subset of 150 texts was annotated twice - before and after the main annotation process - to enable comparison. To measure competence shifts, we combined qualitative and quantitative analyses of the annotated data, semi-structured interviews with annotators, self-assessment surveys, and Large Language Model training and evaluation on the comparison dataset. The results indicate a significant increase in annotators' self-perceived competence and confidence. Moreover, observed changes in data quality suggest that the annotation process may enhance annotator competence and that this effect is more pronounced in expert groups. The observed shifts in annotator competence have a visible impact on the performance of LLMs trained on their annotated data.
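
The reported downstream effect can also be made concrete. The hypothetical sketch below follows the comparison logic: train one model on labels produced before the main annotation phase and one on labels produced after, then evaluate both on the same held-out texts. A TF-IDF plus logistic-regression pipeline stands in for the paper's LLM training, and all data and names are illustrative assumptions.

```python
# Hypothetical stand-in for the paper's LLM comparison: fit the same
# classifier on pre- vs. post-process labels and compare held-out scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

def train_and_eval(train_texts, train_labels, test_texts, test_labels):
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)
    return f1_score(test_labels, model.predict(test_texts), average="macro")

# Toy texts standing in for the comparison subset; the third label changes
# between rounds to mimic a technique annotators learn to recognize.
texts = [
    "You only have one hour left to claim this offer.",
    "Let's just talk it through, no pressure at all.",
    "As a doctor, I strongly advise you to sign right now.",
    "Thanks for explaining, that makes sense to me.",
]
labels_before = ["scarcity", "none", "none",      "none"]
labels_after  = ["scarcity", "none", "authority", "none"]
test_texts, test_labels = ["Our senior expert insists you act on this today."], ["authority"]

f1_pre  = train_and_eval(texts, labels_before, test_texts, test_labels)
f1_post = train_and_eval(texts, labels_after, test_texts, test_labels)
print(f"macro-F1, trained on pre-process labels:  {f1_pre:.3f}")
print(f"macro-F1, trained on post-process labels: {f1_post:.3f}")
```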