How Annotation Trains Annotators: Competence Development in Social Influence Recognition
arXiv cs.CL, 2026-04-06
Key Points
- The paper studies how annotators’ judgment quality changes over time in a subjective social-influence recognition task, treating competence development as the central lens rather than assuming a fixed “ground truth.”
- Using 25 annotators split into expert and non-expert groups, the authors annotated 1,021 dialogues with 20 social influence techniques plus intentions, reactions, and consequences, and had annotators re-label an initial 150-text subset both before and after the full annotation process for comparison.
- The study combines qualitative/quantitative assessments, interviews, self-assessment surveys, and LLM-based training/evaluation to measure competence shifts and their downstream effects.
- Results show a significant increase in annotators’ self-perceived competence and confidence, with measurable improvements in annotation quality—especially for expert groups.
- The authors find that these competence-driven annotation changes meaningfully affect the performance of LLMs trained on the resulting labeled data.
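The before/after re-annotation design above implies comparing each annotator's pre- and post-process labels against a reference. The paper's exact metric is not stated in this summary; as an illustrative sketch, one common choice is Cohen's kappa (chance-corrected agreement), shown here on hypothetical technique labels (all data below is invented for illustration):

```python
from collections import Counter

def cohen_kappa(a, b):
    # Cohen's kappa: chance-corrected agreement between two label sequences.
    assert len(a) == len(b) and a, "sequences must be equal-length and non-empty"
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical labels: a reference coding vs. one annotator's pre- and
# post-process labels on the re-annotated subset (invented example data).
ref  = ["flattery", "scarcity", "none", "scarcity", "flattery", "none"]
pre  = ["none",     "scarcity", "none", "flattery", "flattery", "none"]
post = ["flattery", "scarcity", "none", "scarcity", "flattery", "none"]

print(round(cohen_kappa(ref, pre), 2))   # → 0.5
print(round(cohen_kappa(ref, post), 2))  # → 1.0
```

A rise in kappa from the pre- to the post-process pass would be one way to quantify the competence gains the study reports; the actual paper also uses interviews, surveys, and LLM-based evaluation alongside quantitative agreement.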
