Incentivizing High-Quality Human Annotations with Golden Questions

arXiv stat.ML / 4/15/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies how to incentivize paid human annotators to produce high-quality data for LLM training by modeling the interaction between a company (the principal) and an annotator (the agent) under limited quality monitoring.
  • It proposes a mechanism in which annotators receive a bonus if a maximum-likelihood estimator (MLE) derived from sampled annotations passes a hypothesis test, linking incentives directly to measured quality (a minimal code sketch of such a rule follows this list).
  • The authors show that, in the principal-agent setting with strategic behavior, the hypothesis-testing detection rate is Θ(1/√(n log n)), rather than the exponential rate familiar from classical large-deviation theory.
  • Based on the theory, the paper defines “golden questions” that should have high certainty and resemble normal annotation items in format, and selects such questions for human preference datasets.
  • Experiments indicate that golden questions reveal annotator behavior more effectively than conventional survey methods such as instructed manipulation checks, making the quality assessment more incentive-compatible.
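
The bonus rule in the second bullet can be made concrete with a small simulation. The sketch below is our own illustration, not the paper's implementation: it assumes each golden question has a known reference answer, treats the annotator's accuracy as a Bernoulli parameter whose MLE is the fraction of golden questions answered consistently, and pays the bonus when a one-sided test rejects a hypothetical "low effort" accuracy p0. The names and parameter values (p0, p1, alpha) are illustrative.

```python
# Minimal sketch of a golden-question bonus rule (illustrative, not the paper's code).
# The annotator answers n golden questions with known reference answers; the MLE of
# their accuracy is the sample fraction, and a one-sided binomial test against a
# hypothetical "low effort" accuracy p0 decides whether the bonus is paid.
from scipy.stats import binom


def bonus_threshold(n: int, p0: float, alpha: float = 0.05) -> int:
    """Smallest number of correct golden answers k* such that
    P(K >= k* | accuracy = p0) <= alpha, i.e. the test rejects 'low effort'."""
    for k in range(n + 1):
        if binom.sf(k - 1, n, p0) <= alpha:  # P(K >= k) under the low-effort hypothesis
            return k
    return n + 1  # no threshold achieves the significance level


def award_bonus(correct: int, n: int, p0: float, alpha: float = 0.05) -> bool:
    """Pay the bonus iff the annotator's MLE (correct / n) clears the test threshold."""
    return correct >= bonus_threshold(n, p0, alpha)


if __name__ == "__main__":
    n, p0, p1 = 50, 0.6, 0.9                 # golden questions; low- vs high-effort accuracy (assumed)
    k_star = bonus_threshold(n, p0)
    power = binom.sf(k_star - 1, n, p1)      # probability a diligent annotator earns the bonus
    print(f"threshold k* = {k_star}, P(bonus | high effort) = {power:.3f}")
```

Running the example prints the pass threshold and the probability that a diligent annotator earns the bonus; the paper's analysis concerns how sharply such a test can separate effort levels once the annotator responds strategically to the rule.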

Abstract

Human-annotated data plays a vital role in training large language models (LLMs), for example in supervised fine-tuning and human preference alignment. However, it is not guaranteed that paid human annotators produce high-quality data. In this paper, we study how to incentivize human annotators to do so. We start from a principal-agent model to capture the dynamics between the company (the principal) and the annotator (the agent), where the principal can only monitor annotation quality by examining n samples. We investigate maximum likelihood estimators (MLEs) and the corresponding hypothesis tests as an incentive device: the agent is given a bonus if the MLE passes the test. By analyzing the variance of the outcome, we show that the strategic behavior of the agent makes this hypothesis testing very different from the traditional setting: unlike the exponential rate given by large deviation theory, the hypothesis-testing rate in the principal-agent model is Θ(1/√(n log n)). Our theory implies two criteria for the “golden questions” used to monitor annotator performance: they should (1) be of high certainty and (2) have a format similar to normal questions. In that light, we select a set of golden questions in human preference data. Through incentive-compatible experiments, we find that annotator behavior is better revealed by these golden questions than by traditional survey techniques such as instructed manipulation checks.
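
To fix notation, one way to write down the mechanism described in the abstract is sketched below. This is our reading of the setup, with symbols (X_i, τ, B, c) chosen for illustration rather than taken from the paper.

```latex
% Illustrative formalization of the bonus mechanism (our notation, not the paper's).
% X_i indicates whether golden question i is answered consistently with the reference;
% the MLE of the annotator's accuracy is the sample mean, and the bonus B is paid
% iff the estimate clears a test threshold \tau chosen by the principal.
\[
  \hat{p}_{\mathrm{MLE}} \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i ,
  \qquad
  \text{bonus paid} \;\iff\; \hat{p}_{\mathrm{MLE}} \ge \tau .
\]
% The agent chooses its effort level e strategically, trading the cost c(e) against
% the expected bonus; this feedback is what separates the resulting testing rate,
% \Theta\!\bigl(1/\sqrt{n \log n}\bigr), from the classical exponential rate.
\[
  \max_{e} \; B \cdot \Pr\!\bigl[\hat{p}_{\mathrm{MLE}} \ge \tau \,\big|\, e \bigr] \;-\; c(e).
\]
```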