Evaluating LLMs as Human Surrogates in Controlled Experiments

arXiv cs.AI / 4/20/2026

Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tests whether off-the-shelf LLMs can substitute for human participants in behavioral experiments by comparing LLM-generated responses with human responses from a known survey experiment.
  • Human observations are converted into structured prompts, and models produce a single 0–10 accuracy-perception outcome variable without any task-specific training (a minimal sketch of this step appears after this list).
  • The same statistical analysis is applied to both human and synthetic datasets to ensure a fair comparison of experimental inferences.
  • Results show that LLMs replicate several directionally consistent effects seen in humans, but effect sizes and moderation/interaction patterns differ across models.
  • Overall, the study suggests that LLM-generated data can reflect aggregate belief-updating patterns under controlled settings but do not reliably match human-scale effects.
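
To make the pipeline concrete, here is a minimal sketch of the observation-to-prompt step, assuming a headline-accuracy item. The field names (`condition`, `headline`), the prompt wording, and the commented-out `call_llm()` hook are illustrative placeholders, not the paper's actual schema.

```python
import re

def build_prompt(obs: dict) -> str:
    """Turn one human observation's covariates into a structured prompt."""
    return (
        "You are a survey respondent.\n"
        f"Condition: {obs['condition']}\n"
        f"Headline: {obs['headline']}\n"
        "On a scale from 0 to 10, how accurate do you believe this "
        "headline is? Answer with a single integer."
    )

def parse_rating(reply: str) -> int | None:
    """Extract the first integer in [0, 10] from a free-text model reply."""
    match = re.search(r"\b(10|[0-9])\b", reply)
    return int(match.group(1)) if match else None

# One hypothetical observation; the paper maps every human row this way.
obs = {"condition": "treatment", "headline": "Example headline text"}
prompt = build_prompt(obs)
# reply = call_llm(prompt)   # off-the-shelf model call, provider-specific
reply = "I would rate this a 4 out of 10."
print(parse_rating(reply))   # -> 4
```

Because the model only ever sees one observation's covariates rendered as text, no fine-tuning or task-specific training enters the loop; the parser simply recovers the 0–10 outcome variable from the free-form reply.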

Abstract

Large language models (LLMs) are increasingly used to simulate human responses in behavioral research, yet it remains unclear when LLM-generated data support the same experimental inferences as human data. We evaluate this by directly comparing off-the-shelf LLM-generated responses with human responses from a canonical survey experiment on accuracy perception. Each human observation is converted into a structured prompt, and models generate a single 0–10 outcome variable without task-specific training; identical statistical analyses are applied to human and synthetic responses. We find that LLMs reproduce several directional effects observed in humans, but effect magnitudes and moderation patterns vary across models. Off-the-shelf LLMs therefore capture aggregate belief-updating patterns under controlled conditions but do not consistently match human-scale effects, clarifying when LLM-generated data can function as behavioral surrogates.
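
One way to picture the "identical statistical analyses" requirement is a single estimation routine applied verbatim to both datasets. A minimal sketch, assuming a simple treatment-effect OLS; the formula, column names, and toy numbers are illustrative, not the paper's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_effect(df: pd.DataFrame):
    """Fit the same OLS to any dataset: 0-10 rating regressed on treatment."""
    return smf.ols("rating ~ treatment", data=df).fit()

# Toy numbers for illustration only; not results from the paper.
human = pd.DataFrame({"rating": [7, 3, 8, 2, 6, 4],
                      "treatment": [1, 0, 1, 0, 1, 0]})
synthetic = pd.DataFrame({"rating": [6, 4, 7, 3, 6, 5],
                          "treatment": [1, 0, 1, 0, 1, 0]})

for name, df in (("human", human), ("synthetic", synthetic)):
    fit = fit_effect(df)
    print(name, round(fit.params["treatment"], 2))  # treatment-effect estimate
```

Because `fit_effect` is shared, any gap between the two treatment coefficients is attributable to the data source rather than to analytic choices, which is exactly the comparison the paper draws.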