Survey Response Generation: Generating Closed-Ended Survey Responses In-Silico with Large Language Models
arXiv cs.CL / 4/27/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper examines how different Survey Response Generation (SRG) methods affect the quality of closed-ended survey responses generated in-silico by LLMs, which are trained primarily to produce open-ended text.
- Drawing on 32 million simulated responses, the study compares 8 SRG methods across 4 political-attitude survey tasks and 10 open-weight language models.
- The results show that SRG method choice leads to significant differences in alignment at both the individual level and the subpopulation level.
- Restricted generation methods deliver the best overall performance, while prompting models to produce reasoning output does not reliably improve alignment.
- The authors offer practical recommendations for selecting and applying SRG methods when using LLMs to simulate survey responses.
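To make the idea behind restricted generation concrete, here is a minimal, hypothetical sketch: rather than decoding free-form text, the model scores each closed-ended answer option and the highest-scoring option is returned. The `option_logprob` lookup stands in for summing token log-probabilities from an actual LLM; the prompt, options, and score values are invented for illustration and are not taken from the paper.

```python
def option_logprob(prompt, option, scores):
    # Stand-in for a model's log-probability of `option` given `prompt`.
    # With a real LLM this would sum the token log-probs of the option
    # string; here we look up precomputed (hypothetical) values.
    return scores[(prompt, option)]

def restricted_generate(prompt, options, scores):
    """Restricted generation: score each closed-ended answer option
    and return the most probable one, guaranteeing a valid response."""
    return max(options, key=lambda o: option_logprob(prompt, o, scores))

# Hypothetical Likert-style item with made-up scores.
options = ["Agree", "Neutral", "Disagree"]
scores = {
    ("Q1", "Agree"): -1.2,
    ("Q1", "Neutral"): -2.5,
    ("Q1", "Disagree"): -0.4,
}
print(restricted_generate("Q1", options, scores))  # -> Disagree
```

Because the output is chosen from the fixed option set, a restricted method can never produce an off-scale or unparseable answer, which is one plausible reason such methods align best in the study's comparison.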