Fake users generated by AI can't simulate humans — review of 182 research papers. Your thoughts?

Reddit r/artificial / 2026/3/31

Key Points

  • A growing trend is replacing costly real human recruitment with LLM-generated “synthetic participants/users” for surveys, app testing, and opinion collection.
  • A systematic review of 182 research papers finds that synthetic users generated by AI generally fail to represent human cognition and behavior accurately.
  • The findings caution against using these AI-generated participants as substitutes for real humans in human-subject-like evaluation settings.
  • The review highlights a broader reliability concern: even if synthetic participants are convenient, their outputs may not capture the complexity of human decision-making and interactions.

There’s a massive trend right now where tech companies, businesses, and even researchers are trying to replace real human feedback with Large Language Models (LLMs), so-called synthetic participants/users.

The idea sounds great: why spend money and time recruiting real people to take surveys, test apps, or give opinions when you can just prompt ChatGPT to pretend to be a thousand different customers?

A new systematic literature review analyzing 182 research papers just dropped, examining whether these "synthetic participants" can actually simulate humans.

The short answer?
They are bad at representing human cognition and behavior, and you probably should not use them this way.

submitted by /u/Complete_Answer
