There’s a massive trend right now: tech companies, businesses, and even researchers are trying to replace real human feedback with Large Language Models (LLMs), so-called synthetic participants or users.
The idea sounds great - why spend money and time recruiting real people to take surveys, test apps, or give opinions when you can just prompt ChatGPT to pretend to be a thousand different customers?
A new systematic literature review of 182 research papers just dropped, examining whether these "synthetic participants" can actually simulate humans.
The short answer?
They are bad at representing human cognition and behavior, and you probably should not use them this way.
