LLM Agents Predict Social Media Reactions but Do Not Outperform Text Classifiers: Benchmarking Simulation Accuracy Using 120K+ Personas of 1511 Humans

arXiv cs.CL / 4/23/2026


Key Points

  • The study benchmarks LLM-based autonomous agents for predicting specific individuals’ social media reactions (like, dislike, comment, share, no reaction) using 120,000+ persona-content combinations from 1,511 Serbian participants and 27 LLMs.
  • In Study 1, the agents reach 70.7% overall accuracy, with LLM choice causing up to a 13 percentage-point spread in performance.
  • In Study 2, binary like/dislike evaluation yields MCC = 0.29, indicating genuine predictive signal beyond chance; however, standard supervised text classifiers using TF-IDF features do better (MCC = 0.36).
  • The results suggest that any predictive improvement likely comes from semantic access rather than uniquely “agentic” reasoning, while the predictive validity of easily deployed zero-shot persona-prompted agents raises governance risks from manipulative AI swarms.
  • The paper argues such agents could still be useful in simulations of polarization dynamics and in policy design, though the findings are limited by single-country sampling and warrant multilingual and fine-tuned follow-ups.

Abstract

Social media platforms mediate how billions form opinions and engage with public discourse. As autonomous AI agents increasingly participate in these spaces, understanding their behavioral fidelity becomes critical for platform governance and democratic resilience. Previous work demonstrates that LLM-powered agents can replicate aggregate survey responses, yet few studies test whether agents can predict specific individuals' reactions to specific content. This study benchmarks LLM-based agents' accuracy in predicting human social media reactions (like, dislike, comment, share, no reaction) across 120,000+ unique agent-persona combinations derived from 1,511 Serbian participants and 27 large language models. In Study 1, agents achieved 70.7% overall accuracy, with LLM choice producing a 13 percentage-point performance spread. Study 2 employed binary forced-choice (like/dislike) evaluation with chance-corrected metrics. Agents achieved a Matthews Correlation Coefficient (MCC) of 0.29, indicating genuine predictive signal beyond chance. However, conventional supervised text classifiers using TF-IDF representations outperformed LLM agents (MCC of 0.36), suggesting predictive gains reflect semantic access rather than uniquely agentic reasoning. The genuine predictive validity of zero-shot persona-prompted agents warns of potential manipulation through the easy deployment of swarms of behaviorally distinct AI agents on social media, while simultaneously offering opportunities to use such agents in simulations for predicting polarization dynamics and informing AI policy. The advantage of zero-shot agents is that they require no task-specific training, making large-scale deployment easy across diverse contexts. Limitations include single-country sampling. Future research should explore multilingual testing and fine-tuning approaches.
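To make the chance-corrected metric in Study 2 concrete: MCC summarizes a binary confusion matrix in a single value between -1 and 1, where 0 corresponds to chance-level prediction. The paper does not describe its tooling, so the following pure-Python implementation is a sketch of the standard MCC formula, not the authors' code:

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (0/1).

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    returning 0.0 when any marginal is empty (undefined denominator).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy like(1)/dislike(0) predictions: 2 TP, 2 TN, 1 FP, 1 FN
print(mcc([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]))  # → 0.333...
```

Unlike raw accuracy (which the study reports in Study 1), MCC penalizes classifiers that merely track the majority class, which is why it was used to compare persona-prompted agents against the TF-IDF baseline on equal footing.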