Do LLM-Driven Agents Exhibit Engagement Mechanisms? Controlled Tests of Information Load, Descriptive Norms, and Popularity Cues

arXiv cs.AI, 2026-03-24


Key Points

  • The study tests whether LLM-driven agents in a Weibo-like simulation exhibit engagement mechanisms that match interpretable social-psychology hypotheses rather than just producing plausible text behavior.
  • Researchers manipulate information load and descriptive norms while letting popularity cues (likes and reshares) evolve endogenously to preserve realistic bandwagon feedback.
  • Results show simulated engagement changes systematically with information load and descriptive norms, supporting some theory-consistent behavioral sensitivity.
  • Sensitivity to popularity cues is found to be context-dependent, suggesting the agents follow mechanisms conditionally rather than through rigid prompt compliance.
  • The paper outlines methodological lessons for simulation-based communication research, including multi-condition “stress tests” and the need for explicit no-norm baselines because default prompts aren’t true blank controls.

Abstract

Large language models make agent-based simulation more behaviorally expressive, but they also sharpen a basic methodological tension: fluent, human-like output is not, by itself, evidence for theory. We evaluate what an LLM-driven simulation can credibly support using information engagement on social media as a test case. In a Weibo-like environment, we manipulate information load and descriptive norms, while allowing popularity cues (cumulative likes and Sina Weibo-style cumulative reshares) to evolve endogenously. We then ask whether simulated behavior changes in theoretically interpretable ways under these controlled variations, rather than merely producing plausible-looking traces. Engagement responds systematically to information load and descriptive norms, and sensitivity to popularity cues varies across contexts, indicating conditionality rather than rigid prompt compliance. We discuss methodological implications for simulation-based communication research, including multi-condition stress tests, explicit no-norm baselines because default prompts are not blank controls, and design choices that preserve endogenous feedback loops when studying bandwagon dynamics.
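The experimental design can be illustrated with a toy simulation. In the sketch below, information load and descriptive-norm strength are manipulated as exogenous condition parameters, while the popularity cue (cumulative likes) is never set directly: it accumulates from agent decisions and feeds back into later decisions, preserving the bandwagon loop the paper emphasizes. The function name, coefficients, and linear propensity model are all hypothetical illustrations, not the paper's actual agent architecture (which uses LLM-driven agents, not a closed-form probability).

```python
import math
import random

def run_condition(info_load, norm_strength, n_agents=100, n_steps=20, seed=0):
    """Toy analogue of the paper's design; all parameters are hypothetical.

    info_load: 0..1, higher = more items competing for attention
        (assumed here to suppress per-item engagement).
    norm_strength: 0..1, strength of the descriptive-norm cue
        (assumed here to raise engagement; 0.0 plays the role of an
        explicit no-norm baseline).
    Popularity cues (likes) are NOT manipulated: they accumulate
    endogenously and feed back into later agents' decisions.
    """
    rng = random.Random(seed)
    likes = 0
    for _ in range(n_steps):
        for _ in range(n_agents):
            # Hypothetical engagement propensity: load suppresses it,
            # norms and accumulated popularity (log-scaled) boost it.
            p = 0.05 + 0.3 * norm_strength - 0.2 * info_load \
                + 0.05 * math.log1p(likes)
            if rng.random() < min(max(p, 0.0), 1.0):
                likes += 1  # endogenous popularity cue grows here
    return likes

# Comparing corner conditions of the (load x norm) factorial:
low_load_norm = run_condition(info_load=0.1, norm_strength=1.0)
high_load_no_norm = run_condition(info_load=0.9, norm_strength=0.0)
```

In this crude model the high-load, no-norm corner clamps the propensity to zero, so engagement never starts; a real stress test would sweep many intermediate conditions and check whether engagement moves in the theoretically predicted direction in each.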
