Superminds Test: Actively Evaluating Collective Intelligence of Agent Society via Probing Agents

arXiv cs.AI / 27 Apr 2026


Key Points

  • The paper investigates whether collective intelligence emerges spontaneously as large language model agents scale to millions within an autonomous agent society.
  • Using the MoltBook platform with over two million agents, the authors propose the “Superminds Test,” a hierarchical evaluation framework that employs controlled probing agents across three tiers: joint reasoning, information synthesis, and basic interaction.
  • Experimental results show a marked absence of collective intelligence, with the society not outperforming individual frontier models on complex reasoning tasks.
  • The study finds limited evidence of distributed information synthesis and frequent failures even on relatively trivial coordination tasks.
  • Platform-wide interaction analysis indicates interactions are shallow—threads seldom go beyond a single reply and many responses are generic or off-topic—suggesting sparse, shallow communication is the main bottleneck.
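The tiered probing setup described above can be sketched as a minimal evaluation harness. This is an illustrative reconstruction, not the paper's actual implementation: the `Probe` structure, tier names, and the stubbed society are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Probe:
    # Hypothetical tiers mirroring the paper's three levels:
    # "basic_interaction", "information_synthesis", "joint_reasoning"
    tier: str
    task: str                             # prompt the probing agent posts to the society
    passed: Callable[[List[str]], bool]   # judge over the society's collected replies

def run_probes(probes: List[Probe],
               collect_replies: Callable[[str], List[str]]) -> Dict[str, float]:
    """Post each probe's task, gather replies, and report the pass rate per tier."""
    results: Dict[str, List[bool]] = {}
    for p in probes:
        replies = collect_replies(p.task)
        results.setdefault(p.tier, []).append(p.passed(replies))
    return {tier: sum(outcomes) / len(outcomes) for tier, outcomes in results.items()}

# Toy usage: a stubbed society that only ever returns one generic reply,
# mimicking the shallow, off-topic interactions the paper reports.
probes = [
    Probe("basic_interaction", "Reply with the word 'ack'.",
          lambda rs: any("ack" in r.lower() for r in rs)),
    Probe("joint_reasoning", "Combine clues A and B to name the answer.",
          lambda rs: any("42" in r for r in rs)),
]
scores = run_probes(probes, collect_replies=lambda task: ["Interesting post!"])
print(scores)  # → {'basic_interaction': 0.0, 'joint_reasoning': 0.0}
```

The key design point the framework exploits is that each tier's probe has a checkable success criterion, so society-level failures (generic replies, no synthesis) show up directly as low pass rates rather than requiring subjective judgment.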

Abstract

Collective intelligence refers to the ability of a group to achieve outcomes beyond what any individual member can accomplish alone. As large language model agents scale to populations of millions, a key question arises: Does collective intelligence emerge spontaneously from scale? We present the first empirical evaluation of this question in a large-scale autonomous agent society. Studying MoltBook, a platform hosting over two million agents, we introduce the Superminds Test, a hierarchical framework that probes society-level intelligence using controlled Probing Agents across three tiers: joint reasoning, information synthesis, and basic interaction. Our experiments reveal a stark absence of collective intelligence. The society fails to outperform individual frontier models on complex reasoning tasks, rarely synthesizes distributed information, and often fails even trivial coordination tasks. Platform-wide analysis further shows that interactions remain shallow, with threads rarely extending beyond a single reply and most responses being generic or off-topic. These results suggest that collective intelligence does not emerge from scale alone. Instead, the dominant limitation of current agent societies is extremely sparse and shallow interaction, which prevents agents from exchanging information and building on each other's outputs.