How memory can affect collective and cooperative behaviors in an LLM-Based Social Particle Swarm

arXiv cs.AI / 4/15/2026


Key Points

  • The paper studies how memory in LLM-based agents affects emergent collective and cooperative behavior in a multi-agent Social Particle Swarm (SPS) setup, in which agents play the Prisoner's Dilemma with their neighbors.
  • With Gemini-2.0-Flash agents, increasing memory length shifts the system from stable cooperation to cyclical cluster formation/collapse and eventually to scattered defection, with even minimal memory strongly suppressing cooperation.
  • With Gemma 3:4b, the trend reverses: longer memory promotes cooperation and leads to dense cooperative clusters.
  • The authors find that Big Five personality scores partially correlate with agent behaviors in ways consistent with human-participant experiments, supporting the model’s validity.
  • Sentiment analysis suggests the two LLMs interpret longer memory differently (Gemini grows more negative, Gemma less negative), providing a micro-level cognitive explanation for prior contradictory results on memory and cooperation.

Abstract

This study examines how model-specific characteristics of Large Language Model (LLM) agents, including internal alignment, shape the effect of memory on their collective and cooperative dynamics in a multi-agent system. To this end, we extend the Social Particle Swarm (SPS) model, in which agents move in a two-dimensional space and play the Prisoner's Dilemma with neighboring agents, by replacing its rule-based agents with LLM agents endowed with Big Five personality scores and varying memory lengths. Using Gemini-2.0-Flash, we find that memory length is a critical parameter governing collective behavior: even a minimal memory drastically suppressed cooperation, transitioning the system from stable cooperative clusters through cyclical formation and collapse of clusters to a state of scattered defection as memory length increased. Big Five personality traits correlated with agent behaviors in partial agreement with findings from experiments with human participants, supporting the validity of the model. Comparative experiments using Gemma 3:4b revealed the opposite trend: longer memory promoted cooperation, accompanied by the formation of dense cooperative clusters. Sentiment analysis of agents' reasoning texts showed that Gemini interprets memory increasingly negatively as its length grows, while Gemma interprets it less negatively, and that this difference persists in the early phase of experiments before the macro-level dynamics converge. These results suggest that model-specific characteristics of LLMs, potentially including alignment, play a fundamental role in determining emergent social behavior in Generative Agent-Based Modeling, and provide a micro-level cognitive account of the contradictions found in prior work on memory and cooperation.
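To make the SPS setup concrete, here is a minimal, rule-based sketch of its core loop: agents sit in a 2D space, play the Prisoner's Dilemma with spatial neighbors, and keep a bounded memory of opponents' moves whose length controls their decisions. Everything here is an illustrative assumption — the payoff values, neighbor radius, movement rule, and especially the `decide` heuristic (a simple majority rule standing in for the paper's LLM agents with Big Five personalities) are not taken from the paper.

```python
import random
from collections import deque
from dataclasses import dataclass, field

# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my payoff.
# The specific values are illustrative, not the paper's.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

@dataclass
class Agent:
    x: float
    y: float
    memory_len: int                      # the key control parameter in the paper
    memory: deque = field(default_factory=deque)
    score: float = 0.0

    def decide(self) -> str:
        # Stand-in for the LLM call: defect if recent memory is mostly
        # defections, otherwise cooperate. The real model prompts an LLM
        # with personality scores and this memory window instead.
        if not self.memory:
            return "C"
        defections = sum(1 for m in self.memory if m == "D")
        return "D" if defections > len(self.memory) / 2 else "C"

    def remember(self, opponent_move: str) -> None:
        # Keep only the last `memory_len` observed moves.
        self.memory.append(opponent_move)
        while len(self.memory) > self.memory_len:
            self.memory.popleft()

def neighbors(agents, i, radius=0.3):
    # Indices of agents within `radius` of agent i (illustrative radius).
    ax, ay = agents[i].x, agents[i].y
    return [j for j, a in enumerate(agents)
            if j != i and (a.x - ax) ** 2 + (a.y - ay) ** 2 <= radius ** 2]

def step(agents, rng):
    # One round: every agent plays the PD with its neighbors, records their
    # moves, then takes a small random step (a simplified stand-in for the
    # swarm movement rule in the SPS model).
    moves = [a.decide() for a in agents]
    for i, a in enumerate(agents):
        for j in neighbors(agents, i):
            a.score += PAYOFF[(moves[i], moves[j])]
            a.remember(moves[j])
        a.x = min(1.0, max(0.0, a.x + rng.uniform(-0.05, 0.05)))
        a.y = min(1.0, max(0.0, a.y + rng.uniform(-0.05, 0.05)))
    return moves

rng = random.Random(0)
agents = [Agent(rng.random(), rng.random(), memory_len=3) for _ in range(20)]
for _ in range(10):
    moves = step(agents, rng)
coop_rate = moves.count("C") / len(moves)
```

With this stand-in decision rule the swarm is far tamer than the paper's LLM agents; the point is only the structure: sweeping `memory_len` (and swapping `decide` for a call to Gemini or Gemma) is what produces the divergent cooperation regimes the paper reports.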
