GroupGuard: A Framework for Modeling and Defending Collusive Attacks in Multi-Agent Systems

arXiv cs.AI / 3/17/2026

📰 News · Models & Research

Key Points

  • The authors propose GroupGuard, a training-free defense framework designed to detect and isolate collusive attackers in multi-agent systems built from large language model-based agents.
  • They formalize group collusive attacks, in which multiple agents coordinate via sociological strategies to mislead the system, and present GroupGuard as a multi-layered defense combining graph-based monitoring, honeypot inducement, and structural pruning.
  • Across five datasets and four topologies, group collusive attacks raise attack success rates by up to 15% over individual attacks, while GroupGuard achieves detection accuracy of up to 88% and restores collaborative performance.
  • The framework provides a robust approach to securing collaborative AI, with potential implications for safety in multi-agent deployments.

Abstract

While large language model-based agents demonstrate great potential in collaborative tasks, their interactivity also introduces security vulnerabilities. In this paper, we propose and model group collusive attacks, a highly destructive threat in which multiple agents coordinate via sociological strategies to mislead the system. To address this challenge, we introduce GroupGuard, a training-free defense framework that employs a multi-layered defense strategy, including continuous graph-based monitoring, active honeypot inducement, and structural pruning, to identify and isolate collusive agents. Experimental results across five datasets and four topologies demonstrate that group collusive attacks increase the attack success rate by up to 15% compared to individual attacks. GroupGuard consistently achieves high detection accuracy (up to 88%) and effectively restores collaborative performance, providing a robust solution for securing multi-agent systems.
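
The abstract names three defense layers but gives no implementation details. The sketch below is a minimal, hypothetical illustration of how those layers might fit together, assuming a simple answer-agreement statistic for the monitoring layer, a decoy task for the honeypot layer, and topology filtering for the pruning layer. Every class name, method, threshold, and heuristic here is an assumption for illustration, not the authors' actual code.

```python
from collections import defaultdict
from itertools import combinations

class GroupGuardSketch:
    """Hypothetical sketch of the three defense layers described in the
    abstract; names, thresholds, and logic are illustrative assumptions."""

    def __init__(self, agreement_threshold=0.9, window=20):
        self.agreement_threshold = agreement_threshold  # pairwise agreement rate that raises suspicion
        self.window = window                            # number of recent rounds kept per agent
        self.history = defaultdict(list)                # agent_id -> recent answers

    def record(self, agent_id, answer):
        """Layer 1: continuous monitoring -- log each agent's output per round."""
        self.history[agent_id].append(answer)
        self.history[agent_id] = self.history[agent_id][-self.window:]

    def suspicious_groups(self):
        """Flag agent pairs whose answers agree at a rate above the threshold,
        a crude stand-in for graph-based collusion detection."""
        flagged = set()
        for a, b in combinations(list(self.history), 2):
            pairs = list(zip(self.history[a], self.history[b]))
            if not pairs:
                continue
            agreement = sum(x == y for x, y in pairs) / len(pairs)
            if agreement >= self.agreement_threshold:
                flagged.update({a, b})
        return flagged

    def honeypot_probe(self, agent_answers, decoy_answer):
        """Layer 2: active honeypot inducement -- pose a decoy task whose
        'attractive' answer is known to be wrong; agents echoing it are
        likely colluding."""
        return {aid for aid, ans in agent_answers.items() if ans == decoy_answer}

    def prune(self, agents, flagged):
        """Layer 3: structural pruning -- drop flagged agents from the
        collaboration topology."""
        return [a for a in agents if a not in flagged]
```

In an actual deployment the monitoring signal would presumably operate on the agents' communication graph rather than raw answer agreement, but the control flow, monitor, then probe, then prune, mirrors the pipeline the abstract describes.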