GroupGuard: A Framework for Modeling and Defending Collusive Attacks in Multi-Agent Systems
arXiv cs.AI / 3/17/2026
📰 News · Models & Research
Key Points
- The authors propose GroupGuard, a training-free defense framework that detects and isolates collusive attackers in multi-agent AI systems.
- They formalize group collusive attacks, in which multiple agents coordinate to mislead the system, and present GroupGuard as a multi-layered defense combining graph-based monitoring, honeypot inducement, and structural pruning (see the sketch after this list).
- Across five datasets and four topologies, group collusive attacks raised attack success rates by up to 15% over individual attacks, while GroupGuard achieved detection accuracy of up to 88% and restored collaboration performance.
- The framework provides a robust approach to securing collaborative AI, with potential implications for safety in multi-agent deployments.
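To make the three defense layers concrete, here is a minimal Python sketch of how such a pipeline could be wired together. This is an illustration built only from the layer names in the summary, not the paper's actual design: the agreement threshold, the `endorses` interface, the dict-of-lists topology, and every function name below are assumptions.

```python
# Hypothetical sketch of a GroupGuard-style, training-free defense pipeline.
# All names and interfaces are illustrative assumptions; the digest does not
# specify the paper's actual mechanisms.

from collections import defaultdict
from itertools import combinations

AGREE_THRESHOLD = 0.9               # assumed: agreement above this is suspicious
HONEYPOT_FALSE_CLAIM = "2 + 2 = 5"  # assumed decoy with a known-false answer


def pairwise_agreement(votes):
    """votes: {agent_id: answer}. Returns {(a, b): 1.0 or 0.0} per pair."""
    return {(a, b): 1.0 if votes[a] == votes[b] else 0.0
            for a, b in combinations(sorted(votes), 2)}


def monitor_graph(history):
    """Layer 1 (graph-based monitoring): accumulate agreement rates per
    agent pair over a history of voting rounds, i.e. build a weighted
    interaction graph."""
    totals, counts = defaultdict(float), defaultdict(int)
    for votes in history:
        for pair, score in pairwise_agreement(votes).items():
            totals[pair] += score
            counts[pair] += 1
    return {pair: totals[pair] / counts[pair] for pair in totals}


def suspicious_cluster(edge_weights):
    """Flag every agent in a pair whose agreement rate exceeds the threshold."""
    flagged = set()
    for (a, b), weight in edge_weights.items():
        if weight >= AGREE_THRESHOLD:
            flagged.update((a, b))
    return flagged


def honeypot_probe(agents):
    """Layer 2 (honeypot inducement): ask each suspect a decoy question with
    a known-false answer; endorsing it confirms collusion. `agent.endorses`
    is an assumed interface, not the paper's API."""
    return {agent.id for agent in agents if agent.endorses(HONEYPOT_FALSE_CLAIM)}


def prune_topology(topology, confirmed):
    """Layer 3 (structural pruning): drop confirmed colluders and their edges
    from the communication topology so honest agents keep collaborating."""
    return {u: [v for v in nbrs if v not in confirmed]
            for u, nbrs in topology.items() if u not in confirmed}


if __name__ == "__main__":
    # Toy run: agents "a" and "b" always agree; "c" is independent.
    history = [{"a": 1, "b": 1, "c": 2}, {"a": 3, "b": 3, "c": 3}]
    weights = monitor_graph(history)
    print(suspicious_cluster(weights))  # {'a', 'b'} under the 0.9 threshold
```

In a deployment shaped like this, the monitor would run continuously over recent rounds, the honeypot would be sent only to the flagged set, and pruning would rewire the topology; how the actual paper stages these layers is not stated in the digest.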
Related Articles
- Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI (TechCrunch)
- [R] Weekly digest: arXiv AI security papers translated for practitioners -- Cascade (cross-stack CVE+Rowhammer attacks on compound AI), LAMLAD (dual-LLM adversarial ML, 97% evasion), OpenClaw (4 vuln classes in agent frameworks) (Reddit r/MachineLearning)
- My Experience with Qwen 3.5 35B (Reddit r/LocalLLaMA)
- Cursor’s new coding model Composer 2 is here: It beats Claude Opus 4.6 but still trails GPT-5.4 (VentureBeat)
- Qwen 3.5 122B completely falls apart at ~100K context (Reddit r/LocalLLaMA)