Detecting Multi-Agent Collusion Through Multi-Agent Interpretability

arXiv cs.AI / 4/2/2026


Key Points

  • Introduces NARCBench, a benchmark for detecting multi-agent collusion (deceptive coordination) under environment distribution shift, filling a gap beyond single-agent deception probes.
  • Proposes five multi-agent interpretability/probing techniques that aggregate per-agent deception scores to classify group-level collusion scenarios.
  • Reports strong in-distribution performance (1.00 AUROC) but reduced zero-shot transfer performance (0.60–0.86 AUROC) across structurally different multi-agent settings and a steganographic blackjack task.
  • Finds that no single probing method works best for all collusion types, implying different collusion strategies produce distinct activation-space signatures.
  • Provides preliminary evidence that collusion-related signals may be localized at the token level, with colluding agents showing activation spikes when processing encoded parts of partners’ messages, and releases code/data for evaluation.
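The aggregation idea in the second bullet can be sketched as follows. This is a minimal illustration, not the paper's implementation: the probe weights, the sigmoid scoring, and the two aggregation rules (`mean`, `max`) are assumptions standing in for the five techniques NARCBench actually proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_score(activations, w, b=0.0):
    """Per-token deception score from a linear probe on activations.
    `w` and `b` are hypothetical probe parameters (trained separately)."""
    logits = activations @ w + b
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> scores in (0, 1)

def aggregate_group(per_agent_scores, how="max"):
    """Collapse per-agent deception scores into one group-level collusion
    score. The paper proposes five schemes; these two are illustrative."""
    agent_means = [float(np.mean(s)) for s in per_agent_scores]
    if how == "mean":
        return float(np.mean(agent_means))  # average suspicion across agents
    return float(np.max(agent_means))       # most suspicious agent dominates

# Toy example: 3 agents, token-level activations of width d=16.
d = 16
w = rng.normal(size=d)
scores = [probe_score(rng.normal(size=(n, d)), w) for n in (12, 8, 20)]
group_score = aggregate_group(scores)  # scalar in (0, 1)
```

A scenario is then classified as collusive when `group_score` exceeds a threshold chosen on held-out in-distribution data.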

Abstract

As LLM agents are increasingly deployed in multi-agent systems, they introduce risks of covert coordination that may evade standard forms of human oversight. While linear probes on model activations have shown promise for detecting deception in single-agent settings, collusion is inherently a multi-agent phenomenon, and the use of internal representations for detecting collusion between agents remains unexplored. We introduce NARCBench, a benchmark for evaluating collusion detection under environment distribution shift, and propose five probing techniques that aggregate per-agent deception scores to classify scenarios at the group level. Our probes achieve 1.00 AUROC in-distribution and 0.60–0.86 AUROC when transferred zero-shot to structurally different multi-agent scenarios and a steganographic blackjack card-counting task. We find that no single probing technique dominates across all collusion types, suggesting that different forms of collusion manifest differently in activation space. We also find preliminary evidence that this signal is localised at the token level, with the colluding agent's activations spiking specifically when processing the encoded parts of their partner's message. This work takes a step toward multi-agent interpretability: extending white-box inspection from single models to multi-agent contexts, where detection requires aggregating signals across agents. These results suggest that model internals provide a complementary signal to text-level monitoring for detecting multi-agent collusion, particularly for organisations with access to model activations. Code and data are available at https://github.com/aaronrose227/narcbench.
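The reported AUROC figures (1.00 in-distribution, 0.60–0.86 zero-shot) rank scenario-level scores against collusion labels. A minimal sketch of that metric, using the rank-sum (Mann-Whitney U) identity and assuming no tied scores; the labels and scores below are toy values, not the paper's data:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum identity:
    AUROC = (R_pos - n_pos*(n_pos+1)/2) / (n_pos * n_neg),
    where R_pos is the rank sum of positive (colluding) scenarios.
    Assumes no tied scores, for simplicity."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    r_pos = ranks[labels].sum()
    return float((r_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Toy group-level collusion scores for 2 colluding and 2 benign scenarios.
perfect = auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # -> 1.0
partial = auroc([1, 1, 0, 0], [0.9, 0.2, 0.3, 0.1])  # -> 0.75
```

An AUROC of 1.00 means every colluding scenario outscores every benign one; the 0.60–0.86 transfer range means that ordering partially survives distribution shift.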