Latent Agents: A Post-Training Procedure for Internalized Multi-Agent Debate

arXiv cs.AI · April 29, 2026


Key Points

  • The paper introduces “Latent Agents,” a post-training method that distills multi-agent debate into a single LLM to reduce the heavy compute cost of generating long debate transcripts.
  • A two-stage fine-tuning pipeline combines debate-structure learning with internalization via dynamic reward scheduling and length clipping, matching or exceeding explicit multi-agent debate while using up to 93% fewer tokens (a simplified reward sketch follows this list).
  • Mechanistic analysis via activation steering suggests internalization produces agent-specific subspaces in the model’s activation space, with interpretable directions corresponding to different agent perspectives.
  • The authors show a control-oriented application: malicious agents instilled through internalized debate can be suppressed via negative steering, making harmful behaviors easier to localize and control with less general-performance degradation than steering applied to base models (a steering sketch follows the abstract).
  • The work includes released code, enabling reproducibility and further experimentation with distilled internalized reasoning behaviors.
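
As a rough illustration of the internalization objective summarized above, the sketch below blends a task-correctness reward with a debate-structure score whose weight is annealed over training, and clips over-length outputs to zero reward. The function names, linear schedule, and constants are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an internalization reward with dynamic scheduling and
# length clipping. The paper's actual reward design may differ; everything here
# (names, schedule shape, token limit) is an assumption for illustration.

def scheduled_weight(step: int, total_steps: int, start: float = 1.0, end: float = 0.1) -> float:
    """Linearly anneal the debate-structure weight as training progresses."""
    frac = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return start + (end - start) * frac


def internalization_reward(
    correct: bool,
    structure_score: float,   # how closely the output mirrors the debate structure, in [0, 1]
    num_tokens: int,
    step: int,
    total_steps: int,
    max_tokens: int = 512,
) -> float:
    """Combine correctness and debate-structure rewards; clip over-length outputs."""
    if num_tokens > max_tokens:  # length clipping: over-long outputs receive no reward
        return 0.0
    w = scheduled_weight(step, total_steps)
    task_reward = 1.0 if correct else 0.0
    return (1.0 - w) * task_reward + w * structure_score


# Early in training the structure term dominates; later the task term does.
print(internalization_reward(True, 0.8, 300, step=0, total_steps=1000))     # 0.8
print(internalization_reward(True, 0.8, 300, step=1000, total_steps=1000))  # 0.98
```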

Abstract

Multi-agent debate has been shown to improve reasoning in large language models (LLMs). However, it is compute-intensive, requiring generation of long transcripts before answering questions. To address this inefficiency, we develop a framework that distills multi-agent debate into a single LLM through a two-stage fine-tuning pipeline combining debate structure learning with internalization via dynamic reward scheduling and length clipping. Across multiple models and benchmarks, our internalized models match or exceed explicit multi-agent debate performance using up to 93% fewer tokens. We then investigate the mechanistic basis of this capability through activation steering, finding that internalization creates agent-specific subspaces: interpretable directions in activation space corresponding to different agent perspectives. We further demonstrate a practical application: by instilling malicious agents into the LLM through internalized debate, then applying negative steering to suppress them, we show that distillation makes harmful behaviors easier to localize and control with smaller reductions in general performance compared to steering base models. Our findings offer a new perspective for understanding multi-agent capabilities in distilled models and provide practical guidelines for controlling internalized reasoning behaviors. Code is available at https://github.com/johnsk95/latent_agents.
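
For readers unfamiliar with negative activation steering, the following is a minimal sketch, assuming a PyTorch transformer whose hidden states can be intercepted with forward hooks. The "agent direction" would be estimated separately (for example, as a mean difference of hidden states between prompts that do and do not invoke the malicious agent persona); here it is a placeholder tensor, and the layer choice and scaling factor are assumptions rather than the paper's settings.

```python
# Minimal negative-steering sketch: subtract a fixed direction from one layer's
# hidden states during the forward pass. Direction estimation, layer index, and
# the scaling factor alpha are illustrative assumptions.

import torch


def add_negative_steering(layer: torch.nn.Module, direction: torch.Tensor, alpha: float = 4.0):
    """Register a forward hook that subtracts a unit-norm direction from the layer's output."""
    unit = direction / direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden - alpha * unit.to(device=hidden.device, dtype=hidden.dtype)
        # Returning a value from a forward hook replaces the layer's output.
        if isinstance(output, tuple):
            return (steered, *output[1:])
        return steered

    return layer.register_forward_hook(hook)


# Usage sketch (model loading omitted; layer index and direction are hypothetical):
#   handle = add_negative_steering(model.model.layers[20], malicious_agent_direction)
#   ... run generation with the hook active ...
#   handle.remove()  # restore unsteered behavior
```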