Agent Capsules: Quality-Gated Granularity Control for Multi-Agent LLM Pipelines

arXiv cs.AI / 5/4/2026


Key Points

  • The paper proposes Agent Capsules, an adaptive execution runtime for multi-agent LLM pipelines that formulates agent-group dispatch as an optimization problem with empirical quality constraints.
  • The runtime monitors per-group coordination overhead and scores output quality, routing groups to compound execution strategies and reverting to finer, per-agent dispatch when quality would drop (see the sketch after this list).
  • A controlled negative result shows that simply injecting more context into merged calls worsens compression rather than relieving it, so the system recovers quality by escalating through an execution ladder toward per-agent dispatch rather than by rewriting merged prompts.
  • In experiments on a 14-agent competitive intelligence pipeline (LangGraph), Agent Capsules uses 51% fewer fine-mode input tokens and 42% fewer compound-mode input tokens while slightly improving judged quality (+0.020 and +0.017).
  • On a 5-agent due diligence pipeline (DSPy), it uses 19% fewer tokens than uncompiled DSPy at quality parity and 68% fewer tokens than MIPROv2 at higher judged quality, without training data or per-pipeline engineering.
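
The paper does not publish code for this mechanism, so the following is a minimal Python sketch of what quality-gated mode selection could look like, assuming a rolling-mean quality score per agent group and a fixed quality floor. The class and function names (`QualityGate`, `choose_mode`), the window size, and the threshold values are illustrative assumptions, not the authors' API.

```python
from collections import deque


class QualityGate:
    """Rolling-mean quality gate for one (agent group, execution mode) pair.

    Hypothetical sketch: the paper gates every mode switch on rolling-mean
    output quality; the window size and floor value here are assumptions.
    """

    def __init__(self, quality_floor: float = 0.80, window: int = 10):
        self.quality_floor = quality_floor   # assumed quality floor
        self.scores = deque(maxlen=window)   # recent LLM-judged scores

    def record(self, score: float) -> None:
        """Store an LLM-judged quality score in [0, 1] for this group/mode."""
        self.scores.append(score)

    def passes(self) -> bool:
        """True while the rolling mean clears the floor (or no scores yet)."""
        if not self.scores:
            return True
        return sum(self.scores) / len(self.scores) >= self.quality_floor


def choose_mode(gate: QualityGate, composition_score: float, threshold: float = 0.5) -> str:
    """Route a group to compound execution only when it scores as composable
    and its rolling-mean quality has not fallen below the floor; otherwise
    revert to per-agent ("fine") dispatch."""
    if composition_score >= threshold and gate.passes():
        return "compound"
    return "fine"
```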

Abstract

A multi-agent pipeline with N agents typically issues N LLM calls per run. Merging agents into fewer calls (compound execution) promises token savings, but naively merged calls silently degrade quality through tool loss and prompt compression. We present Agent Capsules, an adaptive execution runtime that treats multi-agent pipeline execution as an optimization problem with empirical quality constraints. The runtime instruments coordination overhead per group, scores composition opportunity, selects among three compound execution strategies, and gates every mode switch on rolling-mean output quality. A controlled negative result confirms that injecting more context into a merged call worsens compression rather than relieving it, so the framework's escalation ladder (standard, then two-phase, then sequential) recovers quality by moving toward per-agent dispatch rather than by rewriting merged prompts. On LLM-judged quality, the controller matches a hand-tuned oracle on every measured (model, group, mode) cell: routing compound whenever the oracle would, and reverting to fine whenever quality would fail the floor, without per-model configuration. Against a hand-crafted LangGraph implementation of a 14-agent competitive intelligence pipeline, Agent Capsules uses 51% fewer fine-mode input tokens and 42% fewer compound-mode input tokens, at +0.020 and +0.017 quality respectively. Against a DSPy implementation of a 5-agent due diligence pipeline, the framework uses 19% fewer tokens than uncompiled DSPy at quality parity, and 68% fewer tokens than MIPROv2 at +0.052 quality. Even before compound mode fires, the runtime delivers efficiency through automatic policy resolution, cache-aligned prompts, and topology-aware context injection, matching both hand-tuned and compile-time baselines without training data or per-pipeline engineering.
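
For intuition, here is a minimal sketch of the escalation ladder the abstract describes: when a merged call's judged quality falls below the floor, the runtime steps toward per-agent dispatch instead of injecting more context into the merged prompt. The strategy names (standard, two-phase, sequential) come from the abstract; the function signature, the `run_group` and `judge` callables, and the floor value are assumed for illustration.

```python
from typing import Callable

# Rungs of the escalation ladder, coarse to fine. "fine" stands for
# per-agent dispatch, the last resort when merged execution fails quality.
LADDER = ["standard", "two_phase", "sequential", "fine"]


def run_group_with_escalation(
    run_group: Callable[[str], str],   # executes the agent group under a given mode (assumed interface)
    judge: Callable[[str], float],     # LLM-judged quality score in [0, 1] (assumed interface)
    quality_floor: float = 0.80,       # assumed floor value
) -> tuple[str, str]:
    """Try each strategy in order and keep the first output whose judged
    quality clears the floor; recover quality by moving toward per-agent
    dispatch rather than by rewriting the merged prompt."""
    for mode in LADDER:
        output = run_group(mode)
        if judge(output) >= quality_floor or mode == "fine":
            return mode, output
    raise RuntimeError("escalation ladder exhausted")  # unreachable: "fine" always returns
```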