CascadeDebate: Multi-Agent Deliberation for Cost-Aware LLM Cascades

arXiv cs.CL · April 15, 2026


Key Points

  • CascadeDebate proposes a cost-aware LLM cascading framework that reduces premature escalations caused by ambiguous queries and under-confidence at each tier’s decision boundary.
  • It inserts multi-agent deliberation only when a confidence-based router detects uncertainty, so lightweight agent ensembles resolve ambiguities before higher-cost model upgrades or expert handoffs.
  • The architecture dynamically varies test-time compute by alternating between single-model inference and selective multi-agent deliberation across model scales.
  • Experiments on five benchmarks across science, medicine, and general knowledge show up to 26.75% improvement over strong single-model cascades and standalone multi-agent systems.
  • An online threshold optimizer is highlighted as crucial for robust performance, delivering 20.98% to 52.33% relative improvement over fixed escalation policies and adapting better to real-world query distributions.
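The routing logic described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tier structure, threshold names, and majority-vote deliberation are all assumptions made for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical model interface: a call returns (answer, confidence in [0, 1]).
ModelFn = Callable[[str], Tuple[str, float]]

@dataclass
class Tier:
    name: str
    model: ModelFn
    agents: List[ModelFn]      # lightweight ensemble used only for deliberation
    accept_thresh: float       # answer directly at or above this confidence
    deliberate_thresh: float   # deliberate in [deliberate_thresh, accept_thresh)

def deliberate(agents: List[ModelFn], query: str) -> Tuple[str, float]:
    """Consensus-driven resolution, here simplified to a majority vote."""
    answers = [agent(query)[0] for agent in agents]
    best = max(set(answers), key=answers.count)
    return best, answers.count(best) / len(answers)

def cascade(query: str, tiers: List[Tier], expert: ModelFn) -> Tuple[str, str]:
    """Walk tiers from cheapest to costliest; humans are the final fallback."""
    for tier in tiers:
        answer, conf = tier.model(query)
        if conf >= tier.accept_thresh:
            return answer, tier.name          # confident: answer directly
        if conf >= tier.deliberate_thresh:
            # Uncertain but plausibly resolvable: try deliberation before
            # escalating to a costlier tier.
            answer, agreement = deliberate(tier.agents, query)
            if agreement >= tier.accept_thresh:
                return answer, f"{tier.name}+deliberation"
        # Low confidence and no consensus: escalate to the next tier.
    return expert(query)[0], "expert"
```

With stub models, a query the small model is unsure about gets resolved by agent consensus instead of escalating, which is the cost-saving behavior the framework targets.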

Abstract

Cascaded LLM systems coordinate models of varying sizes with human experts to balance accuracy, cost, and abstention under uncertainty. However, single-model tiers at each stage often struggle with ambiguous queries, triggering premature escalations to costlier models or experts due to under-confidence and inefficient compute scaling. CascadeDebate addresses this gap by inserting multi-agent deliberation directly at each tier's escalation boundary. Confidence-based routers activate lightweight agent ensembles only for uncertain cases, enabling consensus-driven resolution of ambiguities internally without invoking higher-cost upgrades. Our unified architecture alternates single-model inference with selective multi-agent deliberation across model scales, culminating in human experts as the final fallback. This design scales test-time compute dynamically according to query difficulty. Across five benchmarks spanning science, medicine, and general knowledge, CascadeDebate outperforms strong single-model cascades and standalone multi-agent systems by up to 26.75 percent. An online threshold optimizer proves essential, improving accuracy by 20.98 to 52.33 percent relative to fixed policies and enabling elastic adaptation to real-world distributions.
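The abstract credits much of the gain to the online threshold optimizer. The paper's method is not specified here, so the sketch below substitutes a simple stand-in: a feedback rule that nudges the escalation threshold so the observed escalation rate tracks a cost budget. The class name, target-rate formulation, and learning rate are all assumptions.

```python
class OnlineThreshold:
    """Hypothetical online threshold adaptation (not the paper's algorithm):
    adjust the confidence bar for escalating so the realized escalation rate
    tracks a target compute budget on the live query stream."""

    def __init__(self, init: float = 0.5, target_rate: float = 0.2,
                 lr: float = 0.1):
        self.thresh = init              # escalate when confidence < thresh
        self.target_rate = target_rate  # desired fraction of escalations
        self.lr = lr                    # step size of the feedback update

    def update(self, escalated: bool) -> float:
        """Observe one routing decision and adapt the threshold.

        Escalating more often than the budget allows pushes the threshold
        down (accept more answers locally); escalating less often pushes
        it up (demand more confidence before accepting)."""
        err = (1.0 if escalated else 0.0) - self.target_rate
        self.thresh = min(1.0, max(0.0, self.thresh - self.lr * err))
        return self.thresh
```

Because the update runs per query, the threshold drifts with the live distribution rather than staying pinned to whatever mix the fixed policy was tuned on, which is the "elastic adaptation" the abstract describes.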