Scalable Prompt Routing via Fine-Grained Latent Task Discovery

arXiv cs.AI · March 23, 2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper proposes a two-stage prompt routing architecture that selects the best LLM from a pool of frontier models for each query, balancing performance and cost.
  • Stage 1 uses graph-based clustering to discover latent task types and trains a classifier to assign prompts to these tasks, enabling fine-grained task understanding.
  • Stage 2 uses a mixture-of-experts with task-specific prediction heads to produce specialized quality estimates; at inference, predictions from both stages are aggregated to balance task-level stability with prompt-specific adaptability.
  • Evaluation on 10 benchmarks with 11 frontier models shows the method consistently outperforms existing baselines and the strongest individual model while incurring less than half its cost.
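To make Stage 1 concrete, here is a minimal, hedged sketch of graph-based task discovery: connect prompts whose embedding vectors are similar, then treat connected components of that similarity graph as latent task clusters. The paper's actual clustering algorithm, embedding model, and thresholds are not specified here; the function names, toy embeddings, and the 0.8 similarity threshold are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): discover latent task
# types by building a similarity graph over prompt embeddings and taking its
# connected components as task clusters.
import math
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def discover_tasks(embeddings, threshold=0.8):
    """Link prompts whose embeddings exceed a similarity threshold, then
    return connected components (via union-find) as latent task clusters."""
    n = len(embeddings)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if cosine(embeddings[i], embeddings[j]) >= threshold:
            parent[find(i)] = find(j)  # union the two components

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy 2-D embeddings: prompts 0-1 resemble one task, prompts 2-3 another.
embs = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
tasks = discover_tasks(embs, threshold=0.8)
# → two clusters: [[0, 1], [2, 3]]
```

In the paper, a classifier is then trained to assign new prompts to these discovered clusters, so routing decisions can be conditioned on fine-grained task type rather than a hand-built taxonomy.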

Abstract

Prompt routing dynamically selects the most appropriate large language model from a pool of candidates for each query, optimizing performance while managing costs. As model pools scale to include dozens of frontier models with narrow performance gaps, existing approaches face significant challenges: manually defined task taxonomies cannot capture fine-grained capability distinctions, while monolithic routers struggle to differentiate subtle differences across diverse tasks. We propose a two-stage routing architecture that addresses these limitations through automated fine-grained task discovery and task-aware quality estimation. Our first stage employs graph-based clustering to discover latent task types and trains a classifier to assign prompts to discovered tasks. The second stage uses a mixture-of-experts architecture with task-specific prediction heads for specialized quality estimates. At inference, we aggregate predictions from both stages to balance task-level stability with prompt-specific adaptability. Evaluated on 10 benchmarks with 11 frontier models, our method consistently outperforms existing baselines and surpasses the strongest individual model while incurring less than half its cost.
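The inference-time aggregation described above can be sketched as a weighted blend of a stable task-level quality prior (Stage 1) with a prompt-specific expert estimate (Stage 2), penalized by cost. The blend weight, cost penalty, model names, scores, and prices below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of two-stage aggregation at inference time: blend a
# task-level quality prior with a prompt-specific expert estimate, subtract
# a cost penalty, and route to the highest-scoring model.

def route(task_quality, prompt_quality, cost, alpha=0.7, cost_weight=0.1):
    """task_quality / prompt_quality: model -> estimated quality in [0, 1].
    cost: model -> relative price. alpha weights the stable task-level prior
    against the adaptive prompt-level estimate. Returns the chosen model."""
    scores = {}
    for model in task_quality:
        blended = alpha * task_quality[model] + (1 - alpha) * prompt_quality[model]
        scores[model] = blended - cost_weight * cost[model]
    return max(scores, key=scores.get)

# Hypothetical pool of three candidate models.
task_q   = {"model_a": 0.80, "model_b": 0.78, "model_c": 0.60}
prompt_q = {"model_a": 0.70, "model_b": 0.90, "model_c": 0.65}
costs    = {"model_a": 3.0,  "model_b": 1.0,  "model_c": 0.5}

best = route(task_q, prompt_q, costs)
# → "model_b": a slightly weaker task prior is offset by a strong
#   prompt-specific estimate and a much lower cost.
```

This kind of cost-aware selection is how a router can beat the single strongest model on aggregate quality while spending less: cheap models handle the prompts they are predicted to answer well, and expensive models are reserved for the rest.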