ConsRoute: Consistency-Aware Adaptive Query Routing for Cloud-Edge-Device Large Language Models

arXiv cs.AI / 3/24/2026


Key Points

  • The paper introduces ConsRoute, a consistency-aware adaptive query routing framework for cloud-edge-device LLM inference to reduce latency and inference cost without significantly degrading response quality.
  • ConsRoute improves routing decisions by using a reranker to measure fine-grained semantic consistency between responses from different model tiers, providing soft supervision beyond coarse output-quality gap estimates.
  • To keep edge-device overhead low, it reuses hidden states from the LLM’s prefilling stage as compact query representations, avoiding extra encoders or additional inference passes.
  • The method clusters these representations and uses Bayesian optimization to learn cluster-specific routing thresholds that balance quality, latency, and cost across heterogeneous query distributions.
  • Experiments report near-cloud response quality (≥95% of cloud-only performance) while cutting end-to-end latency and inference cost by about 40%, outperforming prior routing baselines.
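The routing pipeline sketched in the key points — reuse a prefill hidden state as the query representation, assign it to a cluster, and compare a predicted consistency score against that cluster's learned threshold — might look roughly like this. All names here are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def route_query(hidden_state, centroids, thresholds, consistency_score):
    """Decide whether a query stays on-device or escalates to the cloud.

    hidden_state      : compact query representation reused from the
                        device LLM's prefilling stage (no extra encoder).
    centroids         : cluster centers learned offline over such
                        representations.
    thresholds        : per-cluster routing thresholds, e.g. tuned via
                        Bayesian optimization to balance quality,
                        latency, and cost.
    consistency_score : predicted semantic consistency between the
                        device-tier and cloud-tier responses (higher
                        means the small model's answer is good enough).
    """
    # Assign the query to its nearest cluster.
    dists = np.linalg.norm(centroids - hidden_state, axis=1)
    cluster = int(np.argmin(dists))
    # Keep the query on-device only if the predicted consistency clears
    # that cluster's threshold; otherwise escalate to the cloud tier.
    if consistency_score >= thresholds[cluster]:
        return "device", cluster
    return "cloud", cluster

# Toy example: two clusters with different tolerance for quality loss.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
thresholds = [0.6, 0.9]  # cluster 1 (e.g. harder queries) demands more

print(route_query(np.array([0.1, -0.2]), centroids, thresholds, 0.7))
print(route_query(np.array([4.8, 5.1]), centroids, thresholds, 0.7))
```

The same consistency score routes differently depending on which cluster the query falls into, which is what lets per-cluster thresholds adapt to heterogeneous query distributions.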

Abstract

Large language models (LLMs) deliver impressive capabilities but incur substantial inference latency and cost, which hinders their deployment in latency-sensitive and resource-constrained scenarios. Cloud-edge-device collaborative inference has emerged as a promising paradigm by dynamically routing queries to models of different capacities across tiers. In this paper, we propose ConsRoute, a lightweight, semantic-aware, and adaptive routing framework that significantly improves inference efficiency while minimizing impact on response quality. Unlike prior routing methods that rely on predicting coarse-grained output quality gaps, ConsRoute leverages a reranker to directly assess the semantic consistency between responses generated by models at different tiers, yielding fine-grained soft supervision signals for routing. To minimize device-side overhead, ConsRoute reuses hidden states from the LLM prefilling stage as compact query representations, avoiding additional encoders or inference passes. Furthermore, these representations are clustered, and Bayesian optimization is employed to learn cluster-specific routing thresholds that dynamically balance quality, latency, and cost under heterogeneous query distributions. Extensive experiments demonstrate that ConsRoute achieves near-cloud performance (≥95%) while reducing end-to-end latency and inference cost by nearly 40%, consistently outperforming existing routing baselines in both response quality and system efficiency.
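The abstract's threshold-learning step optimizes a quality/cost trade-off per cluster. A coarse grid search over one cluster's threshold exposes the same objective as a simple stand-in for the Bayesian optimization the paper actually uses; the data, the `lam` weight, and the acceptability labels below are all hypothetical:

```python
import numpy as np

def tune_threshold(scores, device_ok, lam=0.5, grid=None):
    """Pick one cluster's routing threshold on validation queries.

    scores    : predicted consistency scores for validation queries.
    device_ok : 1 if the device-tier answer was acceptable, else 0
                (e.g. derived from reranker consistency labels).
    lam       : weight trading response quality against cloud cost.

    Grid search here stands in for the paper's Bayesian optimization;
    both maximize the same kind of quality-minus-cost objective.
    """
    scores = np.asarray(scores, dtype=float)
    device_ok = np.asarray(device_ok, dtype=float)
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    best_t, best_obj = 0.0, -np.inf
    for t in grid:
        on_device = scores >= t            # queries kept on-device
        # Quality: cloud answers count as 1; device answers count only
        # when they were judged consistent enough with the cloud tier.
        quality = np.where(on_device, device_ok, 1.0).mean()
        cost = (~on_device).mean()         # fraction escalated to cloud
        obj = quality - lam * cost
        if obj > best_obj:
            best_t, best_obj = t, obj
    return best_t

# Toy validation set: high scores mostly mean acceptable device answers.
scores    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
device_ok = [1,   1,   1,   0,   0,   1  ]
t = tune_threshold(scores, device_ok, lam=0.5)
```

Raising `lam` makes cloud escalation more expensive and pushes the learned threshold down (keeping more queries on-device), which is the knob that trades quality against latency and cost per cluster.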