Continuous Semantic Caching for Low-Cost LLM Serving

arXiv cs.LG · April 23, 2026


Key Points

  • The paper proposes a rigorous theoretical framework for semantic caching of LLM responses when user queries lie in an infinite, continuous embedding space rather than a finite set of discrete queries.
  • It introduces a dynamic ε-net discretization approach combined with Kernel Ridge Regression to estimate uncertainty and generalize partial feedback on query serving costs across similar semantic neighborhoods.
  • The authors develop both offline learning and online adaptive caching algorithms, explicitly accounting for switching costs when cached responses change.
  • They prove the online method achieves a sublinear regret bound versus an optimal continuous “oracle,” and show empirically that it closely matches the continuous optimum while lowering computational and switching overhead.
  • Overall, the work targets reducing LLM inference cost and latency by making semantic caching more practical and theoretically grounded under uncertainty in continuous query spaces.
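The two core ingredients named above, an ε-net discretization of the embedding space and Kernel Ridge Regression for cost estimation with uncertainty, can be illustrated with a minimal sketch. This is not the paper's algorithm, just a standard greedy ε-net construction and a textbook RBF-kernel KRR with a posterior-variance-style uncertainty term; the function names and parameters are illustrative.

```python
import numpy as np

def eps_net(points, eps):
    """Greedily build an eps-net: every point ends up within eps of a center."""
    centers = []
    for p in points:
        if not centers or min(np.linalg.norm(p - c) for c in centers) > eps:
            centers.append(p)
    return np.array(centers)

def krr_predict(X, y, X_query, gamma=1.0, lam=1e-2):
    """Kernel Ridge Regression with an RBF kernel.

    Returns the predicted serving cost at each query point together with a
    GP-style uncertainty estimate that is high where training data is sparse.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K_inv = np.linalg.inv(rbf(X, X) + lam * np.eye(len(X)))
    Kq = rbf(X_query, X)
    mean = Kq @ (K_inv @ y)
    # Posterior-variance-style term: near 0 at observed queries, near 1 far away.
    var = 1.0 - np.einsum("ij,jk,ik->i", Kq, K_inv, Kq)
    return mean, np.sqrt(np.maximum(var, 0.0))
```

In this sketch, partial cost feedback observed at a few queries generalizes to nearby embeddings through the kernel, and the uncertainty term indicates which semantic neighborhoods still need exploration, which is the role these components play in the framework described above.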

Abstract

As Large Language Models (LLMs) become increasingly popular, caching responses so that they can be reused by users with semantically similar queries has become a vital strategy for reducing inference costs and latency. Existing caching frameworks decide which query responses to cache by assuming a finite, known universe of discrete queries and learning their serving costs and arrival probabilities. As LLMs' pool of users and queries expands, however, such an assumption becomes increasingly untenable: real-world LLM queries reside in an infinite, continuous embedding space. In this paper, we establish the first rigorous theoretical framework for semantic LLM response caching in continuous query space under uncertainty. To bridge the gap between discrete optimization and continuous representation spaces, we introduce dynamic ε-net discretization coupled with Kernel Ridge Regression. This design enables the system to formally quantify estimation uncertainty and generalize partial feedback on LLM query costs across continuous semantic query neighborhoods. We develop both offline learning and online adaptive algorithms optimized to reduce switching costs incurred by changing the cached responses. We prove that our online algorithm achieves a sublinear regret bound against an optimal continuous oracle, which reduces to existing bounds for discrete query models. Extensive empirical evaluations demonstrate that our framework approximates the continuous optimal cache well while also reducing computational and switching overhead compared to existing methods.
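The abstract emphasizes that changing cached responses incurs a switching cost. One common way such a cost shapes an online policy, shown here purely as an illustrative sketch rather than the paper's method, is a hysteresis rule: a candidate entry is admitted only if its estimated savings beat the weakest cached entry by more than the cost of the swap. All names and the scoring scheme below are assumptions.

```python
def update_cache(cache, candidate, est_savings, switch_cost, capacity):
    """Hysteresis admission rule for a capacity-limited response cache.

    `cache` maps cached query keys to their estimated per-period savings;
    `est_savings` holds the estimate for the candidate. The candidate evicts
    the weakest entry only when its advantage exceeds `switch_cost`, which
    suppresses churn from small fluctuations in the cost estimates.
    """
    if candidate in cache:
        cache[candidate] = est_savings  # refresh the estimate in place
        return cache
    if len(cache) < capacity:
        cache[candidate] = est_savings  # free slot: no switching needed
        return cache
    weakest = min(cache, key=cache.get)
    if est_savings > cache[weakest] + switch_cost:
        del cache[weakest]
        cache[candidate] = est_savings
    return cache
```

With `switch_cost = 0`, this degenerates to always keeping the top-k entries by estimated savings; a positive switching cost makes the cache contents sticky, trading a small amount of per-period optimality for fewer swaps, which is the trade-off the paper's regret analysis accounts for.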