KV Cache Optimization Strategies for Scalable and Efficient LLM Inference

arXiv cs.LG, March 24, 2026


Key Points

  • The paper reviews how KV caches in Transformer-based LLMs reduce redundant computation during autoregressive generation, but also create a linear memory growth bottleneck as context lengths scale to millions of tokens.
  • It categorizes KV cache optimization methods into five directions—cache eviction, cache compression, hybrid memory, novel attention mechanisms, and combined approaches—and evaluates their trade-offs across memory use, throughput, and accuracy.
  • The analysis shows that no single technique is universally superior; performance depends strongly on deployment context, with the best approach varying by context length, hardware limits, and workload characteristics.
  • It maps techniques to seven practical scenarios, including long-context requests, high-throughput datacenter serving, edge deployment, multi-turn chats, and accuracy-critical reasoning, offering guidance for selecting strategies.
  • The authors conclude that adaptive, multi-stage optimization pipelines are a promising direction to handle diverse workloads and constraints in real deployments.
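The linear memory growth described in the first bullet can be made concrete with a back-of-the-envelope calculation. The sketch below uses illustrative model dimensions roughly matching a 7B-class Transformer; these numbers are assumptions for illustration, not figures taken from the paper:

```python
def kv_cache_bytes(num_layers: int, num_heads: int, head_dim: int,
                   seq_len: int, batch: int = 1, dtype_bytes: int = 2) -> int:
    """Memory footprint of a dense KV cache.

    The factor of 2 accounts for storing one key tensor and one
    value tensor per layer; dtype_bytes=2 assumes fp16/bf16.
    """
    return 2 * num_layers * num_heads * head_dim * seq_len * batch * dtype_bytes

# Illustrative dims (assumed): 32 layers, 32 heads, head_dim 128, fp16.
# At a 1M-token context, the cache alone would need roughly 488 GiB,
# far beyond a single GPU's memory -- the bottleneck the paper targets.
gib_at_1m = kv_cache_bytes(32, 32, 128, seq_len=1_000_000) / 2**30
```

This also shows why the footprint is independent of model quality: it grows with `seq_len` regardless of how many of those cached tokens actually matter for future attention, which is the opening that eviction and compression methods exploit.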

Abstract

The key-value (KV) cache is a foundational optimization in Transformer-based large language models (LLMs), eliminating redundant recomputation of past token representations during autoregressive generation. However, its memory footprint scales linearly with context length, imposing critical bottlenecks on GPU memory capacity, memory bandwidth, and inference throughput as production LLMs push context windows from thousands to millions of tokens. Efficient KV cache management has thus become a first-order challenge for scalable LLM deployment. This paper provides a systematic review of recent KV cache optimization techniques, organizing them into five principal directions: cache eviction, cache compression, hybrid memory solutions, novel attention mechanisms, and combination strategies. For each category we analyze the underlying mechanisms, deployment trade-offs, and empirical performance across memory reduction, throughput, and model accuracy metrics. We further map techniques to seven practical deployment scenarios, including long-context single requests, high-throughput datacenter serving, edge devices, multi-turn conversations, and accuracy-critical reasoning, providing actionable guidance for practitioners selecting among competing approaches. Our analysis reveals that no single technique dominates across all settings; instead, the optimal strategy depends on context length, hardware constraints, and workload characteristics, pointing toward adaptive, multi-stage optimization pipelines as a promising direction for future research.
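To give a flavor of the cache-eviction family the survey covers, here is a minimal sketch of one common heuristic: retaining a few early "sink" tokens plus a recent sliding window, and discarding everything in between. The function name, parameters, and default values are illustrative assumptions, not the paper's specific method:

```python
def evict_sink_window(cache: list, num_sink: int = 4, window: int = 1024) -> list:
    """Evict middle entries from a KV cache, oldest-first list of entries.

    Keeps the first `num_sink` entries (early tokens that often receive
    disproportionate attention) and the most recent `window` entries,
    bounding cache size at num_sink + window regardless of context length.
    """
    if len(cache) <= num_sink + window:
        return cache  # under budget: nothing to evict
    return cache[:num_sink] + cache[-window:]

# A 2000-entry cache is cut to 4 sink + 1024 recent = 1028 entries.
pruned = evict_sink_window(list(range(2000)))
```

The trade-off this sketch makes visible is exactly the one the paper evaluates: memory becomes constant in context length, but any evicted token is unrecoverable, which can hurt accuracy on tasks that later attend to the discarded middle of the context.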