KV Packet: Recomputation-Free Context-Independent KV Caching for LLMs

arXiv cs.AI / 4/16/2026


Key Points

  • The paper addresses the problem that conventional KV caches in LLM inference are context-dependent, forcing costly KV recomputation when reusing cached documents in new contexts.
  • It proposes KV Packet, a recomputation-free cache reuse framework that treats cached documents as immutable “packets” augmented with lightweight trainable soft-token adapters.
  • The adapters are trained using self-supervised distillation to bridge attention/distribution discontinuities caused by context changes.
  • Experiments on Llama-3.1 and Qwen2.5 show near-zero additional FLOPs and improved time-to-first-token (TTFT) versus recomputation-based methods.
  • The approach maintains task performance, achieving F1 scores comparable to full recomputation baselines while reducing overhead.
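The core mechanism in the key points above can be illustrated with a toy single-head attention example: the document's KV states are computed once and frozen, and at reuse time the model attends over the adapter's KV entries concatenated with the frozen packet, so no document-token FLOPs are spent. This is a minimal sketch, not the paper's implementation; the random adapter values stand in for the learned soft tokens, and all shapes and names are illustrative.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention for a single head.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

d = 8  # head dimension (illustrative)
rng = np.random.default_rng(0)

# Precompute the document's KV states once: the immutable "packet".
doc_k = rng.normal(size=(5, d))
doc_v = rng.normal(size=(5, d))

# Lightweight soft-token adapter KV entries. Here they are random
# stand-ins; in the paper they would be trained via distillation.
adapter_k = rng.normal(size=(2, d))
adapter_v = rng.normal(size=(2, d))

# A query in a NEW context attends over [adapter | frozen packet],
# reusing doc_k/doc_v verbatim -- zero recomputation of the document.
q = rng.normal(size=(1, d))
out = attention(q,
                np.concatenate([adapter_k, doc_k]),
                np.concatenate([adapter_v, doc_v]))
print(out.shape)  # (1, 8)
```

The design point the sketch captures is that only the two adapter entries are trainable and context-specific; the five-token packet is reused byte-for-byte across contexts.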

Abstract

Large Language Models (LLMs) rely heavily on Key-Value (KV) caching to minimize inference latency. However, standard KV caches are context-dependent: reusing a cached document in a new context requires recomputing its KV states to account for shifts in the attention distribution. Existing solutions such as CacheBlend, EPIC, and SAM-KV mitigate this issue by selectively recomputing a subset of tokens, but they still incur non-negligible computational overhead (FLOPs) and increased Time-to-First-Token (TTFT) latency. In this paper, we propose KV Packet, a recomputation-free cache reuse framework that treats cached documents as immutable “packets” wrapped in lightweight trainable soft-token adapters, which are trained via self-supervised distillation to bridge context discontinuities. Experiments on Llama-3.1 and Qwen2.5 demonstrate that KV Packet achieves near-zero additional FLOPs and lower TTFT than recomputation-based baselines, while retaining F1 scores comparable to those of the full-recomputation baseline.
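The abstract says the adapters are trained via self-supervised distillation but does not spell out the objective. A common form of such an objective, shown here purely as a hypothetical sketch, is a KL divergence that pulls the student's next-token distribution (frozen packet plus adapter) toward the teacher's (full recomputation). The logits below are random placeholders, not model outputs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-9):
    # KL(p || q): the standard distillation loss between distributions.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

rng = np.random.default_rng(1)
vocab = 16  # toy vocabulary size

# Teacher: next-token logits from a full KV recomputation (placeholder).
teacher_logits = rng.normal(size=(1, vocab))
# Student: the same model reusing the frozen packet + adapter (placeholder).
student_logits = rng.normal(size=(1, vocab))

# The adapter parameters would be updated to minimize this loss,
# closing the attention/distribution gap caused by the context change.
loss = kl_div(softmax(teacher_logits), softmax(student_logits))
print(loss >= 0.0)
```

Because the teacher is the model's own output under full recomputation, no labels are needed, which matches the "self-supervised" framing in the abstract.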