An experimental study of KV cache reuse strategies in chunk-level caching systems

arXiv cs.CL, March 24, 2026


Key Points

  • The paper studies chunk-level caching (CLC) for retrieval-augmented generation, where KV caches are precomputed for retrieved text chunks to speed up LLM inference.
  • It finds that existing CLC approaches can have fundamental limitations because KV caches may fail to capture cross-attention dependencies between chunks, potentially hurting output quality.
  • The authors provide extensive experimental evaluation of current CLC system designs to quantify accuracy limits and applicability constraints.
  • They also conclude that different CLC techniques can be complementary, and they propose a redesigned CLC approach that combines them to improve accuracy.

Abstract

Retrieval-augmented generation improves the accuracy of large language models by adding relevant retrieved text to the prompt. Chunk-level caching (CLC) accelerates inference by precomputing KV caches for these retrieved chunks and reusing them. However, these caches miss cross-attention dependencies between chunks, which can reduce output quality. Several methods try to improve CLC accuracy using different techniques. We make two main contributions. First, we show that existing CLC approaches have fundamental limitations that constrain either their accuracy or their applicability, and we support this conclusion with an extensive experimental evaluation of CLC systems. Second, we observe that existing CLC techniques are complementary, and we leverage this insight to propose a new CLC design that carefully combines them and achieves better accuracy.
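To make the accuracy limitation concrete, here is a minimal NumPy sketch (not from the paper; the single toy attention layer with identity Q/K/V projections is an illustrative assumption). In a real transformer, a chunk's hidden states, and therefore its KV cache at every layer after the first, depend on attention over the preceding context. Precomputing a chunk in isolation, as CLC does, drops those cross-chunk dependencies:

```python
import numpy as np

def causal_attn(x):
    # Toy single-head causal self-attention (identity projections).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    mask = np.tril(np.ones(scores.shape, dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

rng = np.random.default_rng(0)
chunk_a = rng.normal(size=(4, 8))  # first retrieved chunk (4 tokens)
chunk_b = rng.normal(size=(4, 8))  # second retrieved chunk (4 tokens)

# Full prefill: chunk B's tokens attend to chunk A's tokens.
full = causal_attn(np.concatenate([chunk_a, chunk_b]))

# Chunk-level caching: each chunk's states (hence its KV cache at
# the next layer) are precomputed in isolation and then concatenated.
clc = np.concatenate([causal_attn(chunk_a), causal_attn(chunk_b)])

# Chunk A matches exactly (causal masking means it never saw B),
# but chunk B diverges: CLC dropped its attention onto chunk A.
print(np.allclose(full[:4], clc[:4]))  # True
print(np.allclose(full[4:], clc[4:]))  # False
```

The first chunk is always exact under causal masking, which is why prefix caching is lossless; the error appears only in later chunks, matching the paper's observation that reused caches "miss cross-attention dependencies between chunks."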