SemantiCache: Efficient KV Cache Compression via Semantic Chunking and Clustered Merging

arXiv cs.CL / 3/17/2026

Key Points

  • SemantiCache proposes a semantic-aware KV cache compression framework that preserves semantic integrity by aligning compression with language structure.
  • It partitions the cache into semantically coherent chunks at natural semantic boundaries, then applies a Greedy Seed-Based Clustering (GSC) algorithm within each chunk to form semantic clusters.
  • The clusters are merged into semantic cores and enhanced with a Proportional Attention mechanism to rebalance attention after merging.
  • Empirical results show decoding speedups up to 2.61x and substantial memory footprint reductions, with performance comparable to the original model.
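The chunking and clustering steps above can be sketched as follows. The paper summary does not specify the delimiter set, similarity measure, or threshold, so everything below (delimiter choice, cosine similarity on key vectors, the greedy absorb-around-a-seed loop) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def chunk_by_delimiters(tokens, delimiters=(".", ",", ";", "?", "!")):
    """Split a token sequence into chunks at delimiter tokens,
    treating delimiters as natural semantic boundaries
    (hypothetical reading of the paper's chunking step)."""
    chunks, current = [], []
    for i, tok in enumerate(tokens):
        current.append(i)
        if tok in delimiters:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

def greedy_seed_clustering(keys, threshold=0.9):
    """One plausible form of Greedy Seed-Based Clustering (GSC):
    take the first unassigned token's key as a seed, absorb every
    unassigned key whose cosine similarity to the seed exceeds
    `threshold`, and repeat until all tokens are clustered."""
    norms = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    unassigned = list(range(keys.shape[0]))
    clusters = []
    while unassigned:
        seed = unassigned[0]
        sims = norms[unassigned] @ norms[seed]
        members = [unassigned[j] for j, s in enumerate(sims) if s >= threshold]
        clusters.append(sorted(members))
        unassigned = [i for i in unassigned if i not in members]
    return clusters
```

The greedy pass visits each token at most a handful of times, which is consistent with the summary's claim that GSC is computationally efficient.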

Abstract

Existing KV cache compression methods generally operate on discrete tokens or non-semantic chunks. However, such approaches often lead to semantic fragmentation, where linguistically coherent units are disrupted, causing irreversible information loss and degraded model performance. To address this, we introduce SemantiCache, a novel compression framework that preserves semantic integrity by aligning the compression process with the hierarchical semantic structure of language. Specifically, we first partition the cache into semantically coherent chunks at delimiters, which serve as natural semantic boundaries. Within each chunk, we introduce a computationally efficient Greedy Seed-Based Clustering (GSC) algorithm to group tokens into semantic clusters. These clusters are further merged into semantic cores, enhanced by a Proportional Attention mechanism that rebalances the reduced attention contributions of the merged tokens. Extensive experiments across diverse benchmarks and models demonstrate that SemantiCache accelerates the decoding stage of inference by up to 2.61 times and substantially reduces memory footprint, while maintaining performance comparable to the original model.
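The merge-and-rebalance step can be illustrated with a minimal sketch. The abstract does not say how clusters are pooled into cores, so mean-pooling is an assumption here; the log-cluster-size bias on the attention logits is the standard "proportional attention" trick from the token-merging literature, used as a plausible stand-in for the paper's mechanism:

```python
import numpy as np

def merge_clusters(keys, values, clusters):
    """Collapse each cluster of KV entries into one 'semantic core'
    by mean-pooling keys and values (assumed merge rule), and record
    cluster sizes for attention rebalancing."""
    core_k = np.stack([keys[c].mean(axis=0) for c in clusters])
    core_v = np.stack([values[c].mean(axis=0) for c in clusters])
    sizes = np.array([len(c) for c in clusters], dtype=np.float64)
    return core_k, core_v, sizes

def proportional_attention(query, core_k, core_v, sizes):
    """Single-query attention over merged cores. Adding log(size) to
    each core's logit restores the softmax mass the merged tokens
    would have contributed individually."""
    d = query.shape[-1]
    logits = core_k @ query / np.sqrt(d) + np.log(sizes)
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w /= w.sum()
    return w @ core_v
```

Because a core standing for m tokens receives an extra log(m) in its logit, the softmax treats it as m identical entries, counteracting the attention-mass loss the abstract attributes to merging.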