TTKV: Temporal-Tiered KV Cache for Long-Context LLM Inference

arXiv cs.LG / 4/23/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • KV caching is a key technique for LLM inference, but its memory cost grows linearly with context length, creating a major scalability bottleneck.
  • The paper proposes TTKV, which borrows ideas from human memory by treating KV states of different ages as having different importance and accessibility requirements.
  • TTKV partitions the KV cache into temporal tiers with different capacities and precisions, using HBM/DRAM separation to implement fast vs. slow storage.
  • It also introduces block-wise streaming attention to overlap communication and computation when accessing slower tiers.
  • Experiments on 128K-context tasks show a 5.94× reduction in cross-tier traffic, up to 76% lower latency, and 2× higher throughput versus strong baselines.
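To make the tiering idea concrete, here is a minimal sketch of how a KV cache might be split by temporal proximity into a full-precision "fast" tier (HBM) and a quantized "slow" tier (DRAM). The 25% split, the symmetric 8-bit quantizer, and all names below are illustrative assumptions, not the paper's exact policy.

```python
def partition_kv(kv, recent_frac=0.25):
    """Split a per-head KV list along the time axis.

    Recent tokens stay at full precision (the fast HBM tier in TTKV's
    layout); older tokens are stored quantized (the slow DRAM tier).
    kv: list of per-token vectors (list[float]). Illustrative sketch.
    """
    t = len(kv)
    split = t - max(1, int(t * recent_frac))   # older tokens -> slow tier
    old, recent = kv[:split], kv[split:]
    # Symmetric per-tier 8-bit quantization: x -> round(x / scale) in [-127, 127].
    amax = max((abs(x) for row in old for x in row), default=0.0)
    scale = amax / 127.0 if amax else 1.0
    old_q = [[max(-127, min(127, round(x / scale))) for x in row] for row in old]
    return {"fast": recent, "slow": {"codes": old_q, "scale": scale}}

def dequantize(slow):
    """Reconstruct approximate slow-tier values before attention."""
    return [[c * slow["scale"] for c in row] for row in slow["codes"]]
```

In this sketch the slow tier trades precision for footprint (8-bit codes plus one scale), which is the kind of capacity/precision heterogeneity the tier layout describes; the real system would also choose tier sizes and eviction policies dynamically.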

Abstract

Key-value (KV) caching is critical for efficient inference in large language models (LLMs), yet its memory footprint scales linearly with context length, resulting in a severe scalability bottleneck. Existing approaches largely treat KV states as equally important across time, implicitly assuming uniform precision and accessibility. However, this assumption contrasts with human memory systems, where memories vary in clarity, recall frequency, and relevance with temporal proximity. Motivated by this insight, we propose TTKV, a KV cache management framework that maps the human memory system onto the KV cache. TTKV partitions the KV cache into temporal tiers with heterogeneous capacity and precision. The design addresses three aspects: (1) Tier Layout, decoupling fast and slow memory using HBM and DRAM; (2) Tier Content, assigning more recent KV states to faster, higher-precision tiers based on temporal proximity; and (3) Tier Interaction, employing block-wise streaming attention to overlap communication and computation when accessing slow tiers. Experiments show that TTKV reduces cross-tier traffic by 5.94x on 128K-context tasks, achieving up to 76% latency reduction and 2x throughput improvement over strong baselines.