
Where Matters More Than What: Decoding-aligned KV Cache Compression via Position-aware Pseudo Queries

arXiv cs.CL / 3/13/2026

📰 News · Models & Research

Key Points

  • The paper identifies that KV cache memory usage grows with context length and proposes DapQ, which compresses the cache by evicting tokens according to decoding-aligned, position-aware pseudo queries.
  • It shows that positional information matters more than semantic content when constructing the pseudo queries, enabling an observation window that mirrors the decoding process (see the sketch after this list).
  • DapQ simulates output tokens to align the observation window with generation, allowing more accurate token eviction during inference.
  • Experimental results across multiple benchmarks and LLMs demonstrate strong gains under tight memory budgets, including near-lossless performance (99.5%) on NIAH with a 3% KV cache budget.
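
The central idea is to place pseudo queries at the positions that future output tokens will occupy. The snippet below is a minimal illustration of that idea, not the paper's implementation: it assumes RoPE-style positional encoding, and the single reused `anchor_query` vector (e.g., averaged from the last prompt queries) is a hypothetical stand-in for however DapQ fills in the semantic content of its pseudo queries.

```python
import numpy as np

def rope_rotate(vec: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding (RoPE, rotate-half form) to a 1-D query at position `pos`."""
    d = vec.shape[-1]
    half = d // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))
    angles = pos * inv_freq                      # (half,)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = vec[:half], vec[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

def build_pseudo_queries(anchor_query: np.ndarray, prompt_len: int, window: int) -> np.ndarray:
    """Build position-aware pseudo queries at the *future* decoding positions.

    anchor_query : un-rotated query vector reused for every simulated output token
                   (illustrative choice; the paper's construction may differ).
    prompt_len   : number of prompt tokens already held in the KV cache.
    window       : number of future decoding steps to simulate.
    """
    positions = prompt_len + np.arange(window)   # positions the decoder will actually occupy
    return np.stack([rope_rotate(anchor_query, int(p)) for p in positions])
```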

Abstract

The Key-Value (KV) cache is crucial for efficient Large Language Model (LLM) inference, but excessively long contexts drastically increase the KV cache memory footprint. Existing KV cache compression methods typically rely on input-side attention patterns within a prompt observation window to estimate token importance during the prefill stage. Because these assessments are not derived from the decoding process, they fail to preserve tokens that are critical for future generation. Intuitively, an effective observation window should mirror the decoding-stage queries in order to accurately reflect which tokens the generation process will attend to. However, ground-truth decoding queries are inherently unavailable during inference. When constructing pseudo queries to approximate them, we find that positional information plays a more critical role than semantic content. Motivated by this insight, we propose decoding-aligned KV cache compression via position-aware pseudo queries (DapQ), a novel and lightweight eviction framework that leverages position-aware pseudo queries to simulate the output tokens, thereby establishing an effective observation window for importance assessment. This window aligns closely with the actual generation context and enables precise token eviction. Extensive evaluations across multiple benchmarks and LLMs demonstrate that DapQ achieves superior performance, particularly under strict memory constraints (e.g., near-lossless performance of 99.5% on NIAH with a 3% KV cache budget).
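
For intuition, here is a sketch of how decoding-aligned pseudo queries could drive eviction under a fixed KV budget. The aggregation (a simple mean over the simulated window) and the top-k retention rule are assumptions made for illustration; the paper's actual importance assessment may differ.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def select_kv_to_keep(pseudo_q: np.ndarray, keys: np.ndarray, budget_ratio: float = 0.03) -> np.ndarray:
    """Score cached tokens by attention from the decoding-aligned observation window
    and keep only the top `budget_ratio` fraction of them.

    pseudo_q : (window, d) position-aware pseudo queries simulating future output tokens
    keys     : (seq_len, d) cached key vectors from the prefill stage
    returns  : indices of tokens to retain in the KV cache
    """
    d = keys.shape[-1]
    attn = softmax(pseudo_q @ keys.T / np.sqrt(d), axis=-1)   # (window, seq_len)
    importance = attn.mean(axis=0)                            # aggregate over the simulated window
    budget = max(1, int(budget_ratio * keys.shape[0]))
    return np.argsort(importance)[-budget:]                   # indices of the highest-scoring tokens
```

With `budget_ratio=0.03`, this mirrors the 3% KV cache budget setting reported for NIAH.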