DASH-KV: Accelerating Long-Context LLM Inference via Asymmetric KV Cache Hashing

arXiv cs.CL, April 22, 2026


Key Points

  • The paper proposes DASH-KV to speed up long-context LLM inference by reducing the quadratic cost of standard attention.
  • It reformulates attention as approximate nearest-neighbor search using asymmetric deep hashing, with an encoding design that treats queries and keys differently.
  • A dynamic mixed-precision mechanism selectively preserves full-precision computation for critical tokens to balance efficiency and output quality.
  • On the LongBench benchmark, DASH-KV substantially outperforms prior KV-cache compression baselines, matches full-attention quality, and reduces inference complexity from O(N^2) to O(N).
  • The authors provide an implementation at the linked GitHub repository to support replication and adoption.
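The core idea in the second bullet can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: a single random sign projection stands in for the paper's learned asymmetric encoders (which map queries and keys through different networks), and Hamming distance over the binary codes serves as the approximate nearest-neighbor score used to shortlist keys before exact attention.

```python
# Sketch: attention as approximate nearest-neighbor search over hash codes.
# Hypothetical stand-in for DASH-KV's learned encoders, for intuition only.
import numpy as np

def sign_hash(x, proj):
    """Binary codes via random sign projections (the paper instead learns
    separate, asymmetric encoders for queries and keys)."""
    return (x @ proj > 0).astype(np.uint8)

def hashed_topk_attention(q, K, V, proj, k=4):
    q_code = sign_hash(q, proj)             # query-side code (per step)
    K_codes = sign_hash(K, proj)            # key-side codes (cacheable)
    ham = (q_code ^ K_codes).sum(axis=1)    # Hamming distance to each key
    idx = np.argsort(ham)[:k]               # approximate top-k neighbors
    scores = q @ K[idx].T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max())       # softmax over the shortlist only
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(0)
d, n, bits = 16, 64, 32
proj = rng.standard_normal((d, bits))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
q = K[7] + 0.01 * rng.standard_normal(d)    # query near key 7
out = hashed_topk_attention(q, K, V, proj, k=4)
```

Because the Hamming scoring and top-k selection touch each key once with bitwise operations, the per-step cost is linear in the context length, which is the source of the O(N) claim.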

Abstract

The quadratic computational complexity of the standard attention mechanism constitutes a fundamental bottleneck for large language models in long-context inference. While existing KV cache compression methods alleviate memory pressure, they often sacrifice generation quality and fail to address the high overhead of floating-point arithmetic. This paper introduces DASH-KV, an innovative acceleration framework that reformulates attention as approximate nearest-neighbor search via asymmetric deep hashing. Under this paradigm, we design an asymmetric encoding architecture that differentially maps queries and keys to account for their distinctions in precision and reuse characteristics. To balance efficiency and accuracy, we further introduce a dynamic mixed-precision mechanism that adaptively retains full-precision computation for critical tokens. Extensive experiments on LongBench demonstrate that DASH-KV significantly outperforms state-of-the-art baseline methods while matching the performance of full attention, all while reducing inference complexity from O(N^2) to linear O(N). The code is available at https://github.com/Zhihan-Zh/DASH-KV.
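The dynamic mixed-precision mechanism mentioned in the abstract can be illustrated as follows. The function name, the int8 quantization scheme, and the fixed critical-token fraction are my assumptions for the sketch; the paper's actual criterion for flagging critical tokens may differ. The pattern shown is: score all keys cheaply in low precision, then recompute only the highest-scoring ("critical") tokens in full precision.

```python
# Hypothetical sketch of a dynamic mixed-precision score path (not the
# paper's implementation): int8 scores for all tokens, fp32 for critical ones.
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization; returns codes and scale."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.round(x / scale).astype(np.int8), scale

def mixed_precision_scores(q, K, critical_frac=0.25):
    Kq, scale = quantize_int8(K)
    approx = (q @ Kq.T.astype(np.float32)) * scale   # cheap int8-based scores
    n_crit = max(1, int(critical_frac * K.shape[0]))
    critical = np.argsort(-approx)[:n_crit]          # tokens flagged critical
    scores = approx.copy()
    scores[critical] = q @ K[critical].T             # recompute in fp32
    return scores, critical

rng = np.random.default_rng(1)
K = rng.standard_normal((64, 16))
q = rng.standard_normal(16)
scores, critical = mixed_precision_scores(q, K)
```

The trade-off this encodes: most tokens pay only the quantized cost, while the small critical subset keeps exact arithmetic, which is how such schemes aim to preserve output quality at reduced compute.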