SAW-INT4: System-Aware 4-Bit KV-Cache Quantization for Real-World LLM Serving

arXiv cs.LG / April 22, 2026


Key Points

  • The paper argues that KV-cache memory is a key bottleneck for real-world LLM serving, especially when systems must handle both low-latency small batches and high-throughput concurrent requests.
  • It identifies the minimal set of 4-bit KV-cache quantization techniques that remain practical under deployment constraints such as paged memory layouts, regular memory access, and fused attention execution.
  • The main recommendation is token-wise INT4 quantization combined with block-diagonal Hadamard rotation, which achieves the best accuracy–efficiency trade-off across multiple models and benchmarks.
  • The authors implement a fused rotation-quantization kernel integrated into paged KV-cache layouts, reporting zero measurable end-to-end overhead and matching plain INT4 throughput at different concurrency levels.
  • Overall, the work frames KV-cache compression as a systems co-design problem, showing that lightweight Hadamard rotation can deliver near-lossless accuracy without harming serving efficiency.
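The recommended design in the key points above can be sketched in a few lines. The NumPy snippet below (all function names, the block size of 16, and the synthetic outlier channel are illustrative assumptions, not the paper's implementation) rotates each token's key vector with a block-diagonal Hadamard matrix before symmetric per-token INT4 quantization. Because the normalized Sylvester Hadamard matrix is symmetric and orthonormal, applying the same rotation again undoes it.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal and symmetric, so H @ H = I

def rotate_blockwise(x: np.ndarray, block: int = 16) -> np.ndarray:
    # Block-diagonal rotation: each contiguous group of `block` channels
    # is multiplied by the same small Hadamard matrix.
    tokens, dim = x.shape
    H = hadamard(block)
    return (x.reshape(tokens, dim // block, block) @ H).reshape(tokens, dim)

def quantize_int4_tokenwise(x: np.ndarray):
    # Symmetric per-token INT4: one scale per row, codes in [-8, 7].
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0.0, 1.0, scale)
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
k = rng.normal(size=(8, 64)).astype(np.float32)
k[:, 3] *= 20.0  # a synthetic outlier channel, common in KV activations

# Plain token-wise INT4: the outlier inflates every token's scale.
q0, s0 = quantize_int4_tokenwise(k)
err_plain = np.mean((k - q0 * s0) ** 2)

# Rotate, quantize, dequantize, rotate back (H is its own inverse).
q1, s1 = quantize_int4_tokenwise(rotate_blockwise(k))
err_rot = np.mean((k - rotate_blockwise(q1 * s1)) ** 2)
```

With the outlier channel present, the rotated variant typically shows a much smaller reconstruction error, since the rotation spreads the outlier's energy across its block and shrinks each token's quantization scale. This is the accuracy-recovery effect the paper attributes to Hadamard rotation.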

Abstract

KV-cache memory is a major bottleneck in real-world LLM serving, where systems must simultaneously support latency-sensitive small-batch requests and high-throughput concurrent workloads. Although many KV-cache compression methods improve offline accuracy or compression ratio, they often violate practical serving constraints such as paged memory layouts, regular memory access, and fused attention execution, limiting their effectiveness in deployment. In this work, we identify the minimal set of 4-bit KV-cache quantization methods that remain viable under these constraints. Our central finding is that a simple design, token-wise INT4 quantization with block-diagonal Hadamard rotation, consistently achieves the best accuracy–efficiency trade-off. Across multiple models and benchmarks, this approach recovers nearly all of the accuracy lost by naive INT4, while more complex methods such as vector quantization and Hessian-aware quantization provide only marginal additional gains once serving compatibility is taken into account. To make this practical, we implement a fused rotation-quantization kernel that integrates directly into paged KV-cache layouts and introduces zero measurable end-to-end overhead, matching plain INT4 throughput across concurrency levels. Our results show that effective KV-cache compression is fundamentally a systems co-design problem: under real serving constraints, lightweight block-diagonal Hadamard rotation is a viable method that delivers near-lossless accuracy without sacrificing serving efficiency.
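The serving-compatibility constraint the abstract emphasizes is largely a layout question: INT4 codes must be stored two to a byte inside fixed-size pages so a fused kernel can read them with regular, coalesced accesses. The NumPy sketch below illustrates such nibble packing under assumptions of our own (the low-nibble-first order and the page size are illustrative; the paper's actual kernel layout is not specified here).

```python
import numpy as np

def pack_int4(q: np.ndarray) -> np.ndarray:
    # Pack pairs of signed 4-bit codes into one uint8 (low nibble = even index).
    u = (q.astype(np.int16) & 0xF).astype(np.uint8)  # two's-complement nibbles
    return u[..., 0::2] | (u[..., 1::2] << 4)

def unpack_int4(p: np.ndarray) -> np.ndarray:
    lo = (p & 0xF).astype(np.int8)
    hi = ((p >> 4) & 0xF).astype(np.int8)
    # Sign-extend the 4-bit two's-complement nibbles back to int8.
    lo = np.where(lo >= 8, lo - 16, lo).astype(np.int8)
    hi = np.where(hi >= 8, hi - 16, hi).astype(np.int8)
    out = np.empty(p.shape[:-1] + (2 * p.shape[-1],), dtype=np.int8)
    out[..., 0::2] = lo
    out[..., 1::2] = hi
    return out

PAGE_TOKENS = 16  # tokens per KV-cache page (illustrative, vLLM-style block size)
HEAD_DIM = 64
rng = np.random.default_rng(1)
codes = rng.integers(-8, 8, size=(PAGE_TOKENS, HEAD_DIM), dtype=np.int8)
page = pack_int4(codes)  # shape (16, 32): half the bytes of the int8 codes
```

Keeping each page a dense, fixed-size block of packed bytes is what lets the rotation-quantization step fuse into the attention kernel without irregular gathers, which is the property the authors report as zero measurable end-to-end overhead.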