CSAttention: Centroid-Scoring Attention for Accelerating LLM Inference

arXiv cs.AI / 4/13/2026


Key Points

  • The paper targets long-context LLM inference, where attention and the KV cache become the dominant decode-time bottlenecks, especially in workloads that reuse long prefill prompts, such as agents and domain Q&A.
  • It introduces Centroid-Scoring Attention (CSAttention), a training-free sparse attention approach that reduces per-token decoding cost by shifting work to a one-time offline prefill phase.
  • CSAttention builds fixed-size, query-centric lookup tables during offline prefill, so online decoding can replace full-context scans with fast table lookups and GPU-friendly score accumulation (an illustrative sketch follows this list).
  • Experiments on 32K–128K contexts show CSAttention matches full-attention accuracy almost exactly, even at high sparsity (95%).
  • The method delivers up to a 4.6× inference speedup over the most accurate baseline at a 128K context length, while outperforming other sparse attention methods on both accuracy and latency.

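To make the lookup-table idea concrete, here is a minimal sketch of what the offline step *could* look like: cached key vectors are clustered into a fixed number of centroids, and a table records which key positions belong to each centroid. The clustering rule (dot-product k-means), the function name `build_centroid_table`, and all parameters are assumptions for illustration; the paper's actual query-centric table construction may differ.

```python
# Illustrative only: a centroid index over cached keys, built once at offline
# prefill time. Function and parameter names are assumptions, not from the paper.
import numpy as np

def build_centroid_table(keys: np.ndarray, n_centroids: int = 64, n_iters: int = 10):
    """Cluster cached key vectors into centroids and record, for each centroid,
    the key positions assigned to it.

    keys: (seq_len, head_dim) cached keys for one attention head.
    Returns (centroids, assignments), where assignments[c] is an index array."""
    seq_len, _ = keys.shape
    rng = np.random.default_rng(0)
    # Initialize centroids from randomly chosen keys (assumes seq_len >= n_centroids).
    centroids = keys[rng.choice(seq_len, size=n_centroids, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each key to its highest-scoring centroid (dot-product scoring).
        sims = keys @ centroids.T                # (seq_len, n_centroids)
        labels = sims.argmax(axis=1)
        # Recompute each centroid as the mean of its assigned keys.
        for c in range(n_centroids):
            members = keys[labels == c]
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)
    assignments = [np.where(labels == c)[0] for c in range(n_centroids)]
    return centroids, assignments
```

Because this step runs once per reusable context, its cost is amortized over every query that later decodes against that context.
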
Abstract

Long-context LLMs increasingly rely on extended, reusable prefill prompts for agents and domain Q&A, pushing attention and KV-cache to become the dominant decode-time bottlenecks. While sparse attention reduces computation and transfer costs, it often struggles to maintain accuracy at high sparsity levels due to the inherent distribution shift between Queries and Keys. We propose Centroid-Scoring Attention (CSAttention), a training-free sparse attention method optimized for high-throughput serving of reusable contexts. CSAttention adopts a storage-for-computation strategy tailored to the offline-prefill/online-decode setting: it front-loads computation into a one-time offline prefill phase that can be amortized across multiple queries, while aggressively optimizing per-step decoding latency. Specifically, CSAttention constructs query-centric lookup tables during offline prefill, whose size remains fixed during decoding, and enables online decoding to replace full-context scans with efficient table lookups and GPU-friendly score accumulation. Extensive experiments demonstrate that CSAttention achieves near-identical accuracy to full attention. Under high sparsity (95%) and long-context settings (32K-128K), CSAttention consistently outperforms state-of-the-art sparse attention methods in both model accuracy and inference speed, achieving up to 4.6x inference speedup over the most accurate baseline at a context length of 128K.
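
On the decoding side, a matching sketch of the general idea: each new query is scored against the small set of centroids rather than every cached key, the top-scoring clusters' positions are read from the table, and softmax attention is computed only over that subset. The function name `centroid_sparse_attention`, the top-cluster budget, and the scoring rule are again illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative only: decode-time sparse attention driven by centroid scores.
import numpy as np

def centroid_sparse_attention(q, keys, values, centroids, assignments, top_c=8):
    """q: (head_dim,); keys/values: (seq_len, head_dim);
    centroids: (n_centroids, head_dim); assignments[c]: key positions in cluster c."""
    # Score the query against centroids instead of scanning the full context.
    centroid_scores = centroids @ q              # (n_centroids,)
    top_clusters = np.argsort(centroid_scores)[-top_c:]
    # Table lookup: gather only the keys/values of the selected clusters.
    idx = np.concatenate([assignments[c] for c in top_clusters])
    k_sel, v_sel = keys[idx], values[idx]
    # Standard softmax attention over the selected subset.
    logits = (k_sel @ q) / np.sqrt(q.shape[-1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ v_sel

if __name__ == "__main__":
    # Tiny self-contained demo with random data and a stand-in clustering
    # (random labels) in place of the offline step sketched above.
    rng = np.random.default_rng(1)
    n, d, n_centroids = 4096, 64, 64
    keys = rng.standard_normal((n, d)).astype(np.float32)
    values = rng.standard_normal((n, d)).astype(np.float32)
    labels = rng.integers(0, n_centroids, size=n)
    centroids = np.stack([keys[labels == c].mean(axis=0) for c in range(n_centroids)])
    assignments = [np.where(labels == c)[0] for c in range(n_centroids)]
    q = rng.standard_normal(d).astype(np.float32)
    out = centroid_sparse_attention(q, keys, values, centroids, assignments)
    print(out.shape)  # (64,)
```

In this sketch, per-token work scales with the number of centroids plus the selected keys rather than the full context length, which is the kind of trade the abstract describes: storage and one-time prefill computation in exchange for cheaper per-step decoding.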