Salca: A Sparsity-Aware Hardware Accelerator for Efficient Long-Context Attention Decoding

arXiv cs.AI / 4/29/2026


Key Points

  • The paper addresses a key bottleneck in long-context LLM inference: during decoding, attention repeatedly accesses large KV caches, causing bandwidth and compute pressure that grows with sequence length.
  • It proposes a hardware-software co-design centered on dual-compression dynamic sparse attention, which combines ultra-low-precision quantization with feature sparsity to minimize the overhead of predicting important tokens (a minimal sketch follows this list).
  • To further cut complexity, it uses a hardware-friendly approximate Top-K selection that reduces the filtering cost from O(n log k) to O(n) (see the second sketch below).
  • The custom accelerator deeply optimizes compute and memory access, guided by a performance model, and adopts a fully pipelined parallel architecture that maintains O(n) efficiency for long sequences.
  • Experiments report a 3.82× speedup and 74.19× better energy efficiency versus an A100, and the authors claim the first ASIC accelerator for efficient long-context inference, with at least 3.5× higher throughput and at least 2.08× better energy efficiency than prior state-of-the-art accelerators.
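
To make the dual-compression idea concrete, the sketch below estimates per-token attention scores against a feature-sparsified, low-bit-quantized copy of the K cache and returns a candidate token set for exact attention. The function name, channel count, bit width, and candidate-set size are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def predict_important_tokens(q, k_cache, num_channels=16, num_bits=2, k_top=64):
    """Cheaply estimate which cached tokens matter for this decode step.

    Sketch of a dual-compression predictor: keep only the highest-magnitude
    feature channels of the query (feature sparsity) and score them against
    a low-bit quantized copy of the K cache, then return candidate indices.
    All names and parameters here are illustrative, not the paper's API.
    """
    seq_len, d = k_cache.shape

    # Feature sparsity: keep the channels where |q| is largest.
    channels = np.argsort(-np.abs(q))[:num_channels]
    q_sub = q[channels]
    k_sub = k_cache[:, channels]

    # Ultra-low-precision quantization of the selected K channels.
    levels = 2 ** num_bits - 1
    k_min, k_max = k_sub.min(), k_sub.max()
    scale = (k_max - k_min) / levels if k_max > k_min else 1.0
    k_q = np.round((k_sub - k_min) / scale)    # integer codes
    k_deq = k_q * scale + k_min                # dequantized estimate

    # Approximate attention scores on the compressed representation.
    approx_scores = k_deq @ q_sub

    # Candidate set for exact attention (exact Top-K here; the paper uses
    # a hardware-friendly approximate selection instead, sketched below).
    return np.argsort(-approx_scores)[:k_top]
```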

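The O(n) selection claim can be illustrated with a one-pass histogram followed by a single threshold comparison per element, a common hardware-friendly substitute for heap-based Top-K; the paper's exact selection logic may differ. A minimal sketch, assuming scalar scores and an approximate (not exact) result size:

```python
import numpy as np

def approx_topk_indices(scores, k, num_buckets=256):
    """Approximate Top-K selection in O(n) via a one-pass histogram.

    Instead of maintaining a size-k heap (O(n log k)), bucket the scores,
    walk buckets from high to low until roughly k items are covered, and
    keep everything above that bucket's lower edge. The result may contain
    slightly more or fewer than k indices. This is a generic hardware-
    friendly scheme, not necessarily the selection logic used in the paper.
    """
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        return np.arange(min(k, scores.size))

    width = (hi - lo) / num_buckets
    buckets = np.minimum(((scores - lo) / width).astype(int), num_buckets - 1)
    counts = np.bincount(buckets, minlength=num_buckets)

    # Walk from the highest bucket down until we have covered ~k elements.
    covered, b = 0, num_buckets - 1
    while b >= 0 and covered < k:
        covered += counts[b]
        b -= 1
    threshold = lo + (b + 1) * width

    # One comparison per element: cheap to implement as parallel hardware.
    return np.nonzero(scores >= threshold)[0]
```
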
Abstract

Long contexts improve the capabilities of large language models but pose serious hardware challenges: compute and memory footprints grow linearly with sequence length. In particular, the decoding phase continuously accesses a massive KV cache, dramatically increasing bandwidth and compute pressure. Existing accelerators are primarily designed and evaluated for short contexts and suffer significant performance degradation when processing long ones. To bridge this gap, we identify the major bottleneck and present a hardware accelerator for long-context attention decoding via hardware-software co-design. On the software side, we propose dual-compression dynamic sparse attention, which combines ultra-low-precision quantization with feature sparsity to minimize prediction overhead. A hardware-friendly approximate Top-K selection further reduces filtering complexity from O(n log k) to O(n). On the hardware side, we deeply optimize compute and memory access to tackle the bottlenecks arising from the intricate interplay between sparse attention and long contexts, and establish a performance model to derive the optimal co-design scheme. The resulting hardware adopts a fully pipelined parallel architecture and achieves O(n) efficiency even for long sequences. Experiments show that our design delivers a 3.82× speedup and 74.19× better energy efficiency over an A100. Compared with SOTA accelerators, this is the first ASIC accelerator to efficiently support long-context inference, with at least 3.5× higher throughput and 2.08× better energy efficiency.
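
The abstract refers to a performance model used to derive the optimal co-design scheme. As a rough illustration of what such a model can capture, the roofline-style estimate below bounds one decoding step's attention latency by either KV-cache traffic or attention FLOPs, showing how sparse selection and low-bit storage shrink the memory term. The bandwidth, FLOP rate, and size parameters are illustrative placeholders, not values from the paper.

```python
def decode_attention_latency(seq_len, d_head=128, n_heads=32,
                             bytes_per_elem=2.0, keep_ratio=1.0,
                             bandwidth=2.0e12, peak_flops=300e12):
    """Roofline-style estimate of one decoding step's attention latency.

    A minimal sketch of the kind of performance model the abstract alludes
    to: the step is bounded by either streaming the (possibly compressed and
    sparsified) KV cache from memory or by the attention FLOPs. Bandwidth,
    FLOP rate, and sizes are assumed placeholders, not measured values.
    """
    tokens_read = seq_len * keep_ratio                               # sparse selection
    kv_bytes = 2 * tokens_read * n_heads * d_head * bytes_per_elem  # K and V
    flops = 4 * tokens_read * n_heads * d_head                      # q·k plus p·v

    t_mem = kv_bytes / bandwidth
    t_compute = flops / peak_flops
    return max(t_mem, t_compute)

# Example: a 128k-token context with 10% of tokens kept and 4-bit KV storage
# reads far fewer bytes per step than the dense FP16 baseline.
dense = decode_attention_latency(131072, bytes_per_elem=2.0, keep_ratio=1.0)
sparse = decode_attention_latency(131072, bytes_per_elem=0.5, keep_ratio=0.1)
print(f"dense: {dense * 1e6:.1f} us, sparse: {sparse * 1e6:.1f} us")
```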