Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

arXiv cs.CL / 5/4/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper targets the quadratic cost of self-attention as the main bottleneck in long-context inference for large language models.
  • It introduces Token Sparse Attention, a dynamic, per-head token-level sparsification method that selects a reduced token set during attention and then decompresses outputs back to the full sequence (a minimal sketch of this flow follows the list).
  • Unlike prior methods that use fixed structured sparsity or permanently evict tokens, the approach allows token information to be reconsidered in later layers, avoiding irreversible early pruning.
  • The method is designed to be lightweight and compatible with dense attention implementations such as Flash Attention, and can be composed with existing sparse attention kernels.
  • Experiments report improved accuracy–latency trade-offs, including up to 3.23× attention speedups at 128K context with under 1% accuracy degradation.

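To make the compress–attend–decompress flow concrete, here is a minimal PyTorch sketch under our own assumptions: the importance score (key norms), the fixed `keep_ratio`, and the passthrough for unselected tokens are illustrative stand-ins, not the paper's actual selection rule or kernel.

```python
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.5):
    """Hypothetical sketch: q, k, v have shape [batch, heads, seq, head_dim]."""
    B, H, N, D = q.shape
    n_keep = max(1, int(N * keep_ratio))

    # 1) Score token importance per head (key L2 norm is an illustrative proxy,
    #    not the paper's scoring rule).
    scores = k.norm(dim=-1)                                # [B, H, N]
    idx = scores.topk(n_keep, dim=-1).indices              # [B, H, n_keep]
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, -1, D)   # [B, H, n_keep, D]

    # 2) Compress: keep only the selected tokens' Q, K, V for each head.
    q_s = torch.gather(q, 2, gather_idx)
    k_s = torch.gather(k, 2, gather_idx)
    v_s = torch.gather(v, 2, gather_idx)

    # 3) Dense attention on the reduced token set; this call can dispatch to
    #    FlashAttention, so the wrapper stays compatible with dense kernels.
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s)

    # 4) Decompress: scatter the outputs back to their original positions.
    #    Unselected tokens pass their values through (one plausible choice),
    #    so later layers can still see them.
    out = v.clone()
    out.scatter_(2, gather_idx, out_s)
    return out
```

Because the selection would be recomputed at every layer and head, a token skipped here can still be attended to later, which is the "reconsidered in later layers" property the key points highlight.
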
Abstract

The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers, which can retain irrelevant tokens or rely on irreversible early decisions despite the layer-/head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head Q, K, V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy–latency trade-off, achieving up to 3.23× attention speedup at 128K context with less than 1% accuracy degradation. These results demonstrate that dynamic and interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
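
For intuition on where such speedups can come from, here is our own back-of-the-envelope comparison (not a figure from the paper), with n the sequence length, d the head dimension, and r the fraction of tokens a head keeps:

```latex
% Rough per-head cost comparison (our illustration, not the paper's analysis).
\underbrace{O(n^2 d)}_{\text{dense attention}}
\;\longrightarrow\;
\underbrace{O\big((r n)^2 d\big)}_{\text{attention on selected tokens}}
\;+\;
\underbrace{O(n d)}_{\text{selection, gather and scatter}}
```

For example, keeping half the tokens per head cuts the quadratic term by roughly 4× before selection and scatter overhead; the keep ratios actually used in the paper may differ.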