AdaSplash-2: Faster Differentiable Sparse Attention

arXiv cs.LG · April 17, 2026


Key Points

  • Sparse attention is used to reduce the quadratic compute cost of transformers in long-context training, but differentiable variants like α-entmax attention have lagged due to the overhead of computing the normalizer τ.
  • AdaSplash-2 introduces a histogram-based on-the-fly initialization that makes τ computation require only about 1–2 iterations in typical cases, speeding up both forward and backward passes.
  • By building a coarse histogram of attention scores on the fly and storing it in on-chip SRAM, and pairing it with a sparsity-aware GPU kernel that skips all-zero blocks with low overhead, AdaSplash-2 improves training efficiency.
  • Experiments indicate AdaSplash-2 matches or improves per-step training time compared with FlashAttention-2 when block sparsity is moderate-to-high (e.g., >60%), which is common in long-context settings.
  • On downstream tasks, models using efficient α-entmax attention perform like softmax baselines at short context lengths and deliver substantial gains at long context lengths.
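The histogram idea can be illustrated in the sparsemax case (α = 2 of α-entmax), where the weights are p = max(z − τ, 0) and τ must satisfy Σ max(zᵢ − τ, 0) = 1. The NumPy sketch below is a hedged toy, not the paper's fused kernel: it builds a coarse histogram of scores over the valid bracket for τ, reads off exact values of the normalizer at every bin edge from suffix sums, and then iterates inside a single narrow bin. The paper reports that an accurate initialization leaves only ~1–2 iterations; plain bisection here is deliberately over-iterated for clarity, and all names are illustrative.

```python
import numpy as np

def sparsemax_tau(z, bins=32, iters=30):
    """Solve sum(max(z - tau, 0)) == 1 for tau (sparsemax, i.e. alpha = 2),
    using a coarse histogram of the scores to initialize a tight bracket.
    Illustrative NumPy sketch only; the paper's kernel is a fused GPU
    implementation and uses fewer, higher-order iterations."""
    # f(tau) = sum(max(z - tau, 0)) is decreasing in tau; f(max(z) - 1) >= 1
    # and f(max(z)) = 0, so tau always lies in [max(z) - 1, max(z)].
    lo, hi = z.max() - 1.0, z.max()
    edges = np.linspace(lo, hi, bins + 1)
    # Per-bin count and sum let us evaluate f *exactly* at every bin edge
    # via suffix sums (scores below lo never contribute to f on [lo, hi]).
    m = z >= lo
    idx = np.clip(np.searchsorted(edges, z[m], side="right") - 1, 0, bins - 1)
    cnt = np.bincount(idx, minlength=bins)
    tot = np.bincount(idx, weights=z[m], minlength=bins)
    suf_cnt = np.cumsum(cnt[::-1])[::-1]          # scores in bins >= k
    suf_tot = np.cumsum(tot[::-1])[::-1]          # their sum
    f_edges = suf_tot - edges[:-1] * suf_cnt      # exact f at each left edge
    # tau lies in the last bin whose left edge still has f >= 1.
    k = int(np.nonzero(f_edges >= 1.0)[0].max())
    lo_b, hi_b = edges[k], edges[k + 1]
    # Refine inside one narrow bin (width (hi - lo) / bins).
    for _ in range(iters):
        mid = 0.5 * (lo_b + hi_b)
        if np.maximum(z - mid, 0.0).sum() >= 1.0:
            lo_b = mid
        else:
            hi_b = mid
    return 0.5 * (lo_b + hi_b)

rng = np.random.default_rng(0)
scores = rng.standard_normal(64)
tau = sparsemax_tau(scores)
p = np.maximum(scores - tau, 0.0)  # sparse weights: sum ~= 1, most entries 0
```

The point of the histogram is that the refinement loop starts from a bracket of width (hi − lo)/bins instead of the full interval, which is why so few iterations are needed once the initialization is accurate.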

Abstract

Sparse attention has been proposed as a way to alleviate the quadratic cost of transformers, a central bottleneck in long-context training. A promising line of work is α-entmax attention, a differentiable sparse alternative to softmax that enables input-dependent sparsity yet has lagged behind softmax due to the computational overhead necessary to compute the normalizer τ. In this paper, we introduce AdaSplash-2, which addresses this limitation through a novel histogram-based initialization that reduces the number of iterations needed to compute τ to typically 1–2. The key idea is to compute a coarse histogram of attention scores on the fly and store it in on-chip SRAM, yielding a more accurate initialization that enables fast forward and backward computation. Combined with a sparsity-aware GPU implementation that skips zero blocks with low overhead, AdaSplash-2 matches or improves per-step training time relative to FlashAttention-2 when block sparsity is moderate-to-high (e.g., >60%), which often occurs at long context lengths. On downstream tasks, models trained with our efficient α-entmax attention match softmax baselines at short context lengths and achieve substantial gains in long-context settings.
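The block-skipping side can be sketched in the same sparsemax setting: once τ is known, the weight max(s − τ, 0) is zero for every score s ≤ τ, so a key block whose maximum score does not exceed τ contributes nothing to the output and can be bypassed entirely. Below is a minimal single-query NumPy toy of that skip logic (illustrative only; the actual implementation is a fused GPU kernel, and the per-block maxima would live in on-chip memory rather than being recomputed):

```python
import numpy as np

def block_skipping_sparsemax_output(scores, v, tau, block=16):
    # scores: (n,) attention scores for one query; v: (n, d) value rows.
    # With sparsemax weights p = max(scores - tau, 0), any key block whose
    # maximum score is <= tau yields an all-zero weight block, so its
    # matmul with the corresponding value rows can be skipped outright.
    n, d = v.shape
    out = np.zeros(d)
    skipped = 0
    for start in range(0, n, block):
        blk = scores[start:start + block]
        if blk.max() <= tau:  # entire block thresholds to zero -> skip
            skipped += 1
            continue
        out += np.maximum(blk - tau, 0.0) @ v[start:start + block]
    return out, skipped

rng = np.random.default_rng(0)
scores = rng.standard_normal(64)
scores[:32] -= 10.0  # force the first two blocks to be prunable
v = rng.standard_normal((64, 8))
out, skipped = block_skipping_sparsemax_output(scores, v, tau=0.5)
```

In the regime the abstract cites (block sparsity above roughly 60%), most loop iterations in a kernel organized this way are skips, which is where the advantage over a dense FlashAttention-style pass comes from.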