ProxyAttn: Guided Sparse Attention via Representative Heads

arXiv cs.CL / 4/1/2026


Key Points

  • The paper addresses the quadratic compute cost of attention in LLMs on long-text tasks by targeting more efficient sparse attention.
  • It introduces ProxyAttn, a training-free method that improves block importance estimation by exploiting similarity across attention heads and using pooled representative heads as proxies (a minimal sketch of this pooling idea follows the list).
  • ProxyAttn also adds a block-aware dynamic budget estimation to handle differing sparsity needs among heads, aiming for finer-grained sparsity decisions at low overhead.
  • Experiments across multiple mainstream models and benchmarks report up to 10.3× attention acceleration and 2.4× prefilling acceleration with minimal performance loss compared with existing approaches.
  • The authors release code publicly, positioning ProxyAttn as an immediately usable research contribution for accelerating long-context LLM workloads.
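To make the proxy-head idea concrete, here is a minimal PyTorch sketch of how pooled representative heads can score key blocks cheaply. It assumes heads are grouped by simple mean-pooling and tokens are mean-pooled into blocks; the function name, signature, and grouping strategy are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only; names and pooling choices are assumptions,
# not the ProxyAttn codebase.
import torch

def estimate_block_scores(q, k, block_size=64, group_size=4):
    """Approximate per-block attention importance with pooled proxy heads.

    q, k: [num_heads, seq_len, head_dim]
    Returns: [num_proxy_heads, num_q_blocks, num_k_blocks] block scores.
    """
    num_heads, seq_len, head_dim = q.shape
    assert num_heads % group_size == 0 and seq_len % block_size == 0

    # 1) Pool groups of similar heads into representative "proxy" heads.
    q_proxy = q.view(num_heads // group_size, group_size, seq_len, head_dim).mean(dim=1)
    k_proxy = k.view(num_heads // group_size, group_size, seq_len, head_dim).mean(dim=1)

    # 2) Pool tokens into blocks so scoring happens at block granularity.
    q_blk = q_proxy.view(-1, seq_len // block_size, block_size, head_dim).mean(dim=2)
    k_blk = k_proxy.view(-1, seq_len // block_size, block_size, head_dim).mean(dim=2)

    # 3) Block importance = proxy-head attention over pooled blocks,
    #    shared by every head in the corresponding group.
    scores = torch.einsum("hqd,hkd->hqk", q_blk, k_blk) / head_dim ** 0.5
    return scores.softmax(dim=-1)
```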

Abstract

The quadratic complexity of attention mechanisms limits the efficiency of Large Language Models (LLMs) on long-text tasks. Recently, methods that dynamically estimate block importance have enabled efficient block sparse attention, leading to significant acceleration in long-text pre-filling of LLMs. However, their coarse-grained estimation inevitably leads to performance degradation at high sparsity rates. In this work, we propose ProxyAttn, a training-free sparse attention algorithm that achieves more precise block estimation by compressing the dimension of attention heads. Based on our observation of the similarity among multiple attention heads, we use the scores of pooled representative heads to approximate the scores for all heads. To account for the varying sparsity among heads, we also propose a block-aware dynamic budget estimation method. By combining the scores from representative proxy heads with multi-head dynamic budgets, we achieve a more fine-grained block importance evaluation at low computational cost. Experiments on a variety of mainstream models and extensive benchmarks confirm the underlying similarity among attention heads. Leveraging a fine-grained estimation, the proposed method achieves substantial gains in performance and efficiency compared to existing methods. More precisely, ProxyAttn can achieve up to 10.3x attention acceleration and 2.4x prefilling acceleration without significant performance loss. Our code is available at https://github.com/wyxstriker/ProxyAttn.
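The abstract's block-aware dynamic budget can be pictured as a per-head coverage rule: instead of a fixed top-k, each query block keeps the smallest set of key blocks whose cumulative score mass reaches a target fraction. The sketch below is one plausible reading of that idea under this coverage assumption; the function name and the 0.95 threshold are hypothetical, not taken from the paper.

```python
# Hypothetical coverage-based budget; the exact criterion in ProxyAttn may differ.
import torch

def dynamic_block_mask(block_scores, coverage=0.95):
    """block_scores: [num_proxy_heads, num_q_blocks, num_k_blocks], softmax-normalized.
    Returns a boolean mask of the same shape: True = compute this block."""
    sorted_scores, order = block_scores.sort(dim=-1, descending=True)
    cum = sorted_scores.cumsum(dim=-1)
    # Keep a block if the mass accumulated *before* it is still below the target,
    # so each head ends up with its own number of retained blocks.
    keep_sorted = (cum - sorted_scores) < coverage
    mask = torch.zeros_like(block_scores, dtype=torch.bool)
    mask.scatter_(-1, order, keep_sorted)
    return mask
```

Chaining the two sketches (score with `estimate_block_scores`, then mask with `dynamic_block_mask`) yields a per-group block pattern that a block-sparse attention kernel could consume; the actual ProxyAttn kernels and budget rule are in the released repository.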