Rényi Attention Entropy for Patch Pruning

arXiv cs.CV / 4/7/2026


Key Points

  • The paper proposes a patch-pruning method for Transformers that uses entropy of attention distributions to decide which image patches to keep versus remove.
  • It argues that low-entropy patches (where attention is concentrated) are more informative and should be retained, while high-entropy patches (where attention is broadly spread) can be pruned as redundant.
  • It extends the pruning criterion from Shannon entropy to Rényi entropy to better capture sharp attention peaks and enable pruning policies that adapt to tasks and compute budgets.
  • Experiments on fine-grained image recognition show the approach reduces computation while maintaining accuracy, and further gains come from tuning the pruning policy using the Rényi-based measure.
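The keep-versus-prune rule described in the key points can be sketched in a few lines: compute the Shannon entropy of each patch's attention distribution, then keep the lowest-entropy (most concentrated) patches. This is an illustrative sketch, not the paper's implementation; the function names, the row-wise attention layout, and the `keep_ratio` parameter are assumptions.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy (in nats) along the last axis of a distribution."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def prune_patches(attn, keep_ratio=0.5):
    """Keep the lowest-entropy patches (hypothetical helper).

    attn: (num_patches, num_patches) row-stochastic attention map,
          where attn[i] is the attention distribution of patch i.
    Returns sorted indices of the patches to keep.
    """
    entropies = shannon_entropy(attn)              # (num_patches,)
    k = max(1, int(keep_ratio * attn.shape[0]))
    # Low entropy = concentrated, selective attention -> informative -> keep.
    return np.sort(np.argsort(entropies)[:k])

# Toy example: patches 0 and 1 attend sharply, patch 2 attends uniformly.
attn = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [1/3,  1/3,  1/3],
])
print(prune_patches(attn, keep_ratio=0.7))  # → [0 1]
```

The uniform row (patch 2) has the maximal entropy `log 3` and is pruned first, matching the paper's intuition that broadly spread attention signals redundancy.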

Abstract

Transformers are strong baselines in both vision and language because self-attention captures long-range dependencies across tokens. However, the cost of self-attention grows quadratically with the number of tokens. Patch pruning mitigates this cost by estimating per-patch importance and removing redundant patches. To identify informative patches for pruning, we introduce a criterion based on the Shannon entropy of the attention distribution. Low-entropy patches, which receive selective and concentrated attention, are kept as important, while high-entropy patches with attention spread across many locations are treated as redundant. We also extend the criterion from Shannon to Rényi entropy, which emphasizes sharp attention peaks and supports pruning strategies that adapt to task needs and computational limits. In experiments on fine-grained image recognition, where patch selection is critical, our method reduced computation while preserving accuracy. Moreover, adjusting the pruning policy through the Rényi entropy measure yields further gains and improves the trade-off between accuracy and computation.
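The Rényi extension mentioned in the abstract generalizes the Shannon criterion with an order parameter α: H_α(p) = (1/(1−α)) · log Σᵢ pᵢ^α, which recovers Shannon entropy as α → 1 and, for large α, increasingly emphasizes the sharpest attention peaks. A minimal sketch of this (the function name and the example distributions are assumptions, not the paper's code):

```python
import numpy as np

def renyi_entropy(p, alpha, eps=1e-12):
    """Rényi entropy H_alpha(p) = log(sum(p_i^alpha)) / (1 - alpha).

    alpha -> 1 recovers Shannon entropy; larger alpha weights the
    largest probabilities (sharp attention peaks) more heavily.
    """
    p = np.clip(p, eps, 1.0)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

sharp = np.array([0.90, 0.05, 0.05])   # concentrated attention
flat  = np.array([1/3, 1/3, 1/3])      # diffuse attention

for a in (0.5, 1.0, 2.0):
    print(f"alpha={a}: sharp={renyi_entropy(sharp, a):.3f}, "
          f"flat={renyi_entropy(flat, a):.3f}")
```

For the uniform distribution, H_α = log 3 for every α, while the sharp distribution's entropy shrinks as α grows, so increasing α widens the gap between concentrated and diffuse patches. Tuning α would let a pruning policy trade off how aggressively sharp-peaked patches are favored under a given compute budget.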