Adaptive Head Budgeting for Efficient Multi-Head Attention

arXiv cs.LG / 4/27/2026


Key Points

  • The paper argues that standard multi-head attention uses all heads uniformly for every input, wasting computation and sometimes hurting performance when fewer heads would suffice.
  • It introduces BudgetFormer, a Transformer variant that adaptively allocates attention-head resources per input by learning both a “head budget” and a relevance distribution over heads (a minimal sketch of this idea follows the list).
  • The method includes a training strategy that balances exploration and exploitation to discover effective head configurations before settling into efficient usage.
  • Experiments on text classification tasks of varying complexity show lower inference cost (FLOPs and memory) with accuracy that can match or surpass full multi-head attention.
  • The authors conclude that adaptive head allocation is a principled way to improve both efficiency and effectiveness in Transformer models.
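
For intuition, here is a minimal, hypothetical sketch of how a per-input head budget and relevance distribution could gate the heads of standard multi-head attention. The module name `BudgetedMultiHeadAttention`, the pooled-input predictors, and the soft rank-based mask are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: per-input head budgeting on top of standard multi-head
# attention. Names and design choices are illustrative, not from BudgetFormer.
import torch
import torch.nn as nn


class BudgetedMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Predict, from a pooled summary of the input, a scalar head budget
        # in [1, num_heads] and a relevance distribution over heads.
        self.budget_head = nn.Linear(d_model, 1)
        self.relevance_head = nn.Linear(d_model, num_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # (B, T, D) -> (B, num_heads, T, d_head)
            return t.view(B, T, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        head_out = attn @ v  # (B, num_heads, T, d_head)

        # Per-input budget and per-head relevance scores.
        pooled = x.mean(dim=1)                                            # (B, D)
        budget = 1 + torch.sigmoid(self.budget_head(pooled)) * (self.num_heads - 1)
        relevance = torch.softmax(self.relevance_head(pooled), dim=-1)    # (B, H)

        # Soft mask: keep the heads whose relevance rank fits within the budget.
        # At inference a hard top-k selection would actually skip computation.
        order = torch.argsort(relevance, dim=-1, descending=True)
        ranks = torch.argsort(order, dim=-1).float()       # rank of each head
        mask = torch.sigmoid(budget - ranks - 0.5)         # ~1 if rank < budget
        head_out = head_out * mask.view(B, self.num_heads, 1, 1)

        merged = head_out.transpose(1, 2).reshape(B, T, D)
        return self.out(merged)
```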

Abstract

Transformers have become the dominant architecture across a wide range of domains, largely due to the effectiveness of multi-head attention in capturing diverse representation subspaces. However, standard multi-head attention activates all heads uniformly for every input, regardless of task requirements or input complexity. In many scenarios, particularly for coarse-grained tasks such as text classification, the relevant information is often global and does not require the full diversity of attention heads. As a consequence, using a fixed number of heads can introduce unnecessary computational cost or lead to suboptimal performance when the allocation does not match the input. To address this limitation, we introduce BudgetFormer, a Transformer architecture equipped with an adaptive multi-head attention mechanism that dynamically allocates computational resources. Our approach learns, for each input, both a head budget corresponding to the number of attention heads required, and a relevance distribution that selects the most informative heads. We also propose a training strategy based on an exploration and exploitation trade-off, allowing the model to discover effective head configurations before converging to efficient usage patterns. Experiments on text classification tasks of varying complexity show that our method reduces inference cost in terms of FLOPs and memory, while also achieving performance that can surpass standard full multi-head attention. These results highlight the potential of adaptive head allocation as a principled approach to improving both efficiency and effectiveness in Transformer models.
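
The exploration-and-exploitation training strategy described in the abstract could, for example, be realized by sampling heads from a tempered relevance distribution whose temperature is annealed over training. The functions below are an illustrative sketch under that assumption; the function names, the linear annealing schedule, and the sampling scheme are not taken from the paper.

```python
# Illustrative exploration/exploitation schedule for head selection (assumed,
# not the authors' recipe): sample heads from a high-temperature relevance
# distribution early in training, then anneal toward deterministic selection.
import torch


def head_selection_temperature(step: int, total_steps: int,
                               t_start: float = 5.0, t_end: float = 0.1) -> float:
    """Linearly anneal the softmax temperature used for head relevance."""
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + frac * (t_end - t_start)


def sample_head_mask(relevance_logits: torch.Tensor, budget: int,
                     temperature: float) -> torch.Tensor:
    """Sample `budget` heads without replacement from a tempered distribution."""
    probs = torch.softmax(relevance_logits / temperature, dim=-1)
    idx = torch.multinomial(probs, num_samples=budget, replacement=False)
    mask = torch.zeros_like(probs)
    mask.scatter_(-1, idx, 1.0)
    return mask
```

In such a scheme, high early temperatures spread probability mass across heads so many configurations are tried, while the annealed low temperature concentrates sampling on the heads the model has learned to rely on, matching the abstract's description of discovering configurations before converging to efficient usage.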