Switch Attention: Towards Dynamic and Fine-grained Hybrid Transformers

arXiv cs.CL / 3/30/2026


Key Points

  • The paper introduces Switch Attention (SwiAttn), a hybrid transformer that dynamically routes each token at each layer between full attention (global context) and sliding-window attention (efficient local context) to address long-context bottlenecks.
  • Unlike prior hybrid approaches that rely on static, heuristic alternating patterns, SwiAttn uses fine-grained, per-token routing to allocate computation more efficiently across different scenarios.
  • An adaptive regularization objective is proposed to encourage the model to favor efficiency, balancing accuracy with reduced compute.
  • The authors use continual pretraining to transfer a full-attention architecture into the hybrid form and evaluate on 23 benchmark datasets for both 4K and 32K context lengths, reporting improved effectiveness.
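The per-token routing described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function and parameter names are not from the paper): a learned gate score decides, for each token, whether its attention is computed over the full causal context or only over a local sliding window.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; -inf entries get zero weight.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def switch_attention(x, wq, wk, wv, w_gate, window=2):
    """Toy per-token router (illustrative, not the paper's implementation):
    each token attends either over the full causal context or within a
    sliding window, chosen by a learned scalar gate."""
    n, d = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / np.sqrt(d)                 # (n, n) attention logits
    causal = np.tril(np.ones((n, n), dtype=bool))   # full-attention mask
    idx = np.arange(n)
    # Sliding-window mask: token i sees tokens [i - window, i].
    local = causal & (idx[None, :] >= idx[:, None] - window)
    gate = (x @ w_gate).ravel() > 0                 # per-token routing decision
    out = np.empty_like(x)
    for i in range(n):
        mask = causal[i] if gate[i] else local[i]
        s = np.full(n, -np.inf)
        s[mask] = scores[i, mask]
        out[i] = softmax(s) @ v
    return out, gate
```

In a real model the hard decision would be trained with a differentiable surrogate (e.g. a straight-through or soft gate), and the sliding-window branch would be the cheaper path at long sequence lengths.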

Abstract

The attention mechanism is the core component of modern transformer architectures. However, standard full attention scales quadratically with sequence length, making it a major bottleneck in long-context language modeling. Sliding window attention restricts the context length for better efficiency, at the cost of a narrower receptive field. While existing efforts attempt to combine the benefits of both by building hybrid models, they often resort to static, heuristically designed alternating patterns that limit the efficient allocation of computation across scenarios. In this paper, we propose Switch Attention (SwiAttn), a novel hybrid transformer that enables dynamic and fine-grained routing between full attention and sliding window attention. For each token at each transformer layer, SwiAttn dynamically routes the computation to either a full-attention branch for global information aggregation or a sliding-window branch for efficient local pattern matching. An adaptive regularization objective is designed to encourage the model toward efficiency. Moreover, we adopt continual pretraining to optimize the model, transferring the full-attention architecture to the hybrid one. Extensive experiments are conducted on twenty-three benchmark datasets across both regular (4K) and long (32K) context lengths, demonstrating the effectiveness of the proposed method.
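The abstract does not give the exact form of the adaptive regularization objective, but the idea of encouraging the model toward efficiency can be sketched generically: add a penalty on how often tokens are routed to the expensive full-attention branch, with a weight that adapts to the current usage. Everything below (`lam`, `target`, the adaptive weighting rule) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def efficiency_regularized_loss(lm_loss, p_full, lam=0.1, target=0.3):
    """Hypothetical sketch: penalize the mean probability of routing tokens
    to the full-attention branch. The penalty weight grows when usage
    exceeds an assumed compute budget `target`."""
    usage = float(np.mean(p_full))          # fraction routed to full attention
    adaptive_lam = lam * max(usage / target, 1.0)
    return lm_loss + adaptive_lam * usage, usage
```

For example, with a language-modeling loss of 2.0 and routing probabilities averaging 0.3, the combined loss is 2.0 + 0.1 x 0.3 = 2.03; if usage rose above the target, the penalty weight would scale up accordingly.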