Saccade Attention Networks: Using Transfer Learning of Attention to Reduce Network Sizes

arXiv cs.CV / 4/21/2026


Key Points

  • The paper proposes “Saccade Attention Networks,” which learn where to attend so that only the most relevant features are processed instead of the full sequence.
  • It leverages transfer learning from a large pre-trained model to train a network that performs attention-guided image pre-processing.
  • By reducing the input sequence length to a sparse set of attended key features, the approach mitigates the quadratic compute cost of transformer attention.
  • Experiments report nearly 80% fewer calculations while achieving similar downstream performance compared with standard full-attention processing.

Abstract

One limitation of transformer networks is sequence length, due to the quadratic cost of the attention matrix. Classical self-attention spans the entire sequence, yet the attention actually used is sparse. Humans employ a form of sparse attention, called saccades, when analyzing an image or scene; focusing on key features greatly reduces computation time. By training a network (a Saccade Attention Network) to learn where to attend from a large pre-trained model, we can pre-process images and greatly reduce network size by shrinking the input sequence to just the key features being attended to. Our results indicate that calculations can be reduced by close to 80% while producing similar results.
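The core idea — pruning the token sequence to a sparse set of attended features before running attention — can be illustrated with a minimal sketch. This is not the paper's code: the token counts, the teacher attention scores, and the helper names (`select_key_tokens`, `attention_cost`) are illustrative assumptions. It only shows why keeping k of n tokens shrinks the quadratic attention cost by roughly (k/n)².

```python
import numpy as np

def attention_cost(seq_len, dim):
    """Rough FLOP proxy for one self-attention layer: O(seq_len^2 * dim)."""
    return 2 * seq_len * seq_len * dim

def select_key_tokens(tokens, teacher_attention, k):
    """Keep the k tokens a teacher model attends to most (hypothetical helper).

    tokens:            (n, d) array of patch embeddings
    teacher_attention: (n,) attention mass per token, e.g. derived from
                       a large pre-trained model's attention maps
    k:                 number of tokens to retain
    """
    top_k = np.argsort(teacher_attention)[-k:]
    return tokens[np.sort(top_k)]  # preserve original patch order

rng = np.random.default_rng(0)
n, d = 196, 64                    # e.g. 14x14 grid of image patches (assumed)
tokens = rng.normal(size=(n, d))
scores = rng.random(n)            # stand-in for teacher attention scores

k = 40                            # keep ~20% of tokens
kept = select_key_tokens(tokens, scores, k)

full_cost = attention_cost(n, d)
sparse_cost = attention_cost(k, d)
print(kept.shape)                                    # (40, 64)
print(f"cost ratio: {sparse_cost / full_cost:.3f}")  # ~0.042
```

Under these assumed sizes, dropping ~80% of tokens cuts the attention FLOPs to about 4% of the full-sequence cost, which is consistent with the paper's reported ~80% overall reduction once the rest of the network (projections, MLPs, the saccade network itself) is accounted for.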