QUEST: A robust attention formulation using query-modulated spherical attention

arXiv cs.AI / 4/2/2026


Key Points

  • The paper analyzes training instabilities in standard Transformer attention that arise from uncontrolled growth of query/key vector norms, potentially triggered by spurious patterns in data.
  • It introduces QUEST (Query-modulated Spherical Attention), which constrains keys to a hyperspherical latent space while letting each token modulate the attention sharpness.
  • QUEST is designed as a drop-in replacement for standard attention, aiming to improve stability without changing surrounding Transformer components.
  • Experiments on vision tasks (and additional domains) report that QUEST trains without instabilities and delivers better performance, including robustness to data corruption and adversarial attacks.

Abstract

The Transformer model architecture has become one of the most widely used in deep learning, and the attention mechanism is at its core. The standard attention formulation applies a softmax operation to a scaled dot product between query and key vectors. We explore the role played by the norms of the queries and keys, which can cause training instabilities when they grow arbitrarily. We demonstrate how this can happen even in simple Transformer models, in the presence of easy-to-learn spurious patterns in the data. We propose a new attention formulation, QUEry-modulated Spherical aTtention (QUEST), that constrains the keys to a hyperspherical latent space while still allowing individual tokens to flexibly control the sharpness of the attention distribution. QUEST can be easily used as a drop-in replacement for standard attention. We focus on vision applications while also exploring other domains to highlight the method's generality. We show that (1) QUEST trains without instabilities, (2) produces models with improved performance, and (3) yields models that are robust to data corruptions and adversarial attacks.
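To make the core idea concrete, here is a minimal sketch of what a QUEST-style attention step could look like. This is an assumed reading of the abstract, not the paper's exact formulation: keys are unit-normalized onto the hypersphere (so their norms cannot grow arbitrarily and destabilize training), while the raw query norm is left free and acts as a per-token temperature that modulates the sharpness of the attention distribution. All function and variable names here are illustrative.

```python
import numpy as np

def quest_attention(Q, K, V, eps=1e-6):
    """Hedged sketch of query-modulated spherical attention.

    Q: (n_q, d) raw queries; K, V: (n_k, d) keys and values.
    Keys are constrained to the unit hypersphere, so attention logits
    depend only on key *direction*; each query's norm then scales its
    own logits, controlling how peaked its attention distribution is.
    """
    # Constrain keys to the hypersphere: unit-normalize each key vector.
    K_hat = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    # Dot product of raw queries with unit keys; the query norm sets the
    # sharpness while key norms can no longer blow up the logits.
    logits = Q @ K_hat.T / np.sqrt(Q.shape[-1])
    # Numerically stable softmax over the key dimension.
    logits -= logits.max(axis=-1, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Note one consequence of this design: rescaling the keys by any positive constant leaves the output unchanged, which is exactly the kind of invariance that removes the key-norm-growth failure mode the paper analyzes.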