
SympFormer: Accelerated attention blocks via Inertial Dynamics on Density Manifolds

arXiv cs.LG / 3/18/2026


Key Points

  • SympFormer introduces accelerated attention blocks derived from inertial Nesterov-type dynamics on density manifolds, where tokens carry both spatial and velocity variables to form Hamiltonian momentum attention blocks.
  • For linear self-attention, the blocks approximate a Stein variational gradient flow with a bilinear kernel, preserving elliptically contoured distributions.
  • The work provides implementable particle-based algorithms (a toy sketch of such an update appears after this list) and demonstrates faster convergence than classical attention blocks while using the same number of oracle calls.
  • By casting attention as a particle system on Wasserstein-2-type density spaces, the approach links physics-inspired dynamics to transformers and suggests efficiency and stability improvements for future models.
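To make the position-plus-velocity picture above concrete, here is a minimal NumPy sketch of one momentum-attention step, assuming a standard softmax-attention readout as the interaction force, a constant damping factor, and a simple leapfrog-style splitting. The function name `momentum_attention_step`, the parameters `dt` and `gamma`, and the update order are illustrative choices of ours, not the paper's exact discretization.

```python
import numpy as np

def momentum_attention_step(X, V, W_q, W_k, W_v, dt=0.1, gamma=0.9):
    """One toy 'momentum attention' step (illustrative, not the paper's scheme).

    X : (n, d) token positions (features)
    V : (n, d) token velocities
    The interaction force is an ordinary softmax-attention readout of the
    current positions; velocities are damped and nudged by that force,
    then positions drift along the updated velocities.
    """
    scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(X.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)      # row-stochastic attention weights
    force = A @ (X @ W_v)                  # attention-weighted interaction force
    V = gamma * V + dt * force             # damped momentum (velocity) update
    X = X + dt * V                         # position (feature) update
    return X, V

# toy usage with random weights
rng = np.random.default_rng(0)
n, d = 8, 16
X, V = rng.normal(size=(n, d)), np.zeros((n, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
for _ in range(5):
    X, V = momentum_attention_step(X, V, W_q, W_k, W_v)
```

Setting gamma to zero and dropping the velocity variable recovers a plain residual attention update of the positions, which is one way to see the classical, non-accelerated block inside this toy setting.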

Abstract

Transformers owe much of their empirical success in natural language processing to self-attention blocks. Recent perspectives interpret attention blocks as interacting particle systems whose mean-field limits correspond to gradient flows of interaction energy functionals on probability density spaces equipped with Wasserstein-2-type metrics. We extend this viewpoint by introducing accelerated attention blocks derived from inertial Nesterov-type dynamics on density spaces. In our proposed architecture, tokens carry both spatial (feature) and velocity variables. Time discretization and approximation of the accelerated density dynamics yield Hamiltonian momentum attention blocks, which constitute the proposed accelerated attention architectures. In particular, for linear self-attention, we show that the attention blocks approximate a Stein variational gradient flow of a potential energy with a bilinear kernel. In this setting, we prove that elliptically contoured probability distributions are preserved by the accelerated attention blocks. We present implementable particle-based algorithms and demonstrate that the proposed accelerated attention blocks converge faster than classical attention blocks while using the same number of oracle calls.
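For orientation, the displays below give generic textbook forms of the two objects named in the abstract, in our own notation; the paper's exact energy functionals, damping schedule, and kernel normalization may differ. The first is an inertial Nesterov-type flow of an energy E on density space, written as a continuity equation coupled to a damped velocity equation; the second is the Stein variational gradient flow of a potential V with a bilinear kernel k(x, y) = x^T A y.

```latex
% Inertial (Nesterov-type) dynamics of an energy E on density space:
% continuity equation coupled to a damped velocity equation with damping schedule \gamma_t.
\begin{aligned}
  \partial_t \rho_t + \nabla \cdot (\rho_t v_t) &= 0, \\
  \partial_t v_t + (v_t \cdot \nabla)\, v_t &= -\gamma_t\, v_t
      - \nabla \frac{\delta E}{\delta \rho}(\rho_t).
\end{aligned}

% Stein variational gradient flow of a potential V; with the bilinear kernel
% k(x, y) = x^\top A y, the repulsion term is \nabla_{x_j} k(x_j, x_i) = A x_i.
\begin{aligned}
  \dot{x}_i &= \frac{1}{N} \sum_{j=1}^{N}
      \Big( -\, k(x_j, x_i)\, \nabla V(x_j) + \nabla_{x_j} k(x_j, x_i) \Big),
  \qquad k(x, y) = x^\top A\, y.
\end{aligned}
```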