EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers

arXiv cs.LG / 4/13/2026


Key Points

  • The paper introduces EquiformerV3, a third-generation SE(3)-equivariant graph attention Transformer aimed at improving efficiency, expressivity, and generality for 3D atomistic modeling.
  • It reports an optimized software implementation that yields a 1.75× speedup over EquiformerV2 while preserving SE(3) equivariance.
  • The authors add architectural refinements to EquiformerV2, including equivariant merged layer normalization, tuned feedforward hyper-parameters, and attention with a smooth radius cutoff (a sketch of such a cutoff follows this list).
  • To better capture many-body interactions, EquiformerV3 uses SwiGLU-S² activations, designed to enhance theoretical expressivity while keeping strict equivariance and reducing the complexity of S² grid sampling (see the SwiGLU sketch after this list).
  • Together, SwiGLU-S² activations and smooth-cutoff attention let the model accurately learn smoothly varying potential energy surfaces, supporting energy-conserving simulations and higher-order PES derivatives. Trained with the DeNS auxiliary task (denoising non-equilibrium structures), EquiformerV3 achieves state-of-the-art results on OC20, OMat24, and Matbench Discovery.
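
The summary does not spell out the cutoff function itself. A common choice in atomistic models is a cosine envelope that decays smoothly to zero at the cutoff radius, so predicted energies stay continuous as atoms enter or leave each other's neighborhoods. The PyTorch sketch below gates per-edge attention weights with such an envelope; the function names and the post-softmax gating are illustrative assumptions, not the EquiformerV3 implementation.

```python
import torch

def cosine_cutoff(r: torch.Tensor, r_cut: float) -> torch.Tensor:
    """Smooth envelope: 1 at r = 0, decaying to 0 at r = r_cut."""
    return torch.where(
        r < r_cut,
        0.5 * (torch.cos(torch.pi * r / r_cut) + 1.0),
        torch.zeros_like(r),
    )

def smooth_attention_weights(logits: torch.Tensor, r: torch.Tensor, r_cut: float) -> torch.Tensor:
    """Gate softmax attention weights by the envelope so messages from
    neighbors near the cutoff fade out smoothly rather than vanishing abruptly."""
    weights = torch.softmax(logits, dim=-1)
    return weights * cosine_cutoff(r, r_cut)
```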
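For the gate itself, SwiGLU is the standard Transformer feedforward variant shown below. In S² activations (introduced in EquiformerV2), equivariant features are sampled onto a grid of points on the sphere, a pointwise nonlinearity is applied at each grid point, and the result is projected back to irreducible representations; applying a pointwise gate such as SwiGLU at that step keeps the operation equivariant. The module below is a generic sketch, not the paper's SwiGLU-S² layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Generic SwiGLU feedforward block: a value branch modulated
    elementwise by a SiLU-gated branch."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden_dim, bias=False)
        self.w_value = nn.Linear(dim, hidden_dim, bias=False)
        self.w_out = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_out(F.silu(self.w_gate(x)) * self.w_value(x))
```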

Abstract

As SE(3)-equivariant graph neural networks mature as a core tool for 3D atomistic modeling, improving their efficiency, expressivity, and physical consistency has become a central challenge for large-scale applications. In this work, we introduce EquiformerV3, the third generation of the SE(3)-equivariant graph attention Transformer, designed to advance all three dimensions: efficiency, expressivity, and generality. Building on EquiformerV2, we make three key advances. First, we optimize the software implementation, achieving a 1.75× speedup. Second, we introduce simple and effective modifications to EquiformerV2, including equivariant merged layer normalization, improved feedforward network hyper-parameters, and attention with a smooth radius cutoff. Third, we propose SwiGLU-S² activations to incorporate many-body interactions for better theoretical expressivity and to preserve strict equivariance while reducing the complexity of sampling S² grids. Together, SwiGLU-S² activations and smooth-cutoff attention enable accurate modeling of smoothly varying potential energy surfaces (PES), generalizing EquiformerV3 to tasks requiring energy-conserving simulations and higher-order derivatives of the PES. With these improvements, EquiformerV3 trained with the auxiliary task of denoising non-equilibrium structures (DeNS) achieves state-of-the-art results on OC20, OMat24, and Matbench Discovery.
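
The claim about energy-conserving simulations rests on a general pattern: if the model predicts a single smooth scalar energy, conservative forces are its negative gradient, and higher-order PES derivatives follow from further differentiation. A minimal autograd sketch, with `model` standing in for any differentiable energy model (hypothetical call signature, not the released EquiformerV3 API):

```python
import torch

def conservative_forces(model, positions: torch.Tensor, atom_types: torch.Tensor):
    """Compute forces as the negative gradient of a predicted scalar energy.
    `model(positions, atom_types) -> scalar energy` is an assumed interface."""
    positions = positions.detach().requires_grad_(True)
    energy = model(positions, atom_types)
    # create_graph=True keeps the graph so higher-order PES derivatives
    # (e.g. Hessians for vibrational analysis) remain available.
    forces = -torch.autograd.grad(energy, positions, create_graph=True)[0]
    return energy, forces
```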