
Beyond Muon: MUD (MomentUm Decorrelation) for Faster Transformer Training

arXiv cs.LG / March 19, 2026


Key Points

  • MUD (MomentUm Decorrelation) is introduced as a complementary whitening approach to Muon, replacing polar updates with a triangular whitening surrogate inspired by Gram-Schmidt and Gauss-Seidel.
  • The paper proves that row-orthonormal matrices are fixed points of the MUD map, relates the inner step to symmetric Gauss-Seidel preconditioning of the Gram matrix, and establishes quadratic local convergence near the fixed point.
  • Empirical results show consistent 10-50% wall-clock improvements in time-to-perplexity over tuned AdamW and Muon, along with roughly 1.3-2.6x higher peak tokens/s than Muon across most settings, rising to nearly 3x for GPT-2 large on an A100.
  • The method is demonstrated on training an ESM-2 150M protein language model, where MUD matches Muon-level validation perplexity in significantly less wall-clock time.
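The core idea in the key points above, replacing a polar-decomposition iteration with a triangular (Cholesky-like) whitening step, can be illustrated with a minimal numpy sketch. This is a hypothetical reconstruction of the general technique, not the paper's exact MUD inner step (which the summary does not spell out): factoring the Gram matrix G = M Mᵀ = L Lᵀ and solving the triangular system L W = M yields W with W Wᵀ = I, so rows of W are orthonormal, and a row-orthonormal input is (up to regularization) a fixed point, consistent with the fixed-point result the paper states.

```python
import numpy as np

def triangular_whiten(m, eps=1e-8):
    """Illustrative Cholesky-based row whitening (hypothetical sketch,
    not the paper's exact MUD update).

    With G = m @ m.T = L @ L.T and w = solve(L, m):
        w @ w.T = inv(L) @ G @ inv(L).T = I,
    so the rows of w are orthonormal. If m is already row-orthonormal,
    G = I, L = I, and w = m, i.e. such matrices are fixed points.
    """
    g = m @ m.T + eps * np.eye(m.shape[0])  # regularized Gram matrix
    l = np.linalg.cholesky(g)               # lower-triangular factor
    return np.linalg.solve(l, m)            # one triangular solve, no polar iteration

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 16))            # a small "momentum" matrix
w = triangular_whiten(m)
# rows of w are now orthonormal: w @ w.T is (numerically) the identity
```

The appeal, as the key points suggest, is cost: a Cholesky factorization plus a triangular solve replaces the repeated large matrix multiplications of a polar-factor iteration.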

Abstract

Orthogonalized-momentum optimizers such as Muon improve transformer training by approximately whitening/orthogonalizing matrix-valued momentum updates via a short polar-decomposition iteration. However, polar-factor approximations typically require multiple large matrix multiplications, and the resulting overhead can be substantial and hardware-dependent. We introduce MUD (MomentUm Decorrelation), a complementary whitening approach that replaces Muon's polar update with a triangular (Cholesky-like) whitening surrogate inspired by classical Gram--Schmidt and Gauss-Seidel ideas. We show that row-orthonormal matrices are fixed points of the MUD map, relate the inner step to symmetric Gauss-Seidel preconditioning of the Gram matrix, and prove quadratic local convergence near the fixed point. MUD yields consistent 10-50% wall-clock improvements in time-to-perplexity over tuned AdamW and Muon, typically converging slightly slower per step than Muon but with substantially lower optimizer overhead: relative to Muon, MUD improves peak tokens/s by roughly 1.3-2.6x across most settings and up to nearly 3x on GPT-2 large on an A100. We also demonstrate training an ESM-2 150M protein language model, where MUD matches Muon-level validation perplexity in significantly less wall-clock time.
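For context on the overhead the abstract attributes to polar-factor approximations, here is a sketch of a standard cubic Newton-Schulz polar iteration, a simplified stand-in for Muon's inner loop (Muon's production variant uses tuned higher-order coefficients, which this sketch does not reproduce). Each step costs two large matrix multiplications, which is exactly the per-step work a one-shot triangular solve avoids.

```python
import numpy as np

def newton_schulz_orthogonalize(m, steps=12):
    """Cubic Newton-Schulz iteration toward the polar factor of m
    (simplified stand-in for Muon's inner loop, for illustration only).

    After scaling so singular values lie in (0, 1], the map
    s -> 1.5*s - 0.5*s**3 drives every singular value toward 1,
    approximately orthogonalizing the rows of m.
    """
    x = m / (np.linalg.norm(m) + 1e-8)  # scale: singular values in (0, 1]
    for _ in range(steps):
        a = x @ x.T                      # first large matmul per step
        x = 1.5 * x - 0.5 * a @ x        # second large matmul per step
    return x

rng = np.random.default_rng(1)
m = rng.standard_normal((4, 16))
q = newton_schulz_orthogonalize(m)
# q @ q.T is (numerically) the identity after enough steps
```

Counting work makes the abstract's trade-off concrete: several iterations of two large matmuls each, versus a single Gram matrix, Cholesky factorization, and triangular solve for a Cholesky-style surrogate.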