Benchmarking Optimizers for MLPs in Tabular Deep Learning

arXiv cs.LG / 4/17/2026


Key Points

  • The paper addresses a gap in tabular deep learning by systematically benchmarking optimizers for training MLPs, rather than relying on AdamW as the default choice.
  • Under a shared experimental protocol across multiple tabular datasets and standard supervised learning, the Muon optimizer shows consistently better performance than AdamW.
  • The authors recommend Muon as a strong practical optimizer, provided its additional training-time overhead is acceptable.
  • They also find that using an exponential moving average (EMA) of model weights can improve AdamW performance on vanilla MLPs, though the benefit is less consistent for other model variants.

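The EMA technique mentioned above is simple to state in code. The sketch below is illustrative, not the authors' implementation: a shadow copy of the weights is blended toward the current weights after each optimizer step, and evaluation uses the shadow copy. The dict-of-floats representation and the `decay` value are assumptions for the sake of a minimal, self-contained example.

```python
def ema_update(ema_weights, weights, decay=0.999):
    """Blend the current weights into the running average, in place.

    A higher decay keeps a longer memory of past weights; typical
    values in practice are 0.99-0.9999 (an assumption, not a value
    taken from the paper).
    """
    for key, w in weights.items():
        ema_weights[key] = decay * ema_weights[key] + (1.0 - decay) * w
    return ema_weights


# Usage: after each training step, fold the fresh weights into the
# shadow copy; evaluate the model with the EMA weights instead of the
# raw ones.
ema = {"w": 0.0}
for step_weight in [1.0, 1.0, 1.0]:  # pretend the trained weight stays at 1.0
    ema_update(ema, {"w": step_weight}, decay=0.5)
# ema["w"] is now 0.875, converging toward the trained value 1.0
```

In a real training loop the same idea applies per-tensor (e.g. via `torch.optim.swa_utils.AveragedModel` with an EMA averaging function in PyTorch), and the EMA copy adds only memory cost, not extra gradient computation.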
Abstract

The MLP is a heavily used backbone in modern deep learning (DL) architectures for supervised learning on tabular data, and AdamW is the go-to optimizer for training tabular DL models. Unlike architecture design, however, the choice of optimizer for tabular DL has not been examined systematically, despite new optimizers showing promise in other domains. To fill this gap, we benchmark \Noptimizers optimizers on \Ndatasets tabular datasets for training MLP-based models in the standard supervised learning setting under a shared experimental protocol. Our main finding is that the Muon optimizer consistently outperforms AdamW and thus should be considered a strong, practical choice for practitioners and researchers, provided the associated training-time overhead is affordable. Additionally, we find that an exponential moving average (EMA) of model weights is a simple yet effective technique that improves AdamW on vanilla MLPs, though its effect is less consistent across model variants.