On Bayesian Softmax-Gated Mixture-of-Experts Models

arXiv stat.ML · April 23, 2026

📰 News · Models & Research

Key Points

  • The paper studies Bayesian mixture-of-experts (MoE) models that use the common softmax-based gating mechanism, aiming to fill a gap in understanding their Bayesian theoretical properties (a minimal sketch of such a model follows this list).
  • It derives asymptotic results for three key tasks—density estimation, parameter estimation, and model selection—covering both fixed and randomly learned numbers of experts.
  • For density estimation, the authors establish posterior contraction rates in both regimes: a fixed, known number of experts and a random, learnable number of experts.
  • For parameter estimation, they provide convergence guarantees using tailored Voronoi-type losses designed to handle the identifiability challenges of MoE models.
  • For model selection, the paper proposes and analyzes two complementary strategies to choose the number of experts, offering theory-backed guidance for practical MoE design.
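
To make the model concrete, here is a minimal sketch of a softmax-gated MoE conditional density with one-dimensional inputs and Gaussian experts. This is not the paper's code; the parameter names, the linear gates and expert means, and the Gaussian expert family are all illustrative assumptions.

```python
# Minimal sketch of a softmax-gated mixture-of-experts conditional density.
# Illustrative only: parameter names and the 1-D Gaussian setup are assumptions,
# not taken from the paper.
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_density(y, x, gate_w, gate_b, expert_a, expert_b, sigma):
    """Conditional density p(y | x) of a K-expert softmax-gated MoE.

    Gates:   pi_k(x) = softmax(gate_w[k] * x + gate_b[k])
    Experts: y | x, k ~ N(expert_a[k] * x + expert_b[k], sigma[k]^2)
    """
    logits = gate_w * x + gate_b        # gating logits, shape (K,)
    weights = softmax(logits)           # input-dependent gate probabilities pi_k(x)
    means = expert_a * x + expert_b     # expert means m_k(x)
    # Gaussian expert densities evaluated at y
    dens = np.exp(-0.5 * ((y - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.dot(weights, dens))  # sum_k pi_k(x) * f_k(y | x)

# Example: evaluate a 3-expert model at one (x, y) pair
K = 3
rng = np.random.default_rng(0)
p = moe_density(y=0.5, x=1.2,
                gate_w=rng.normal(size=K), gate_b=rng.normal(size=K),
                expert_a=rng.normal(size=K), expert_b=rng.normal(size=K),
                sigma=np.ones(K))
print(p)  # a single conditional density value p(y | x)
```

In the Bayesian setting studied here, priors would be placed on the gate and expert parameters (and possibly on K itself), and the paper's questions concern how the resulting posterior behaves as the sample size grows.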

Abstract

Mixture-of-experts models provide a flexible framework for learning complex probabilistic input-output relationships by combining multiple expert models through an input-dependent gating mechanism. These models have become increasingly prominent in modern machine learning, yet their theoretical properties in the Bayesian framework remain largely unexplored. In this paper, we study Bayesian mixture-of-experts models, focusing on the ubiquitous softmax-based gating mechanism. Specifically, we investigate the asymptotic behavior of the posterior distribution for three fundamental statistical tasks: density estimation, parameter estimation, and model selection. First, we establish posterior contraction rates for density estimation, both in the regimes with a fixed, known number of experts and with a random learnable number of experts. We then analyze parameter estimation and derive convergence guarantees based on tailored Voronoi-type losses, which account for the complex identifiability structure of mixture-of-experts models. Finally, we propose and analyze two complementary strategies for selecting the number of experts. Taken together, these results provide one of the first systematic theoretical analyses of Bayesian mixture-of-experts models with softmax gating, and yield several theory-grounded insights for practical model design.
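
To make the two kinds of guarantees concrete, the following is a schematic of what a posterior contraction statement and a Voronoi-type loss typically look like in the mixture literature; the paper's exact rates, norms, and loss definitions may differ.

```latex
% Schematic only; exact statements in the paper may differ.
% Posterior contraction for density estimation: given data X^{(n)} from a true
% density f_0, the posterior mass outside a shrinking Hellinger ball vanishes,
\[
  \Pi\!\left( f : h(f, f_0) > M \,\varepsilon_n \;\middle|\; X^{(n)} \right)
  \longrightarrow 0
  \qquad \text{for some rate } \varepsilon_n \to 0 .
\]
% Voronoi-type loss for parameter estimation: fitted atoms (\pi_i, \theta_i)
% are grouped by the nearest true atom \theta^0_j (Voronoi cells), and mixing
% weights and parameter errors are aggregated cell by cell,
\[
  \mathcal{A}_j = \bigl\{ i : \|\theta_i - \theta^0_j\| \le \|\theta_i - \theta^0_\ell\| \ \forall \ell \bigr\},
  \quad
  \mathcal{D}(G, G_0) = \sum_{j} \Bigl|\, \textstyle\sum_{i \in \mathcal{A}_j} \pi_i - \pi^0_j \Bigr|
  + \sum_{j} \sum_{i \in \mathcal{A}_j} \pi_i \,\|\theta_i - \theta^0_j\|^{\bar r_j}.
\]
```

Losses of this Voronoi type are designed so that convergence remains meaningful even when components can be merged, relabeled, or over-fitted, which is exactly the identifiability structure the abstract refers to.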