AI Navigate

Path-Constrained Mixture-of-Experts

arXiv cs.LG / 3/20/2026

📰 News · Models & Research

Key Points

  • PathMoE shares router parameters across consecutive layers to reduce the combinatorial path space in sparse MoE architectures, addressing statistical inefficiency from independent routing.
  • The method achieves consistent improvements in perplexity and downstream tasks on 0.9B and 16B parameter models, without requiring auxiliary load-balancing losses.
  • Analysis shows tokens following the same path cluster by linguistic function, with PathMoE producing more concentrated groups, better cross-layer consistency, and greater robustness to routing perturbations.
  • The work reframes MoE architectures around the concept of expert paths, offering new insights into design and analysis.

Abstract

Sparse Mixture-of-Experts (MoE) architectures enable efficient scaling by activating only a subset of parameters for each input. However, conventional MoE routing selects each layer's experts independently, creating N^L possible expert paths for N experts across L layers. This far exceeds typical training set sizes, leading to statistical inefficiency: the model may not learn meaningful structure over such a vast path space. To constrain this space, we propose PathMoE, which shares router parameters across consecutive layers. Experiments on 0.9B and 16B parameter models demonstrate consistent improvements in perplexity and downstream tasks over independent routing, while eliminating the need for auxiliary load-balancing losses. Analysis reveals that tokens following the same path naturally cluster by linguistic function, with PathMoE producing more concentrated groups, better cross-layer consistency, and greater robustness to routing perturbations. These results offer a new perspective for understanding MoE architectures through the lens of expert paths.
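The core argument can be made concrete with a small sketch. The summary above does not specify exactly how PathMoE ties router parameters together, so the grouping below (consecutive layer pairs sharing one router weight matrix, top-1 routing, and all dimensions) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_layers = 16, 8, 4

# With independent per-layer routing, the number of possible
# expert paths grows as N^L -- here 8^4 = 4096 for a toy model.
print(n_experts ** n_layers)  # → 4096

def route(x, routers):
    """Top-1 routing: the expert index chosen at each layer."""
    return [int(np.argmax(x @ W)) for W in routers]

x = rng.normal(size=d_model)

# Independent routing: a separate router weight matrix per layer.
independent = [rng.normal(size=(d_model, n_experts)) for _ in range(n_layers)]

# PathMoE-style constraint (assumed form): consecutive layers share
# a router, so layers 0-1 use W_a and layers 2-3 use W_b. This cuts
# the path space from N^4 to N^2 distinct paths for a fixed input.
W_a = rng.normal(size=(d_model, n_experts))
W_b = rng.normal(size=(d_model, n_experts))
shared = [W_a, W_a, W_b, W_b]

print(route(x, independent))
# With a static input, layers that share a router pick the same expert,
# yielding a constrained, more consistent path.
print(route(x, shared))
```

In a real network the token representation changes between layers, so shared routers constrain rather than fully determine the path; the sketch only shows how tying parameters shrinks the combinatorial space that independent routing creates.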