The Myth of Expert Specialization in MoEs: Why Routing Reflects Geometry, Not Necessarily Domain Expertise

arXiv cs.AI / 4/14/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that, because MoE routers are linear maps, similarity in expert usage is explained by hidden-state similarity in representation space rather than by domain expertise explicitly encoded in the routing architecture.
  • It proves that hidden-state similarity is both necessary and sufficient for similar expert selection, and validates this claim at both the token and sequence levels across five pre-trained models.
  • The authors show that load-balancing loss suppresses shared hidden-state directions to maintain routing diversity, offering a theoretical account for “specialization collapse” under less diverse data conditions such as small batch sizes.
  • Although the mechanistic explanation is mathematically grounded, the resulting specialization patterns are difficult to interpret by humans, with observed expert overlap and routing correlations failing to track intuitive semantic or model-to-model relationships.
  • The work concludes that fully understanding expert specialization is as hard as understanding LLM hidden-state geometry, reinforcing the open nature of the problem beyond efficiency-focused MoE perspectives.
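The core mechanistic claim, that a linear router makes expert choice a function of hidden-state geometry, can be illustrated with a minimal sketch. The router weights, dimensions, and perturbation scale below are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# A linear router, as in standard MoE layers: logits = h @ W
W_router = rng.normal(size=(d_model, n_experts))

def selected_experts(h, k=top_k):
    """Return the set of top-k experts chosen for hidden state h."""
    logits = h @ W_router
    return set(np.argsort(logits)[-k:].tolist())

h = rng.normal(size=d_model)
h_near = h + 0.01 * rng.normal(size=d_model)  # a nearby hidden state
h_far = rng.normal(size=d_model)              # an unrelated hidden state

# Nearby hidden states yield nearly identical logits, hence the same experts;
# an unrelated hidden state generally routes differently.
print(selected_experts(h) == selected_experts(h_near))
print(selected_experts(h), selected_experts(h_far))
```

Because the router applies no nonlinearity before the top-k selection, two hidden states that are close in representation space necessarily receive close logits, which is the sense in which specialization is a property of the representation space rather than of the router.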

Abstract

Mixture of Experts (MoEs) are now ubiquitous in large language models, yet the mechanisms behind their "expert specialization" remain poorly understood. We show that, since MoE routers are linear maps, hidden-state similarity is both necessary and sufficient to explain expert usage similarity, and specialization is therefore an emergent property of the representation space, not of the routing architecture itself. We confirm this at both the token and sequence levels across five pre-trained models. We additionally prove that load-balancing loss suppresses shared hidden-state directions to maintain routing diversity, which may provide a theoretical explanation for specialization collapse under less diverse data, e.g., small batch sizes. Despite this clean mechanistic account, we find that specialization patterns in pre-trained MoEs resist human interpretation: expert overlap between different models answering the same question is no higher than between entirely different questions (~60%); prompt-level routing does not predict rollout-level routing; and deeper layers exhibit near-identical expert activation across semantically unrelated inputs, especially in reasoning models. We conclude that, while the efficiency perspective of MoEs is well understood, understanding expert specialization is at least as hard as understanding LLM hidden-state geometry, a long-standing open problem in the literature.
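The load-balancing result in the abstract can be made concrete with a sketch of the standard Switch-Transformer-style auxiliary loss. The exact loss used by the paper's five models may differ; the formulation, batch shapes, and the "collapsed" construction below are illustrative assumptions:

```python
import numpy as np

def load_balancing_loss(logits):
    """Switch-style auxiliary loss: n_experts * sum_i f_i * P_i, where f_i is
    the fraction of tokens whose top-1 expert is i and P_i is the mean router
    probability assigned to expert i. It equals 1 under perfectly uniform routing."""
    n_tokens, n_experts = logits.shape
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    top1 = probs.argmax(axis=1)
    f = np.bincount(top1, minlength=n_experts) / n_tokens
    P = probs.mean(axis=0)
    return n_experts * (f * P).sum()

rng = np.random.default_rng(1)
# Diverse hidden states -> varied logits -> roughly uniform routing, loss near 1.
diverse = rng.normal(size=(512, 8))
# All tokens share one dominant direction -> identical logits -> one expert
# absorbs every token, and the loss grows, penalizing the shared direction.
collapsed = np.tile(rng.normal(size=(1, 8)), (512, 1))

print(load_balancing_loss(diverse))
print(load_balancing_loss(collapsed))
```

The loss is minimized only when tokens spread across experts, so gradient pressure pushes the representation space away from shared directions that would funnel all tokens to the same expert, consistent with the paper's account of why low data diversity (e.g., small batches) can trigger specialization collapse.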