Do Domain-specific Experts exist in MoE-based LLMs?

arXiv cs.CL / 4/8/2026


Key Points

  • The paper asks whether domain-specific expert behaviors actually emerge inside Mixture of Experts (MoE) LLMs, and tests this across ten advanced MoE models ranging from 3.8B to 120B parameters.
  • The authors provide empirical evidence that domain-specific experts do exist in MoE-based LLMs, addressing an open question about specialization and interpretability.
  • They introduce Domain Steering Mixture of Experts (DSMoE), a training-free approach intended to steer domain behavior without adding inference-time compute.
  • Experiments show DSMoE outperforms well-trained MoE LLMs and strong baselines such as Supervised Fine-Tuning (SFT), in both target and non-target domains.
  • The method is reported to deliver strong performance and robust generalization while keeping inference cost unchanged, and the implementation is released publicly on GitHub.

Abstract

In the era of Large Language Models (LLMs), the Mixture of Experts (MoE) architecture has emerged as an effective approach for training extremely large models with improved computational efficiency. This success builds upon extensive prior research aimed at enhancing expert specialization in MoE-based LLMs. However, the nature of such specializations and how they can be systematically interpreted remain open research challenges. In this work, we investigate this gap by posing a fundamental question: *Do domain-specific experts exist in MoE-based LLMs?* To answer the question, we evaluate ten advanced MoE-based LLMs ranging from 3.8B to 120B parameters and provide empirical evidence for the existence of domain-specific experts. Building on this finding, we propose **Domain Steering Mixture of Experts (DSMoE)**, a training-free framework that introduces zero additional inference cost and outperforms both well-trained MoE-based LLMs and strong baselines, including Supervised Fine-Tuning (SFT). Experiments on four advanced open-source MoE-based LLMs across both target and non-target domains demonstrate that our method achieves strong performance and robust generalization without increasing inference cost or requiring additional retraining. Our implementation is publicly available at https://github.com/giangdip2410/Domain-specific-Experts.
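The abstract does not spell out DSMoE's mechanism, but one common way to steer an MoE toward a domain without retraining or extra inference compute is to (1) profile which experts activate most often on domain text, then (2) add a fixed bias to those experts' router logits at inference time, leaving the top-k size unchanged. The minimal sketch below illustrates that two-step idea on a toy router; every name and number in it (`route`, `router_w`, the bias strength `2.0`, the synthetic "domain tokens") is an illustrative assumption, not the paper's actual implementation.

```python
import math
import random

random.seed(0)

NUM_EXPERTS, TOP_K, HIDDEN = 8, 2, 16

# Hypothetical router weight matrix for a single MoE layer (HIDDEN x NUM_EXPERTS).
router_w = [[random.gauss(0, 1) for _ in range(NUM_EXPERTS)] for _ in range(HIDDEN)]

def route(x, expert_bias=None, top_k=TOP_K):
    """Standard top-k softmax routing, optionally with a per-expert logit bias."""
    logits = [sum(x[i] * router_w[i][e] for i in range(HIDDEN))
              for e in range(NUM_EXPERTS)]
    if expert_bias is not None:
        logits = [l + b for l, b in zip(logits, expert_bias)]
    top = sorted(range(NUM_EXPERTS), key=lambda e: logits[e])[-top_k:]
    m = max(logits[e] for e in top)
    exps = [math.exp(logits[e] - m) for e in top]
    z = sum(exps)
    return top, [w / z for w in exps]  # selected experts and their gate weights

# Step 1 (offline): count which experts fire most often on domain text to
# identify "domain-specific" experts; synthetic tokens stand in for real data.
counts = [0] * NUM_EXPERTS
for _ in range(200):
    tok = [random.gauss(0, 1) for _ in range(HIDDEN)]
    for e in route(tok)[0]:
        counts[e] += 1
domain_experts = sorted(range(NUM_EXPERTS), key=lambda e: counts[e])[-TOP_K:]

# Step 2 (inference): add a fixed bias to those experts' router logits.
# Top-k and expert count are unchanged, so per-token compute is the same.
bias = [2.0 if e in domain_experts else 0.0 for e in range(NUM_EXPERTS)]

x = [random.gauss(0, 1) for _ in range(HIDDEN)]
print("plain:", sorted(route(x)[0]), "steered:", sorted(route(x, bias)[0]))
```

The bias strength acts as a steering knob: larger values push routing harder toward the profiled experts, while zero recovers the original model exactly, which is consistent with the "zero additional inference cost" claim (only an addition to existing logits).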