Teacher-Guided Routing for Sparse Vision Mixture-of-Experts

arXiv cs.CV / 4/24/2026

📰 News · Models & Research

Key Points

  • The paper targets a key optimization challenge in Sparse Mixture-of-Experts (MoE): the router only receives learning signals through the experts activated in the forward pass, which can cause gradient blocking and unstable routing.
  • It introduces TGR-MoE (Teacher-Guided Routing for Sparse Vision MoE) that builds a teacher router from a pretrained dense model’s intermediate representations and uses the teacher’s routing outputs as pseudo-supervision for the student router.
  • This teacher-guided pseudo-labeling reduces frequent expert-assignment fluctuations during training, leading to more stable router learning from the early stages.
  • Experiments on ImageNet-1K and CIFAR-100 show that TGR-MoE improves accuracy and routing consistency while preserving stable training behavior even with highly sparse expert activation settings.
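The teacher-guided pseudo-labeling in the points above can be viewed as a distillation-style auxiliary loss on the router. The sketch below is a minimal illustration under assumptions, not the paper's implementation: the KL formulation, the `temperature` parameter, and the idea of treating the teacher's routing distribution as a soft target are all hypothetical choices consistent with the description, and `routing_distill_loss` is an invented name.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the expert dimension.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def routing_distill_loss(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student) between expert-selection distributions.

    student_logits: (tokens, experts) scores from the sparse model's router.
    teacher_logits: (tokens, experts) scores from a router head built on the
        pretrained dense teacher's intermediate features (assumed form).
    Minimizing this pulls the student's routing toward the teacher's,
    giving the router a dense learning signal over ALL experts, not just
    the ones activated in the forward pass.
    """
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = np.sum(t * (np.log(t + 1e-9) - np.log(s + 1e-9)), axis=-1)
    return float(np.mean(kl))

# Toy example: 4 tokens routed over 8 experts.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 8))
teacher = rng.normal(size=(4, 8))
loss = routing_distill_loss(student, teacher)       # positive when they disagree
self_loss = routing_distill_loss(teacher, teacher)  # zero when they agree
```

In practice such a term would be added to the task loss with a small weight, so the pseudo-supervision steadies early routing without overriding the task gradient.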

Abstract

Recent progress in deep learning has been driven by increasingly large-scale models, but the resulting computational cost has become a critical bottleneck. Sparse Mixture-of-Experts (MoE) offers an effective solution by activating only a small subset of experts for each input, achieving high scalability without sacrificing inference speed. Although effective, sparse MoE training exhibits characteristic optimization difficulties. Because the router receives informative gradients only through the experts selected in the forward pass, it suffers from gradient blocking and obtains little information from unselected routes. This limited, highly localized feedback makes it difficult for the router to learn appropriate expert-selection scores and often leads to unstable routing dynamics, such as fluctuating expert assignments during training. To address this issue, we propose TGR-MoE: Teacher-Guided Routing for Sparse Vision Mixture-of-Experts, a simple yet effective method that stabilizes router learning using supervision derived from a pretrained dense teacher model. TGR-MoE constructs a teacher router from the teacher's intermediate representations and uses its routing outputs as pseudo-supervision for the student router, suppressing frequent routing fluctuations during training and enabling knowledge-guided expert selection from the early stages of training. Extensive experiments on ImageNet-1K and CIFAR-100 demonstrate that TGR-MoE consistently improves both accuracy and routing consistency, while maintaining stable training even under highly sparse configurations.
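The "gradient blocking" the abstract describes follows directly from top-k routing: the layer's output is a sum over only the selected experts, so the hard-zeroed gates of unselected experts contribute nothing to the forward pass and receive no informative gradient. A minimal top-k gating sketch (hypothetical shapes and function names, not the paper's code) makes this concrete:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def topk_route(router_logits, k):
    """Sparse gates: softmax over all experts, then keep only the top-k.

    The MoE output is sum_e gates[:, e] * expert_e(x) over the k selected
    experts only. Unselected gates are hard-zeroed by a non-differentiable
    selection, so the router gets no gradient about those routes from this
    forward pass -- the feedback the abstract calls limited and localized.
    """
    probs = softmax(router_logits)
    idx = np.argsort(probs, axis=-1)[:, -k:]  # indices of the top-k experts
    gates = np.zeros_like(probs)
    np.put_along_axis(gates, idx,
                      np.take_along_axis(probs, idx, axis=-1), axis=-1)
    gates = gates / gates.sum(axis=-1, keepdims=True)  # renormalize over top-k
    return gates

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))  # 4 tokens, 8 experts
gates = topk_route(logits, k=2)   # each row: exactly 2 nonzero gates, sum 1
```

Under this view, the teacher's routing distribution supplies a training signal that covers every expert column, which is how the pseudo-supervision can counteract the sparsity of the gradient.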