Expert Upcycling: Shifting the Compute-Efficient Frontier of Mixture-of-Experts

arXiv cs.LG / 4/23/2026


Key Points

  • The paper addresses the high training cost of large Mixture-of-Experts (MoE) models—where memory and inter-device communication scale with total parameters—while noting that MoE can improve quality by increasing expert count under fixed per-token compute.
  • It proposes “expert upcycling,” which expands an already-trained MoE by duplicating experts and extending the router during continued pre-training (CPT), while keeping top-K routing fixed so that per-token inference cost is preserved (see the sketch after this list).
  • Duplication provides a warm-start initialization, so the enlarged model begins CPT from a substantially lower loss than random initialization; specialization then emerges as CPT breaks the symmetry among the duplicated experts.
  • The authors provide theory that decomposes the quality improvement gap into an initialization term and a capacity term, and they introduce utility-based expert selection for non-uniform duplication using gradient-based importance scores.
  • Experiments on 7B–13B total-parameter MoE models show the upcycled model can match fixed-size baselines on validation loss while saving about 32% of GPU hours, with utility-based selection providing further gains when the CPT budget is limited.
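
To make the duplication step concrete, here is a minimal PyTorch sketch of a duplication-based upcycling operator in the spirit of the paper: expert weights are copied and router rows are tiled while top-K stays fixed. The class and function names (`SimpleMoE`, `upcycle`) and the uniform-duplication setup are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SimpleMoE(nn.Module):
    """Toy top-K MoE layer: a linear router over E expert MLPs."""

    def __init__(self, d_model, d_ff, num_experts, top_k):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


def upcycle(moe: SimpleMoE, m: int) -> SimpleMoE:
    """Expand an E-expert layer to m*E experts by duplication; top-K stays fixed."""
    E = len(moe.experts)
    d_model = moe.router.in_features
    d_ff = moe.experts[0][0].out_features
    big = SimpleMoE(d_model, d_ff, m * E, moe.top_k)
    with torch.no_grad():
        for e in range(E):
            for c in range(m):
                # Each copy inherits the source expert's weights (warm start).
                big.experts[m * e + c].load_state_dict(moe.experts[e].state_dict())
                # Router rows are tiled, so copies start with identical routing
                # logits; continued pre-training is what later breaks this symmetry.
                big.router.weight[m * e + c].copy_(moe.router.weight[e])
    return big
```

Because the expanded model reproduces the source model's routing and outputs at initialization, CPT starts from the source checkpoint's loss rather than from a random-initialization loss.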

Abstract

Mixture-of-Experts (MoE) has become the dominant architecture for scaling large language models: frontier models routinely decouple total parameters from per-token computation through sparse expert routing. Scaling laws show that under fixed active computation, model quality scales predictably with total parameters, and MoEs realize this by increasing expert count. However, training large MoEs is expensive, as memory requirements and inter-device communication both scale with total parameter count. We propose expert upcycling, a method for progressively expanding MoE capacity by increasing the number of experts during continued pre-training (CPT). Given a trained E-expert model, the upcycling operator constructs an mE-expert model through expert duplication and router extension while holding top-K routing fixed, preserving per-token inference cost. Duplication provides a warm initialization: the expanded model inherits the source checkpoint's learned representations, starting from a substantially lower loss than random initialization. Subsequent CPT then breaks the symmetry among duplicated experts to drive specialization. We formalize the upcycling operator and develop a theoretical framework decomposing the quality gap into a capacity term and an initialization term. We further introduce utility-based expert selection, which uses gradient-based importance scores to guide non-uniform duplication, more than tripling gap closure when CPT is limited. In our 7B-13B total parameter experiments, the upcycled model matches the fixed-size baseline on validation loss while saving 32% of GPU hours. Comprehensive ablations across model scales, activation ratios, MoE architectures, and training budgets yield a practical recipe for deploying expert upcycling, establishing it as a principled, compute-efficient alternative to training large MoE models from scratch.
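
The abstract does not specify how the gradient-based importance scores or the non-uniform duplication budget are computed. Below is a minimal sketch of one plausible realization, assuming per-expert utility is the L2 norm of that expert's parameter gradients on a small CPT batch and that extra copies are allocated proportionally to utility; the function names `expert_utility_scores` and `duplication_counts` and the scoring rule itself are hypothetical, not the paper's exact method.

```python
import torch


def expert_utility_scores(moe, loss_fn, batch):
    """Score each expert by the L2 norm of its parameter gradients on one batch.

    `moe` is an upcyclable layer like SimpleMoE above; `loss_fn(output)` returns a
    scalar training loss for this sketch. Higher score = more "important" expert.
    """
    moe.zero_grad()
    loss_fn(moe(batch)).backward()
    scores = []
    for expert in moe.experts:
        sq = sum(
            ((p.grad ** 2).sum() for p in expert.parameters() if p.grad is not None),
            torch.tensor(0.0),
        )
        scores.append(torch.sqrt(sq).item())
    return scores


def duplication_counts(scores, extra_copies):
    """Allocate a fixed budget of extra expert copies proportionally to scores."""
    total = sum(scores) or 1.0
    raw = [extra_copies * s / total for s in scores]
    counts = [int(r) for r in raw]
    # Hand out any leftover copies to the experts with the largest remainders.
    leftovers = extra_copies - sum(counts)
    order = sorted(range(len(scores)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftovers]:
        counts[i] += 1
    return counts  # counts[e] = extra copies of expert e, on top of the original
```

Under this reading, a non-uniform upcycling operator would keep one copy of every expert and add `counts[e]` further copies of expert e, rather than duplicating every expert m times as in the uniform sketch above.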