LightMoE: Reducing Mixture-of-Experts Redundancy through Expert Replacing

arXiv cs.AI / 3/16/2026

Key Points

  • The paper proposes expert replacing, a paradigm that substitutes redundant MoE experts with parameter-efficient modules to reduce memory usage while preserving capabilities.
  • LightMoE advances the idea with adaptive expert selection, hierarchical expert construction, and an annealed recovery strategy to minimize additional training cost.
  • Empirical results show LightMoE matches LoRA fine-tuning at 30% compression and, at 50% compression, outperforms existing methods with an average improvement of 5.6% across five tasks.
  • Overall, LightMoE demonstrates a favorable trade-off among memory efficiency, training efficiency, and performance for MoE-based large language models.

Abstract

Mixture-of-Experts (MoE) based Large Language Models (LLMs) have demonstrated impressive performance and computational efficiency. However, their deployment is often constrained by substantial memory demands, primarily due to the need to load numerous expert modules. While existing expert compression techniques such as pruning or merging attempt to mitigate this, they often suffer from irreversible knowledge loss or high training overhead. In this paper, we propose a novel expert compression paradigm termed expert replacing, which replaces redundant experts with parameter-efficient modules and recovers their capabilities at low training cost. We find that even a straightforward baseline of this paradigm yields promising performance. Building on this foundation, we introduce LightMoE, a framework that enhances the paradigm with adaptive expert selection, hierarchical expert construction, and an annealed recovery strategy. Experimental results show that LightMoE matches the performance of LoRA fine-tuning at a 30% compression ratio. Even under a more aggressive 50% compression ratio, it outperforms existing methods, achieving an average performance improvement of 5.6% across five diverse tasks. These findings demonstrate that LightMoE strikes a superior balance among memory efficiency, training efficiency, and model performance.
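To make the core idea concrete, here is a minimal sketch of why replacing a redundant expert with a parameter-efficient module saves memory. This is not the paper's actual implementation: the `Expert` and `LowRankReplacement` classes, the dimensions, and the rank-`r` bottleneck (a LoRA-style stand-in) are illustrative assumptions.

```python
import numpy as np

class Expert:
    """A standard MoE expert: a two-layer feed-forward block."""
    def __init__(self, d_model, d_ff, rng):
        self.w1 = rng.standard_normal((d_model, d_ff)) * 0.02
        self.w2 = rng.standard_normal((d_ff, d_model)) * 0.02

    def __call__(self, x):
        # ReLU MLP: x -> max(x W1, 0) W2
        return np.maximum(x @ self.w1, 0.0) @ self.w2

    def n_params(self):
        return self.w1.size + self.w2.size

class LowRankReplacement:
    """Hypothetical parameter-efficient stand-in for a redundant expert:
    a rank-r bottleneck, initialized so it starts as a no-op (as in
    LoRA-style adapters) and is then trained to recover the expert."""
    def __init__(self, d_model, rank, rng):
        self.a = rng.standard_normal((d_model, rank)) * 0.02
        self.b = np.zeros((rank, d_model))  # zero init: output starts at 0

    def __call__(self, x):
        return x @ self.a @ self.b

    def n_params(self):
        return self.a.size + self.b.size

rng = np.random.default_rng(0)
d_model, d_ff, rank = 1024, 4096, 16  # illustrative sizes
expert = Expert(d_model, d_ff, rng)
cheap = LowRankReplacement(d_model, rank, rng)
print(expert.n_params(), cheap.n_params())  # 8388608 vs 32768 (~256x smaller)
```

Under these toy dimensions, each replaced expert shrinks from roughly 8.4M parameters to 33K, which is the source of the memory savings; the recovery training the paper describes would then fine-tune the small module so the compressed model's outputs approximate the original expert's.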