Mixture of Heterogeneous Grouped Experts for Language Modeling

arXiv cs.CL / 4/28/2026


Key Points

  • The paper introduces Mixture of Heterogeneous Grouped Experts (MoHGE) as a practical heterogeneous MoE design to better match compute costs with token-level complexity.
  • It uses a two-level routing mechanism to select expert combinations in a flexible, resource-aware way, so that each token is matched to an expert group suited to the compute it needs (a hedged routing sketch follows this list).
  • To increase inference efficiency, the authors propose a Group-Wise Auxiliary Loss that steers tokens toward parameter-efficient expert groups based on task difficulty.
  • For real-world deployment, the paper addresses GPU load balancing with an All-size Group-decoupling Allocation strategy plus an Intra-Group Experts Auxiliary Loss to keep computation evenly distributed across GPUs.
  • Experiments show MoHGE achieves performance comparable to standard MoE while cutting total parameters by about 20% and maintaining balanced GPU utilization, and the code is released publicly.
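
The two-level routing mentioned above can be pictured as a small PyTorch sketch: a first gate picks an expert group (each group holding experts of one size, i.e. one compute budget), and a second gate picks experts inside that group. The class name `TwoLevelRouter`, the layer layout, and the hard group selection are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelRouter(nn.Module):
    """Sketch of two-level routing over heterogeneous expert groups.

    Level 1 chooses an expert group (a compute budget); level 2 chooses
    top-k experts within that group. Shapes and names are illustrative.
    """

    def __init__(self, d_model: int, num_groups: int, experts_per_group: int, top_k: int = 2):
        super().__init__()
        self.group_gate = nn.Linear(d_model, num_groups)          # level 1: pick a group
        self.expert_gates = nn.ModuleList(                        # level 2: pick experts in the group
            [nn.Linear(d_model, experts_per_group) for _ in range(num_groups)]
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, d_model)
        group_probs = F.softmax(self.group_gate(x), dim=-1)       # (tokens, groups)
        group_idx = group_probs.argmax(dim=-1)                    # hard group choice per token

        expert_idx = torch.empty(x.size(0), self.top_k, dtype=torch.long, device=x.device)
        expert_w = torch.empty(x.size(0), self.top_k, device=x.device)
        for g, gate in enumerate(self.expert_gates):
            mask = group_idx == g
            if mask.any():
                probs = F.softmax(gate(x[mask]), dim=-1)          # (tokens_in_g, experts_per_group)
                w, idx = probs.topk(self.top_k, dim=-1)
                expert_idx[mask], expert_w[mask] = idx, w
        return group_idx, expert_idx, expert_w, group_probs
```

The returned `group_probs` would feed a group-level auxiliary loss, while `expert_idx` and per-group expert probabilities would feed an intra-group balancing term.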

Abstract

Large Language Models (LLMs) based on Mixture-of-Experts (MoE) are pivotal in industrial applications for their ability to scale performance efficiently. However, standard MoEs enforce uniform expert sizes, creating a rigidity that fails to align computational costs with varying token-level complexity. While heterogeneous expert architectures attempt to address this by diversifying expert sizes, they often suffer from significant system-level challenges, specifically unbalanced GPU utilization and inefficient parameter utilization, which hinder practical deployment. To bridge the gap between theoretical heterogeneity and robust industrial application, we propose Mixture of Heterogeneous Grouped Experts (MoHGE), which introduces a two-level routing mechanism to enable flexible, resource-aware expert combinations. To optimize inference efficiency, we propose a Group-Wise Auxiliary Loss, which dynamically steers tokens to the most parameter-efficient expert groups based on task difficulty. To address the critical deployment challenge of GPU load balancing, we introduce an All-size Group-decoupling Allocation strategy coupled with an Intra-Group Experts Auxiliary Loss. These mechanisms collectively ensure uniform computation distribution across GPUs. Extensive evaluations demonstrate that MoHGE matches the performance of MoE architectures while reducing the total parameters by approximately 20% and maintaining balanced GPU utilization. Our work establishes a scalable paradigm for resource-efficient MoE design, offering a practical solution for optimizing inference costs in real-world scenarios. The code is publicly available at https://github.com/UnicomAI/MoHGE.
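
As a rough illustration of the two auxiliary objectives in the abstract, the sketch below pairs a group-level cost penalty (nudging tokens toward cheaper, parameter-efficient groups) with a Switch-Transformer-style balance term applied inside one group. The function names, the per-group parameter-cost vector, and the exact loss forms are assumptions made for the example; the paper's actual Group-Wise and Intra-Group losses may be defined differently.

```python
import torch

def group_wise_aux_loss(group_probs: torch.Tensor, group_param_costs: torch.Tensor) -> torch.Tensor:
    """Hypothetical group-wise term: penalize the expected parameter cost of the
    routed groups so the gate prefers cheap groups unless a token needs more.

    group_probs: (tokens, groups) softmax outputs from the level-1 gate.
    group_param_costs: (groups,) normalized parameter count per group.
    """
    expected_cost = (group_probs * group_param_costs).sum(dim=-1)  # per-token expected cost
    return expected_cost.mean()

def intra_group_balance_loss(expert_probs: torch.Tensor, expert_idx: torch.Tensor,
                             num_experts: int) -> torch.Tensor:
    """Switch-style load-balancing term inside one group, so experts placed on
    different GPUs receive similar traffic. Also a sketch, not the paper's code.

    expert_probs: (tokens_in_group, num_experts) level-2 gate probabilities.
    expert_idx:   (tokens_in_group, top_k) chosen expert indices.
    """
    # fraction of dispatched tokens per expert
    dispatch = torch.bincount(expert_idx.flatten(), minlength=num_experts).float()
    dispatch = dispatch / dispatch.sum().clamp(min=1.0)
    # mean routing probability per expert
    mean_prob = expert_probs.mean(dim=0)
    return num_experts * (dispatch * mean_prob).sum()
```

Both terms would be added, with small weights, to the language-modeling loss during training; the first trades quality against parameter cost, the second keeps per-GPU expert load even.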