MoEITS: A Green AI approach for simplifying MoE-LLMs

arXiv cs.LG / 4/14/2026


Key Points

  • The paper proposes MoEITS, an algorithm aimed at simplifying Mixture-of-Experts (MoE) LLMs to reduce compute, memory footprint, and energy consumption.
  • MoEITS is built on standardized information-theoretic frameworks and is evaluated through both theoretical complexity analysis and practical experiments.
  • Experiments compare MoEITS against state-of-the-art MoE-LLM pruning methods on models including Mixtral 8×7B, Qwen1.5-2.7B, and DeepSeek-V2-Lite.
  • Results indicate MoEITS produces simplified models that maintain accuracy across benchmarks while achieving strong computational efficiency improvements.
  • The authors state that the implementation code will be released on GitHub, supporting reproducibility and adoption.

Abstract

Large language models are transforming all areas of academia and industry, attracting the attention of researchers, professionals, and the general public. In the quest for more powerful architectures, Mixture-of-Experts models, inspired by ensembles, have emerged as one of the most effective approaches. However, they impose a high computational burden for both training and inference. To reduce their compute and memory footprint as well as their energy consumption, simplification methods have arisen as very effective procedures. In this paper, an original algorithm for MoE-LLM simplification, MoEITS, is presented. The algorithm is characterized by a refined simplicity, underpinned by standardized information-theoretic frameworks. MoEITS is analyzed in depth from theoretical and practical points of view: its computational complexity is studied, and its effect on the accuracy of the simplified LLMs, together with the reduction rate achieved, is assessed through thoroughly designed experiments. This empirical evaluation includes a comparison with state-of-the-art MoE-LLM pruning methods applied to Mixtral 8×7B, Qwen1.5-2.7B, and DeepSeek-V2-Lite. The extensive experimentation demonstrates that MoEITS outperforms state-of-the-art techniques, generating models that are both effective across all benchmarks and computationally efficient. The code implementing the method will be available at https://github.com/luisbalru/MoEITS.
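The abstract does not detail how MoEITS scores experts, but the general idea behind MoE-LLM expert pruning can be illustrated with a minimal sketch: estimate how often the router actually uses each expert over a calibration batch, then keep only the most-used experts. All names and the usage-based score below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def expert_usage_scores(router_probs: np.ndarray) -> np.ndarray:
    """Average routing probability per expert over a batch of tokens.

    router_probs: array of shape (num_tokens, num_experts) holding the
    softmax outputs of the MoE gate for each token.
    """
    return router_probs.mean(axis=0)

def prune_experts(router_probs: np.ndarray, keep: int) -> np.ndarray:
    """Return the indices (ascending) of the `keep` most-used experts.

    This is a simple usage-based heuristic, NOT the MoEITS criterion.
    """
    scores = expert_usage_scores(router_probs)
    top = np.argsort(scores)[::-1][:keep]
    return np.sort(top)

# Toy example: 6 tokens routed over 4 experts; experts 0 and 1 dominate.
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
])
kept = prune_experts(probs, keep=2)  # → array([0, 1])
```

Dropping the pruned experts' feed-forward weights is what yields the memory and energy savings the paper targets; the accuracy cost of doing so is exactly what the reported benchmarks measure.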