AIMER: Calibration-Free Task-Agnostic MoE Pruning

arXiv cs.LG / 3/20/2026

Key Points

  • AIMER introduces a calibration-free criterion for ranking experts in Mixture-of-Experts (MoE) language models, enabling pruning without a calibration set.
  • The AIMER score (Absolute mean over root mean square IMportance for Expert Ranking) yields clear within-layer score separation and distinct expert stratification.
  • Across 7B to 30B MoE models and 25% and 50% pruning ratios, it delivers competitive or stronger performance versus calibration-based baselines on 16 benchmarks.
  • Scoring the experts requires only 0.22–1.27 seconds, enabling efficient deployment by reducing memory and serving overhead.

Abstract

Mixture-of-Experts (MoE) language models increase parameter capacity without proportional per-token compute, but deployment still requires storing all experts, making expert pruning important for reducing memory and serving overhead. Existing task-agnostic expert pruning methods are typically calibration-dependent: they estimate expert importance from routing or activation statistics on a calibration set, which makes pruning outcomes sensitive to the choice of calibration set and adds substantial preprocessing cost. We introduce AIMER (Absolute mean over root mean square IMportance for Expert Ranking), a simple calibration-free criterion that yields clear within-layer score separation and distinct expert stratification. Across 7B to 30B MoE language models at 25% and 50% pruning ratios over 16 benchmarks, AIMER consistently delivers competitive or stronger overall performance against state-of-the-art calibration-based expert pruning baselines with only 0.22–1.27 seconds for scoring the experts.
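Reading the acronym literally, the criterion scores each expert as the absolute mean of its weights divided by their root mean square, then keeps the highest-scoring experts in each layer. The sketch below illustrates that interpretation; the exact tensors AIMER aggregates, the direction of the ranking, and the function names are assumptions, not the paper's implementation.

```python
import math

def aimer_score(weights):
    """Hypothetical AIMER-style score for one expert.

    Computes mean(|w|) / rms(w) over a flat list of the expert's
    parameters. Which weight matrices are pooled is an assumption here.
    """
    w = [float(x) for x in weights]
    abs_mean = sum(abs(x) for x in w) / len(w)
    rms = math.sqrt(sum(x * x for x in w) / len(w))
    return abs_mean / rms

def prune_layer(experts, keep_ratio=0.75):
    """Rank one layer's experts by score and keep the top fraction.

    `experts` is a list of per-expert weight lists; returns the sorted
    indices of experts to retain (assuming higher score = more important).
    """
    scores = [(aimer_score(w), i) for i, w in enumerate(experts)]
    ranked = sorted(scores, key=lambda t: t[0], reverse=True)
    n_keep = max(1, round(keep_ratio * len(experts)))
    return sorted(i for _, i in ranked[:n_keep])
```

Because the score needs only the weights themselves, no forward passes or calibration data are involved, which is consistent with the sub-second scoring times reported above.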