AI Navigate

Expert Threshold Routing for Autoregressive Language Modeling with Dynamic Computation Allocation and Load Balancing

arXiv cs.AI / 3/13/2026


Key Points

  • The paper presents Expert Threshold (ET) routing for token-choice Mixture-of-Experts in autoregressive language models, enabling dynamic computation allocation without auxiliary load-balancing losses.
  • Each expert maintains an exponential moving average (EMA) threshold derived from the global token distribution, and a token is routed to that expert if its score exceeds the threshold.
  • This routing is fully causal and token-level, requiring no dependence on other tokens in the batch, and works during both training and inference.
  • In pretraining experiments with a 2.4B-parameter model on FineWeb-Edu, ET achieves cross-entropy loss 0.067 lower than TC-MoE, equivalent to reaching the same performance with 1.6× fewer tokens.
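The routing rule described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the class name, the `target_rate` knob, and the use of a per-batch quantile as a stand-in for the paper's estimate over the global token distribution are all assumptions.

```python
import numpy as np

class ExpertThresholdRouter:
    """Sketch of ET routing: each expert keeps an EMA threshold over
    routing scores; a token is routed to an expert iff its score for
    that expert exceeds the expert's current threshold."""

    def __init__(self, num_experts, target_rate=0.25, momentum=0.9):
        self.thresholds = np.zeros(num_experts)  # per-expert EMA thresholds
        self.target_rate = target_rate           # assumed knob: desired fraction of tokens per expert
        self.momentum = momentum                 # EMA decay for threshold updates

    def update(self, scores):
        # scores: (num_tokens, num_experts) routing scores.
        # Estimate the per-expert score quantile that would admit
        # `target_rate` of tokens, then fold it into the EMA threshold.
        # (Proxy for the paper's estimate from the global token distribution.)
        q = np.quantile(scores, 1.0 - self.target_rate, axis=0)
        self.thresholds = self.momentum * self.thresholds + (1.0 - self.momentum) * q

    def route(self, token_scores):
        # token_scores: (num_experts,) scores for a single token.
        # Fully causal: the decision depends only on this token's scores
        # and the running thresholds, never on other tokens in the batch.
        return token_scores > self.thresholds
```

Because each expert's threshold tracks the score distribution it actually sees, experts that are admitting too many tokens raise their bar and under-used experts lower it, which is how load balance can emerge without an auxiliary loss; the number of experts activated per token is also free to vary, giving dynamic computation allocation.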

Abstract

Token-choice Mixture-of-Experts (TC-MoE) routes each token to a fixed number of experts, limiting dynamic computation allocation and requiring auxiliary losses to maintain load balance. We propose Expert Threshold (ET) routing, where each expert maintains an exponential moving average (EMA) threshold estimated from the global token distribution. At both training and inference, each token is independently routed to an expert if its score exceeds the expert's threshold, enabling dynamic computation allocation while achieving load balance without auxiliary losses. This fully causal mechanism eliminates dependence on other tokens in the batch, making it well-suited for autoregressive language modeling. In pretraining experiments scaling to 2.4B parameters on FineWeb-Edu, ET achieves 0.067 lower cross-entropy loss than TC-MoE, equivalent to reaching the same performance with 1.6× fewer tokens.