Interesting pattern: despite wildly different total sizes, many recent MoE models converge on roughly 10B active params. Qwen 3.5 122B activates 10B; MiniMax M2.7 runs 230B total with 10B active via top-2 routing.
Training cost scales as C ≈ 6 × N_active × T. At 10B active params and 15T tokens, that's ~9e23 FLOPs, roughly 1/7th the compute of a dense 70B on the same data. The economics practically force this convergence.
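To make the comparison concrete, here's the arithmetic from the post, using the standard 6·N·T approximation (the ratio is compute only, not quality):

```python
# Compare training-compute estimates via C ≈ 6 * N_active * T,
# using the numbers from the post.

def train_flops(n_active: float, tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6 * n_active * tokens

moe = train_flops(10e9, 15e12)    # 10B active params, 15T tokens
dense = train_flops(70e9, 15e12)  # dense 70B, same token budget

print(f"MoE:   {moe:.1e} FLOPs")    # ~9.0e+23
print(f"Dense: {dense:.1e} FLOPs")  # ~6.3e+24
print(f"Ratio: {dense / moe:.1f}x") # 7.0x
```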
Has anyone measured real inference memory scaling when expert count increases but active params stay fixed? KV cache seems to dominate past 32k context regardless.
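One reason to expect that: KV-cache size depends only on layer count, KV heads, head dim, and context length, not on the number of experts. A back-of-envelope sketch (the config values below are illustrative assumptions, not any specific model):

```python
# Rough KV-cache footprint; note that expert count never appears,
# so growing the expert pool shouldn't change this term at all.
# Config values are hypothetical, chosen only for illustration.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2, batch: int = 1) -> int:
    # Factor of 2 covers both keys and values.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes * batch

gb = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128,
                    seq_len=32_768) / 1e9
print(f"~{gb:.1f} GB per sequence at 32k context")  # ~6.4 GB
```

Expert weights add a large but *fixed* resident cost, while the KV cache grows linearly with context, so past some context length the cache dominates regardless of expert count.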