
PolyGLU: State-Conditional Activation Routing in Transformer Feed-Forward Networks

arXiv cs.LG / 3/17/2026

📰 News / Models & Research

Key Points

  • PolyGLU is a drop-in replacement for SwiGLU that lets each FFN neuron dynamically route among four activation functions via a differentiable mechanism combining learned static preferences with input-conditioned gating, trained end-to-end with Gumbel-Softmax.
  • The authors train PolychromaticLM, a 597M-parameter transformer, on ~10B tokens using a single NVIDIA A100, with only ~0.23% parameter overhead (about 1.4M parameters).
  • Routing exhibits emergent near-deterministic activation selection with depth-dependent specialization (early layers prefer GELU, deeper layers prefer Tanh); three layers retain elevated routing entropy, and the mechanism remains stable under supervised fine-tuning, with entropy holding at ln(4) through 13,067 SFT steps.
  • On standard benchmarks, PolychromaticLM achieves 62-89% of Qwen3-0.6B-Base performance despite training on 3,600x fewer tokens, and all code, weights, and training infrastructure are released under Apache 2.0.
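The routing mechanism described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the combination rule (static logits added to input-conditioned gate logits before Gumbel-Softmax), the gate parameterization, and the two activations beyond GELU/Tanh are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):  # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def silu(x):
    return x / (1.0 + np.exp(-x))

# K = 4 candidate activations; GELU and Tanh are named in the paper,
# SiLU and ReLU are placeholders for the remaining two.
ACTIVATIONS = [gelu, np.tanh, silu, lambda x: np.maximum(x, 0.0)]
K = len(ACTIVATIONS)

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gumbel_softmax(logits, tau=1.0):
    # Differentiable relaxation of a categorical sample over the K activations.
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
    return softmax((logits + g) / tau)

def poly_route(x, static_logits, w_gate, tau=1.0):
    """x: (batch, d_ff) pre-activations.
    static_logits: (d_ff, K) learned per-neuron preferences.
    w_gate: (d_ff, K) weights producing input-conditioned gate logits."""
    dyn_logits = x[..., None] * w_gate                         # (batch, d_ff, K)
    weights = gumbel_softmax(static_logits + dyn_logits, tau)  # mixture over K
    acts = np.stack([f(x) for f in ACTIVATIONS], axis=-1)      # (batch, d_ff, K)
    return (weights * acts).sum(axis=-1), weights

batch, d_ff = 2, 8
x = rng.standard_normal((batch, d_ff))
static_logits = rng.standard_normal((d_ff, K))
w_gate = 0.1 * rng.standard_normal((d_ff, K))

y, w = poly_route(x, static_logits, w_gate)
```

Each neuron thus emits a convex combination of the four activations; as the mixture weights sharpen toward one-hot (the emergent behavior the paper reports), the neuron effectively commits to a single activation function.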

Abstract

Biological neural systems employ diverse neurotransmitters -- glutamate, GABA, dopamine, acetylcholine -- to implement distinct signal-processing modalities within shared neural circuits. In contrast, modern transformers apply a single fixed activation function across all feed-forward neurons. We introduce PolyGLU (Polychromatic Gated Linear Unit), a drop-in replacement for SwiGLU that enables each FFN neuron to dynamically route among K=4 activation functions via a differentiable mechanism combining learned static preferences with input-conditioned gating, trained end-to-end with Gumbel-Softmax. We train PolychromaticLM, a 597M-parameter transformer, on ~10B tokens using a single NVIDIA A100 GPU. Our key finding is emergent routing behavior: without any explicit sparsity loss or entropy regularization, the routing mechanism converges to near-deterministic activation selections (mean dynamic entropy = 0.030% of maximum), with a striking depth-dependent specialization pattern -- early layers prefer GELU while deep layers strongly favor Tanh. Three layers maintain elevated routing entropy, suggesting computational flexibility points. The routing architecture adds only 0.23% parameter overhead (~1.4M parameters) and proves fully robust to supervised fine-tuning: routing entropy remains constant at ln(4) throughout 13,067 SFT steps. On standard benchmarks, PolychromaticLM achieves 62-89% of Qwen3-0.6B-Base performance despite training on 3,600x fewer tokens. All code, weights, and training infrastructure are released under Apache 2.0.
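The abstract quotes routing entropy as a fraction of its maximum, ln(4) for K=4 options. A small sketch of how such normalized entropies are computed, with illustrative (not measured) routing distributions:

```python
import numpy as np

K = 4
max_entropy = np.log(K)  # ln(4) nats: a uniform router over 4 activations

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Illustrative near-deterministic router, in the spirit of the emergent
# post-pretraining behavior (these numbers are assumed, not from the paper).
near_det = np.array([0.999, 0.0004, 0.0003, 0.0003])
# Uniform router, whose entropy equals the ln(4) figure quoted for SFT.
uniform = np.full(K, 0.25)

print(f"near-deterministic, normalized: {entropy(near_det) / max_entropy:.3%}")
print(f"uniform, normalized:            {entropy(uniform) / max_entropy:.3%}")
```

A reported "mean dynamic entropy = 0.030% of maximum" therefore corresponds to mixture weights even more concentrated than the near-one-hot distribution above.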