DynamicGate-MLP: Conditional Computation via Learned Structural Dropout and Input-Dependent Gating for Functional Plasticity
arXiv cs.LG / 3/18/2026
📰 News · Models & Research
Key Points
- The paper introduces DynamicGate-MLP, a framework that combines learned structural dropout (acting as a regularizer) with input-dependent conditional computation via learned gates, so the amount of computation adapts to each input.
- It defines continuous gate probabilities and, during inference, derives a discrete execution mask to select the active path, enabling sample-specific computation.
- Training uses a penalty on expected gate usage and a Straight-Through Estimator to optimize the discrete mask, balancing accuracy and compute budget.
- The method is evaluated on MNIST, CIFAR-10, Tiny-ImageNet, Speech Commands, and PBMC3k, comparing against MLP baselines and MoE-style variants, with compute efficiency measured via gate activation ratios and a layer-weighted MAC metric rather than wall-clock latency.
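The gating scheme described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the layer sizes, the sigmoid gating head (`Wg`), and the threshold `tau` are all assumptions. At training time the gates stay continuous (the discrete mask's gradient would be handled by a Straight-Through Estimator in an autograd framework); at inference the probabilities are thresholded into a discrete execution mask, and the mean gate probability serves as the expected-usage penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes; the paper's exact layer sizes are not given here.
d_in, d_hidden = 16, 32
W = rng.normal(scale=0.1, size=(d_in, d_hidden))   # MLP layer weights
Wg = rng.normal(scale=0.1, size=(d_in, d_hidden))  # gating-head weights

def gated_forward(x, tau=0.5, training=False):
    """Forward pass with input-dependent per-unit gating.

    Training keeps the continuous gate probabilities (a Straight-Through
    Estimator would pass gradients through the hard mask); inference
    thresholds them into a discrete 0/1 execution mask.
    """
    p = sigmoid(x @ Wg)                  # continuous gate probabilities
    mask = p if training else (p > tau).astype(x.dtype)
    h = np.maximum(x @ W, 0.0) * mask    # skip (zero out) inactive units
    usage_penalty = p.mean()             # penalty on expected gate usage
    return h, mask, usage_penalty

x = rng.normal(size=(4, d_in))
h, mask, penalty = gated_forward(x)
print(mask.mean())  # gate activation ratio for this batch
```

Because different inputs produce different masks, each sample in the batch executes its own subset of hidden units, which is the sample-specific computation the paper targets.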
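A layer-weighted MAC metric of the kind mentioned above can be illustrated as follows. This is a hedged sketch of one plausible formulation, not the paper's exact definition: the layer dimensions and gate activation ratios below are made up, and the metric simply weights each layer's dense multiply-accumulate count by the fraction of its units that actually fire.

```python
# Hypothetical layer dims (fan_in, fan_out) and measured gate activation ratios.
layers = [(784, 512), (512, 512), (512, 10)]
ratios = [0.60, 0.35, 1.00]  # fraction of each layer's units executed

dense_macs = sum(i * o for i, o in layers)
effective_macs = sum(i * o * r for (i, o), r in zip(layers, ratios))
print(effective_macs / dense_macs)  # compute saved relative to the dense MLP
```

Reporting this ratio instead of wall-clock latency makes the comparison hardware-independent, at the cost of ignoring scheduling and memory effects.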