AcTTA: Rethinking Test-Time Adaptation via Dynamic Activation
arXiv cs.LG / 3/30/2026
Key Points
- The paper argues that test-time adaptation (TTA) has overemphasized affine modulation of normalization layers and underexplored the role of activation functions in representation dynamics under distribution shift.
- It proposes AcTTA, an activation-aware framework that reparameterizes common activations (e.g., ReLU, GELU) into learnable, parameterized forms that can adjust response thresholds and gradient sensitivity at test time.
- AcTTA updates activation behavior adaptively during inference without modifying network weights and without requiring source-domain data, aiming for lightweight domain-shift robustness.
- Experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C show that AcTTA adapts stably under corruption shift and consistently outperforms normalization-based TTA methods.
- The results position activation adaptation as a compact alternative to the prevailing normalization-centric view, potentially broadening the design space for domain-shift-robust test-time learning.
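The core idea can be illustrated with a minimal, hypothetical sketch: reparameterize a ReLU with a learnable threshold and slope, freeze the network weights, and update only those activation parameters at test time by minimizing prediction entropy on an unlabeled batch (a common TTA objective, used here for illustration; the paper's exact parameterization and update rule are not specified in this summary). All function names and the finite-difference update below are assumptions, not the authors' implementation.

```python
import numpy as np

def param_act(x, tau, alpha):
    # Hypothetical activation reparameterization: a ReLU whose response
    # threshold (tau) and gradient sensitivity / slope (alpha) are learnable.
    return alpha * np.maximum(x - tau, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    # Mean prediction entropy over a batch (the test-time objective here).
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))   # frozen weights: never updated at test time
W2 = rng.standard_normal((16, 4))

def forward(x, tau, alpha):
    return softmax(param_act(x @ W1, tau, alpha) @ W2)

def adapt_step(x, tau, alpha, lr=0.01, eps=1e-3):
    # Update ONLY the activation parameters, via a finite-difference
    # gradient of the entropy objective (an autograd stand-in).
    h0 = entropy(forward(x, tau, alpha))
    g_tau = (entropy(forward(x, tau + eps, alpha)) - h0) / eps
    g_alpha = (entropy(forward(x, tau, alpha + eps)) - h0) / eps
    return tau - lr * g_tau, alpha - lr * g_alpha

x = rng.standard_normal((32, 8))    # unlabeled "shifted" test batch
tau, alpha = 0.0, 1.0               # start from a plain ReLU
before = entropy(forward(x, tau, alpha))
for _ in range(50):
    tau, alpha = adapt_step(x, tau, alpha)
after = entropy(forward(x, tau, alpha))
print(f"entropy before: {before:.4f}, after: {after:.4f}")
```

The sketch shows why this is lightweight: the adapted state is two scalars per activation (here, two scalars total), not the weight matrices, which matches the summary's claim of adaptation "without modifying network weights."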