Parameter-Efficient Architectural Modifications for Translation-Invariant CNNs

arXiv cs.CV / 5/1/2026


Key Points

  • The paper argues that standard CNNs are not truly translation-invariant because spatially dependent fully connected layers make them vulnerable to even single-pixel shifts.
  • It proposes a lightweight “Online Architecture” method that inserts Global Average Pooling (GAP) layers at multiple depths to decouple recognition from spatial location (see the sketch after this list).
  • In a VGG-16 case study, the modification cuts trainable parameters by 98% (5.2M → 82K) and total network size by 90% (138M → 14M parameters) while maintaining competitive ImageNet Top-1 accuracy (66.4%).
  • The approach improves translational robustness by reducing average relative loss (0.09 → 0.05), though the paper notes a remaining limitation from periodic aliasing introduced by discrete pooling.
  • The authors extend the invariant CNNs to perceptual image quality assessment (LPIPS), showing stronger generalization (KADID-10k Spearman 0.89 vs. 0.75) and better alignment with human responses (RAID Spearman 0.95) than a retrained baseline.
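
The following is a minimal PyTorch sketch (not the authors' code) of the core idea behind the GAP modification: replacing VGG-16's spatially dependent fully connected head with Global Average Pooling, so each feature channel is summarized by a single value regardless of where the feature appears. The paper inserts GAP at multiple depths and reports 82K trainable parameters; this sketch shows only the simplest end-of-network variant, so its exact parameter count differs.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class GapVGG16(nn.Module):
    """VGG-16 backbone with the ~120M-parameter FC head replaced by GAP + one linear layer."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = vgg16(weights=None).features   # convolutional stack only
        self.gap = nn.AdaptiveAvgPool2d(1)              # global average pooling
        self.classifier = nn.Linear(512, num_classes)   # acts on pooled channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # (N, 512, H', W') feature maps
        x = self.gap(x).flatten(1)  # (N, 512): spatial location averaged out
        return self.classifier(x)   # (N, num_classes) logits

logits = GapVGG16()(torch.randn(1, 3, 224, 224))
```

Because GAP averages over all spatial positions, shifting the input leaves the pooled vector essentially unchanged (up to boundary effects and the pooling grid), which is the source of the robustness gain described above.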

Abstract

Convolutional Neural Networks (CNNs) are widely assumed to be translation-invariant, yet standard architectures exhibit a startling fragility: even a single-pixel shift can drastically degrade performance due to their reliance on spatially dependent fully connected layers. In this work, we resolve this vulnerability by proposing a lightweight “Online Architecture” strategy. By strategically inserting Global Average Pooling (GAP) layers at various network depths, we effectively decouple feature recognition from spatial location. Using VGG-16 as a primary case study, we demonstrate that this architectural modification achieves a 98% reduction in trainable parameters (from 5.2M to just 82K) and a 90% reduction in total network size (from 138M to 14M parameters). Despite this drastic pruning, our variants maintain competitive Top-1 accuracy on ImageNet (66.4%) while doubling translational robustness, reducing average relative loss from 0.09 to 0.05. Furthermore, our analysis identifies a fundamental limit to invariance: while GAP resolves macroscopic sensitivity, discrete pooling operations introduce a residual periodic aliasing that prevents perfect pixel-level stability. Finally, we extend these findings to Perceptual Image Quality Assessment (IQA) by integrating our invariant backbones into the LPIPS framework. The resulting metric significantly outperforms the retrained baseline in generalization across the KADID-10k dataset (Spearman 0.89 vs. 0.75) and achieves near-perfect alignment with human psychophysical response curves on the RAID dataset (Spearman 0.95). These results confirm that enforcing architectural invariance is a far more efficient and biologically plausible path to robustness than traditional data augmentation. The data and code are publicly available to facilitate validation and further research.
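
To make the robustness numbers concrete, here is a hedged sketch of the kind of probe they imply: shift an image by one pixel and measure the relative change in the loss. The helper name and the use of a circular shift via torch.roll are illustrative assumptions; the paper's exact definition of average relative loss, and how it handles image boundaries, may differ.

```python
import torch
import torch.nn.functional as F

def relative_loss_under_shift(model, image, label, dx=1, dy=0):
    """Relative change in cross-entropy loss under a (dx, dy) single-pixel shift.

    image: (N, 3, H, W) tensor; label: (N,) class indices.
    """
    model.eval()
    with torch.no_grad():
        base = F.cross_entropy(model(image), label)
        shifted = torch.roll(image, shifts=(dy, dx), dims=(2, 3))  # circular shift
        after = F.cross_entropy(model(shifted), label)
    return ((after - base).abs() / base).item()
```

Averaged over a dataset, a score of this kind dropping from 0.09 to 0.05 would correspond to the roughly doubled translational robustness reported in the abstract.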
