VeloxNet: Efficient Spatial Gating for Lightweight Embedded Image Classification

arXiv cs.CV · March 23, 2026


Key Points

  • VeloxNet introduces gated multi-layer perceptron blocks with a spatial gating unit to replace SqueezeNet's fire modules, enabling global spatial modeling in embedded image classification with fewer parameters.
  • The model reduces parameter count by 46.1% compared with SqueezeNet (from 740,970 to 399,366) while improving weighted F1 scores on AIDER, CDD, and LDD by 6.32%, 30.83%, and 2.51%, respectively.
  • Evaluations against eleven baselines including MobileNet variants, ShuffleNet, EfficientNet, and recent vision transformers demonstrate VeloxNet's efficiency and accuracy gains in resource-constrained settings.
  • The authors plan to release the source code publicly upon acceptance, enabling reproducibility and practical adoption.

Abstract

Deploying deep learning models on embedded devices for tasks such as aerial disaster monitoring and infrastructure inspection requires architectures that balance accuracy with strict constraints on model size, memory, and latency. This paper introduces VeloxNet, a lightweight CNN architecture that replaces SqueezeNet's fire modules with gated multi-layer perceptron (gMLP) blocks for embedded image classification. Each gMLP block uses a spatial gating unit (SGU) that applies learned spatial projections and multiplicative gating, enabling the network to capture spatial dependencies across the full feature map in a single layer. Unlike fire modules, which are limited to local receptive fields defined by small convolutional kernels, the SGU provides global spatial modeling at each layer with fewer parameters. We evaluate VeloxNet on three aerial image datasets: the Aerial Image Database for Emergency Response (AIDER), the Comprehensive Disaster Dataset (CDD), and the Levee Defect Dataset (LDD), comparing against eleven baselines including MobileNet variants, ShuffleNet, EfficientNet, and recent vision transformers. VeloxNet reduces the parameter count by 46.1% relative to SqueezeNet (from 740,970 to 399,366) while improving weighted F1 scores by 6.32% on AIDER, 30.83% on CDD, and 2.51% on LDD. These results demonstrate that substituting local convolutional modules with spatial gating blocks can improve both classification accuracy and parameter efficiency for resource-constrained deployment. The source code will be made publicly available upon acceptance of the paper.
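The spatial gating unit described in the abstract follows the gMLP design: the feature map is split along the channel dimension, one half is normalized and passed through a learned linear projection over the spatial axis, and the result multiplicatively gates the other half. A minimal NumPy sketch of this forward pass is below; the shapes, the near-identity initialization (zero weight, unit bias), and all variable names are illustrative assumptions, not VeloxNet's actual configuration.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token over its channel dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def spatial_gating_unit(x, W, b):
    """Sketch of a gMLP-style SGU.

    x: (n_tokens, d)         flattened spatial positions x channels
    W: (n_tokens, n_tokens)  learned projection across the spatial axis
    b: (n_tokens,)           per-position bias
    """
    u, v = np.split(x, 2, axis=-1)   # split channels into two halves
    v = layer_norm(v)
    v = W @ v + b[:, None]           # mixes information across ALL spatial positions
    return u * v                     # multiplicative gating

# Toy example: 16 spatial tokens, 32 channels.
rng = np.random.default_rng(0)
n_tokens, d = 16, 32
x = rng.standard_normal((n_tokens, d))
W = np.zeros((n_tokens, n_tokens))   # gMLP-style init: gate starts as identity
b = np.ones(n_tokens)                # so v == 1 and the block initially passes u through
y = spatial_gating_unit(x, W, b)
print(y.shape)  # (16, 16): half the channels remain after the split
```

Because `W` is a dense `n_tokens x n_tokens` matrix, a single SGU can relate any two spatial positions, which is the "global spatial modeling in a single layer" property the abstract contrasts with the local receptive fields of SqueezeNet's fire modules.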
