Learn&Drop: Fast Learning of CNNs based on Layer Dropping

arXiv cs.CV / April 28, 2026


Key Points

  • The paper introduces a training-time method for deep CNNs that scores each layer by how much its parameters are still changing, in order to decide whether the layer should keep learning.
  • Based on these scores, the network is dynamically scaled down so that fewer parameters and operations are processed during training, accelerating forward propagation (see the sketch after this list).
  • Unlike prior work that mainly targets inference-time compression or limits backpropagation costs, this approach specifically reduces forward-propagation computation during training.
  • Experiments on VGG and ResNet (tested with MNIST, CIFAR-10, and Imagenette) show that training time can be more than halved with little impact on accuracy, alongside sizable reductions in forward-pass FLOPs.
  • The method is positioned as particularly beneficial for scenarios requiring fine-tuning or online training of convolutional models, such as when data arrive sequentially.
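
The summary does not spell out the paper's exact score or dropping rule, so the following PyTorch sketch is only one plausible reading of the idea: each residual block of a torchvision ResNet is scored by the relative L2 change of its parameters since the last snapshot, and blocks whose score falls below a hypothetical `threshold` are bypassed with `nn.Identity()`, removing their forward (and backward) compute. The score, the threshold, and the restriction to non-downsampling blocks are all assumptions made for illustration, not the authors' method.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)

# Parameter snapshot taken at the previous scoring step.
prev = {n: p.detach().clone() for n, p in model.named_parameters()}

def block_score(block: nn.Module, prev: dict, prefix: str) -> float:
    """Relative L2 change of all parameters in one residual block."""
    num = den = 0.0
    for n, p in block.named_parameters():
        full = f"{prefix}.{n}"
        num += (p.detach() - prev[full]).norm().item() ** 2
        den += prev[full].norm().item() ** 2
    return (num / max(den, 1e-12)) ** 0.5

def drop_stable_blocks(model: nn.Module, prev: dict, threshold: float = 1e-3):
    """Bypass residual blocks whose weights have stopped moving.

    Only blocks without a downsample branch are replaced, since their
    input and output shapes match and nn.Identity() is a valid stand-in.
    """
    for stage_name in ("layer1", "layer2", "layer3", "layer4"):
        stage = getattr(model, stage_name)
        for i, block in enumerate(list(stage)):
            if isinstance(block, nn.Identity):
                continue  # already dropped at an earlier step
            if getattr(block, "downsample", None) is not None:
                continue  # shape-changing block, cannot be bypassed
            if block_score(block, prev, f"{stage_name}.{i}") < threshold:
                stage[i] = nn.Identity()  # skipped in every future forward pass
```

In a training loop, one would re-snapshot `prev` and call `drop_stable_blocks` every few epochs; each dropped block removes its convolutions from the forward pass, which is where the training-time savings would come from.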

Abstract

This paper proposes a new method to improve the training efficiency of deep convolutional neural networks. During training, the method computes, for each layer, a score that measures how much its parameters are changing, and uses it to decide whether the layer should continue learning. Based on these scores, the network is scaled down so that the number of parameters to be learned is reduced, yielding a speedup in training. Unlike state-of-the-art methods that compress the network for the inference phase or limit the number of operations performed during backpropagation, the proposed method is novel in that it focuses on reducing the number of operations performed by the network in the forward propagation during training. The proposed training strategy has been validated on two widely used architecture families: VGG and ResNet. Experiments on MNIST, CIFAR-10 and Imagenette show that, with the proposed method, the training time of the models is more than halved without significantly impacting accuracy. The FLOPs reduction in the forward propagation during training ranges from 17.83% for VGG-11 to 83.74% for ResNet-152. These results demonstrate the effectiveness of the proposed technique in speeding up learning of CNNs. The technique will be especially useful in applications where fine-tuning or online training of convolutional models is required, for instance because data arrive sequentially.
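
The abstract's 17.83%–83.74% figures refer to forward-propagation FLOPs during training. As a rough, hypothetical way to take that kind of measurement, the snippet below counts the convolution FLOPs of one forward pass using forward hooks; the 2·Cout·Hout·Wout·(Cin/groups)·Kh·Kw formula and the 224×224 input size are standard assumptions, and the paper may count operations differently.

```python
import torch
import torch.nn as nn
import torchvision

def conv_forward_flops(model: nn.Module, input_size=(1, 3, 224, 224)) -> int:
    """Estimate forward-pass FLOPs of all Conv2d layers via forward hooks."""
    flops = 0
    def hook(module, inputs, output):
        nonlocal flops
        k_h, k_w = module.kernel_size
        # 2 * output elements * per-output-element multiply-accumulates
        flops += 2 * output.numel() * module.in_channels * k_h * k_w // module.groups
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    was_training = model.training
    model.eval()  # avoid BatchNorm batch-statistics issues on the dummy input
    with torch.no_grad():
        model(torch.zeros(input_size))
    model.train(was_training)
    for h in handles:
        h.remove()
    return flops

baseline = conv_forward_flops(torchvision.models.resnet152())
# Rerunning conv_forward_flops after converged blocks have been bypassed
# (e.g. with the drop_stable_blocks sketch above) shows the reduced cost.
print(f"ResNet-152 forward conv FLOPs: {baseline:.3e}")
```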