Growing Networks with Autonomous Pruning

arXiv cs.CV / 3/23/2026

📰 NewsIdeas & Deep AnalysisModels & Research

Key Points

  • The paper introduces Growing Networks with Autonomous Pruning (GNAP), a method for image classification in which network size and parameter count change during training to fit the data with as few parameters as possible.
  • It combines two mechanisms: growth phases that expand the model when the network saturates, and autonomous pruning during non-growing phases driven by gradient descent (a rough sketch of the pruning idea follows this list).
  • Experiments show extremely sparse networks achieving high accuracy, e.g., MNIST at 99.44% with 6.2k parameters and CIFAR-10 at 92.2% with 157.8k parameters.
  • The results suggest GNAP can enable efficient, resource-constrained deployment and invite rethinking of fixed-size CNN architectures.
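
Neither the summary nor the abstract spells out how pruning is made "autonomous," but one common way to let gradient descent prune channels on its own is to attach a learnable gate to each output channel and push it toward zero with a sparsity penalty. The PyTorch sketch below illustrates that idea; the `GatedConv` module, the gate threshold, and the L1 penalty are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Convolution whose output channels can be pruned by gradient descent.

    Each output channel is scaled by a learnable gate; an L1 penalty on the
    gates (added to the task loss) drives unneeded channels toward zero, and
    channels whose gate falls below a threshold are treated as pruned.
    This gate mechanism is an assumption for illustration only.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.gate = nn.Parameter(torch.ones(out_channels))  # one gate per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each output channel by its gate; near-zero gates silence channels.
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

    def sparsity_penalty(self) -> torch.Tensor:
        # Added to the classification loss so pruning needs no extra machinery.
        return self.gate.abs().sum()

    def active_channels(self, threshold: float = 1e-3) -> int:
        # Count channels that have not been pruned away.
        return int((self.gate.abs() > threshold).sum().item())
```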

Abstract

This paper introduces Growing Networks with Autonomous Pruning (GNAP) for image classification. Unlike traditional convolutional neural networks, GNAP networks change their size, and thus their parameter count, during training in order to fit the data as well as possible while using as few parameters as possible. This is achieved through two complementary mechanisms: growth and pruning. GNAP networks start with few parameters, and their size is expanded periodically during training to add expressive power each time the network has converged to a saturation point. Between these growing phases, model parameters are trained for classification and pruned simultaneously, with complete autonomy, by gradient descent. Growing phases allow GNAP networks to improve their classification performance, while autonomous pruning keeps the parameter count as low as possible. Experimental results on several image classification benchmarks show that our approach can train extremely sparse neural networks with high accuracy. For example, on MNIST we achieved 99.44% accuracy with as few as 6.2k parameters, while on CIFAR-10 we achieved 92.2% accuracy with 157.8k parameters.
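
Putting the two mechanisms together, training would alternate between ordinary gradient descent (with a sparsity penalty like the one sketched above) and a growth step whenever the network saturates. The sketch below is a rough guess at that loop, assuming a plateau in the training loss as the saturation signal and a user-supplied `grow(model)` function that widens the network; neither detail is specified in the abstract.

```python
import torch
import torch.nn as nn

def train_grow_prune(model, grow, loader, epochs=50, lr=1e-3, sparsity_weight=1e-4):
    """Alternate pruning-aware training with growth steps at saturation.

    `grow(model)` is a placeholder for whatever rule expands the network;
    the loss-plateau test used to detect saturation is also an assumption.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_loss = float("inf")

    for epoch in range(epochs):
        epoch_loss = 0.0
        for images, labels in loader:
            logits = model(images)
            loss = nn.functional.cross_entropy(logits, labels)
            # Gate penalties make pruning a by-product of gradient descent.
            penalty = sum(m.sparsity_penalty() for m in model.modules()
                          if hasattr(m, "sparsity_penalty"))
            (loss + sparsity_weight * penalty).backward()
            optimizer.step()
            optimizer.zero_grad()
            epoch_loss += loss.item()

        # Crude saturation check: grow when the epoch loss stops improving.
        if epoch_loss > best_loss * 0.999:
            grow(model)  # expand the network in place
            optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # include new params
        best_loss = min(best_loss, epoch_loss)
```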