Restoring Neural Network Plasticity for Faster Transfer Learning

arXiv cs.CV / 3/24/2026


Key Points

  • The paper addresses “loss of neural plasticity” in transfer learning, where ImageNet-pretrained weights can saturate and produce insignificant gradients that slow or prevent adaptation to a downstream task.
  • It proposes a targeted weight re-initialization step before fine-tuning to restore plasticity and enable more effective learning on atypical or domain-shifted datasets.
  • Experiments on multiple image classification benchmarks show improvements for both CNNs and vision transformers, including higher test accuracy and faster convergence.
  • The authors report negligible computational overhead and compatibility with standard transfer learning pipelines, making the method practical to adopt.
  • The work positions neural-plasticity restoration as a relatively understudied complement to continual learning approaches within the transfer learning setting.

Abstract

Transfer learning with models pretrained on ImageNet has become a standard practice in computer vision. Transfer learning refers to fine-tuning the pretrained weights of a neural network on a downstream task, typically one whose data distribution differs from ImageNet. However, pretrained weights can become saturated and may yield insignificant gradients, failing to adapt to the downstream task. This hinders the model's ability to train effectively and is commonly referred to as loss of neural plasticity. Loss of plasticity may prevent the model from fully adapting to the target domain, especially when the downstream dataset is atypical in nature. While this issue has been widely explored in continual learning, it remains relatively understudied in the context of transfer learning. In this work, we propose the use of a targeted weight re-initialization strategy to restore neural plasticity prior to fine-tuning. Our experiments show that both convolutional neural networks (CNNs) and vision transformers (ViTs) benefit from this approach, yielding higher test accuracy with faster convergence on several image classification benchmarks. Our method introduces negligible computational overhead and is compatible with common transfer learning pipelines.
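To make the idea concrete, here is a minimal sketch of what a targeted weight re-initialization step could look like. The selection criterion below (mean absolute activation over a probe batch falling under a threshold `tau`) and the function name `restore_plasticity` are illustrative assumptions, not the paper's actual rule; the key point it demonstrates is re-drawing only the incoming weights of saturated units while leaving well-adapted weights untouched.

```python
import numpy as np

def restore_plasticity(weights, activations, tau=0.01, rng=None):
    """Re-initialize the incoming weights of saturated units.

    weights:     (fan_in, n_units) weight matrix of one layer
    activations: (batch, n_units) unit outputs on a probe batch
    tau:         hypothetical saturation threshold (assumption;
                 the paper's exact criterion may differ)
    Returns the updated weight matrix and the saturation mask.
    """
    rng = np.random.default_rng(rng)
    # A unit is treated as saturated when it is nearly inactive
    # across the probe batch, i.e. it contributes little gradient.
    mean_act = np.abs(activations).mean(axis=0)
    saturated = mean_act < tau
    # Re-draw incoming weights for saturated units from a standard
    # fan-in-scaled (He-style) initializer; keep the rest intact.
    fan_in = weights.shape[0]
    fresh = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=weights.shape)
    out = np.where(saturated[None, :], fresh, weights)
    return out, saturated

# Usage: unit 0 is active, units 1 and 2 are dead on the probe batch.
W = np.zeros((4, 3))
W[:, 0] = 1.0
acts = np.array([[0.5, 0.0, 0.0],
                 [0.7, 0.0, 0.0]])
new_W, mask = restore_plasticity(W, acts, tau=0.01, rng=0)
```

Only the dead units' columns are re-drawn, so the layer regains trainable directions without discarding the features that already transfer; the re-initialization itself is a single masked draw, which is consistent with the negligible overhead the authors report.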