Neural Network Optimization Reimagined: Decoupled Techniques for Scratch and Fine-Tuning

arXiv cs.CV · April 28, 2026


Key Points

  • The paper introduces DualOpt, an optimizer framework that decouples the optimization strategies used for training neural networks from scratch and for fine-tuning pre-trained models.
  • For scratch training, DualOpt applies real-time, layer-wise weight decay that adapts to each layer's update behavior and to the network architecture, improving both convergence and generalization (a sketch follows this list).
  • For fine-tuning, it integrates a weight rollback term directly into every optimizer update step, keeping the weight distributions of the upstream and downstream models consistent and thereby reducing knowledge forgetting.
  • It further extends layer-wise weight decay to dynamically adjust rollback levels across layers based on downstream task needs, aiming for better adaptation.
  • Experiments on image classification, object detection, semantic segmentation, and instance segmentation show DualOpt’s broad applicability and state-of-the-art results, with code provided on GitHub.
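
To make the layer-wise weight decay idea concrete, here is a minimal PyTorch sketch that assigns each parameter tensor its own weight-decay coefficient through optimizer parameter groups. The function name layerwise_param_groups, the linear depth-based schedule, and the base_decay/max_scale values are illustrative assumptions, not the paper's actual real-time rule.

```python
import torch
from torch import nn


def layerwise_param_groups(model, base_decay=1e-4, max_scale=2.0):
    """Build one optimizer parameter group per tensor, with a weight-decay
    coefficient that grows linearly with its position in the network."""
    params = list(model.parameters())
    last = max(len(params) - 1, 1)
    groups = []
    for i, param in enumerate(params):
        # Later (deeper) tensors get stronger decay in this toy schedule;
        # the paper's real-time, architecture-aware rule will differ.
        decay = base_decay * (1.0 + (max_scale - 1.0) * i / last)
        groups.append({"params": [param], "weight_decay": decay})
    return groups


# Usage: a small model trained from scratch with per-layer decay.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(layerwise_param_groups(model), lr=0.1, momentum=0.9)

x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

One group per tensor keeps the per-layer coefficients explicit and lets deeper layers receive stronger (or weaker) regularization without touching the training loop.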

Abstract

With the accumulation of resources in the era of big data and the rise of pre-trained models in deep learning, optimizing neural networks for various tasks often involves different strategies for fine-tuning pre-trained models versus training from scratch. However, existing optimizers primarily focus on reducing the loss function by updating model parameters, without fully addressing the unique demands of these two major paradigms. In this paper, we propose DualOpt, a novel approach that decouples optimization techniques specifically tailored for these distinct training scenarios. For training from scratch, we introduce real-time layer-wise weight decay, designed to enhance both convergence and generalization by aligning with the characteristics of weight updates and network architecture. For fine-tuning, which is the more important scenario, we integrate weight rollback with the optimizer, incorporating a rollback term into each weight update step. This ensures consistency in the weight distribution between upstream and downstream models, effectively mitigating knowledge forgetting and improving fine-tuning performance. Additionally, we extend the layer-wise weight decay to dynamically adjust the rollback levels across layers, adapting to the varying demands of different downstream tasks. Extensive experiments across diverse tasks, including image classification, object detection, semantic segmentation, and instance segmentation, demonstrate the broad applicability and state-of-the-art performance of DualOpt. Code is available at https://github.com/qklee-lz/OLOR-AAAI-2024.
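
As a rough illustration of optimizer-integrated weight rollback, the sketch below wraps SGD so that each update step also pulls the weights slightly back toward a frozen copy of the pre-trained weights. The class name RollbackSGD, the rollback_strength parameter, and the simple additive rollback term are assumptions for illustration; the paper's exact formulation, and its layer-wise, task-dependent scheduling of rollback levels, may differ.

```python
import torch
from torch import nn


class RollbackSGD(torch.optim.SGD):
    """SGD whose step also pulls each weight a small step back toward its
    pre-trained value, keeping upstream and downstream weights close."""

    def __init__(self, params, pretrained_params, lr=1e-3,
                 rollback_strength=1e-3, **kwargs):
        super().__init__(params, lr=lr, **kwargs)
        self.rollback_strength = rollback_strength
        # Frozen snapshot of the upstream (pre-trained) weights.
        self.pretrained = [p.detach().clone() for p in pretrained_params]

    def step(self, closure=None):
        loss = super().step(closure)
        with torch.no_grad():
            flat = [p for g in self.param_groups for p in g["params"]]
            # Rollback term: move each weight a small fraction of the way
            # back toward its pre-trained value after the gradient update.
            for p, p0 in zip(flat, self.pretrained):
                p.add_(p0 - p, alpha=self.rollback_strength)
        return loss


# Usage: fine-tune a "pre-trained" model while anchoring it to its
# original weights (the initial weights stand in for pre-training here).
model = nn.Linear(10, 2)
optimizer = RollbackSGD(model.parameters(), model.parameters(),
                        lr=1e-2, rollback_strength=1e-3, momentum=0.9)

x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The layer-wise extension described in the abstract could reuse the same parameter-group mechanism as the weight-decay sketch above, giving each layer its own rollback strength; that scheduling rule is the paper's contribution and is not reproduced here.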