Dynamic Scaled Gradient Descent for Stable Fine-Tuning for Classifications

arXiv cs.LG / 5/1/2026


Key Points

  • The paper addresses instability in fine-tuning pretrained models on sparse, imbalanced classification datasets, where training can collapse and degrade performance.
  • It identifies gradient cancellation across training examples as a potential cause of the collapsed optimization behavior.
  • The authors propose Dynamic Scaled Gradient Descent ("DynaScaled"), which rescales gradients for correctly classified examples using a dynamic scaling factor.
  • Experiments across multiple benchmark datasets, tasks, and large pretrained models show improved training stability, reduced performance variance, and higher accuracy than existing methods.
  • The approach provides both theoretical and empirical evidence that manipulating example-level gradients can mitigate collapsed training dynamics.

Abstract

Fine-tuning pretrained models has become a standard approach to adapting pretrained knowledge to new sparse, imbalanced classification datasets. However, issues arise when optimization falls into a collapsed state, where the model gets stuck, leading to degraded performance and unstable training. One possible cause is the cancellation of gradients across training examples. To address this problem, we propose a novel algorithm, dynamic scaled gradient descent (DynaScaled), that directly modifies the per-example gradients, specifically scaling down the gradients of correctly classified examples using a dynamic scaler. This strategy offers both theoretical and empirical advantages in improving training stability. Experiments on a variety of benchmark datasets, spanning multiple tasks and large pretrained models, demonstrate that our method consistently reduces performance variance and surpasses the accuracy of existing approaches.
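The abstract only names the core idea (down-scaling gradients of correctly classified examples with a dynamic factor); it does not specify the scaler's exact form. As a hypothetical illustration only, a single logistic-regression training step with this kind of correct-example down-scaling might look like the sketch below. The function name `dynascaled_step`, the scaler formula, and the hyperparameters are all assumptions, not the paper's actual method:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dynascaled_step(w, b, data, lr=0.1, base_scale=0.1):
    """One full-batch gradient step on 1-D logistic regression.

    Hypothetical illustration of the DynaScaled idea: gradients of
    correctly classified examples are shrunk by a dynamic factor that
    decreases as the model's confidence on that example grows, so
    already-easy examples contribute less and cannot cancel out the
    gradients of the hard (misclassified) examples.
    """
    gw, gb = 0.0, 0.0
    for x, y in data:                      # labels y are 0 or 1
        p = sigmoid(w * x + b)             # predicted P(y = 1 | x)
        correct = (p >= 0.5) == (y == 1)
        # Assumed dynamic scaler: base_scale * (1 - confidence margin)
        # for correct examples; misclassified examples keep full weight.
        scale = base_scale * (1.0 - abs(p - 0.5) * 2.0) if correct else 1.0
        g = p - y                          # dLoss/dz for logistic loss
        gw += scale * g * x
        gb += scale * g
    n = len(data)
    return w - lr * gw / n, b - lr * gb / n

# Tiny separable toy set: negative inputs -> class 0, positive -> class 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = 0.0, 0.0
for _ in range(200):
    w, b = dynascaled_step(w, b, data, lr=0.5)
```

After training, `w` should be positive so the model separates the toy classes; the point of the sketch is only the per-example `scale`, which suppresses gradient contributions from examples the model already gets right.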