GRASS: Gradient-based Adaptive Layer-wise Importance Sampling for Memory-efficient Large Language Model Fine-tuning

arXiv cs.CL / 4/10/2026


Key Points

  • The paper proposes GRASS, a memory-efficient full-parameter fine-tuning framework that improves on layer-wise importance sampling by making it adaptive to both tasks and training stages.
  • GRASS estimates layer importance using mean gradient norms, enabling sampling decisions that reflect how different layers matter at different points in training.
  • It further adapts layer sampling probabilities during training, aiming to preserve or improve downstream performance relative to prior static layer importance approaches.
  • The method includes a layer-wise optimizer state offloading technique that overlaps computation and communication to reduce GPU memory usage without significantly hurting training throughput.
  • Experiments across multiple models and benchmarks show GRASS consistently outperforms state-of-the-art methods, with reported average accuracy gains of up to 4.38 points and memory reductions of up to 19.97%.
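
The adaptive sampling idea in the points above can be sketched in a few lines: convert per-layer mean gradient norms into sampling probabilities, draw a subset of layers to update, and smooth the norms over time so importance tracks the training stage. The function names, the without-replacement sampling, and the exponential-moving-average update are illustrative assumptions, not the paper's exact formulation.

```python
import random

def layer_probs(grad_norms):
    """Normalize per-layer mean gradient norms into sampling probabilities."""
    total = sum(grad_norms)
    return [g / total for g in grad_norms]

def sample_layers(grad_norms, k, rng=None):
    """Draw k distinct layer indices, proportional to gradient norm,
    without replacement (assumed sampling scheme)."""
    rng = rng or random
    remaining = list(range(len(grad_norms)))
    norms = list(grad_norms)
    chosen = []
    for _ in range(min(k, len(remaining))):
        total = sum(norms)
        i = rng.choices(range(len(remaining)),
                        weights=[n / total for n in norms], k=1)[0]
        chosen.append(remaining.pop(i))
        norms.pop(i)
    return sorted(chosen)

def update_norms(ema, new_norms, beta=0.9):
    """Exponential moving average of gradient norms, so layer importance
    adapts across training stages (assumed update rule)."""
    return [beta * e + (1 - beta) * n for e, n in zip(ema, new_norms)]
```

In a training loop, one would refresh `update_norms` from each backward pass and call `sample_layers` to decide which layers receive optimizer updates that step.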

Abstract

Full-parameter fine-tuning of large language models is constrained by substantial GPU memory requirements. Low-rank adaptation methods mitigate this challenge by updating only a subset of parameters. However, these approaches often limit model expressiveness and yield lower performance than full-parameter fine-tuning. Layer-wise fine-tuning methods have emerged as an alternative, enabling memory-efficient training through static layer importance sampling strategies. However, these methods overlook variations in layer importance across tasks and training stages, resulting in suboptimal performance on downstream tasks. To address these limitations, we propose GRASS, a gradient-based adaptive layer-wise importance sampling framework. GRASS utilizes mean gradient norms as a task-aware and training-stage-aware metric for estimating layer importance. Furthermore, GRASS adaptively adjusts layer sampling probabilities through an adaptive training strategy. We also introduce a layer-wise optimizer state offloading mechanism that overlaps computation and communication to further reduce memory usage while maintaining comparable training throughput. Extensive experiments across multiple models and benchmarks demonstrate that GRASS consistently outperforms state-of-the-art methods, achieving an average accuracy improvement of up to 4.38 points and reducing memory usage by up to 19.97%.
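
The offloading mechanism in the abstract — moving a layer's optimizer state off the GPU while the next layer is still being computed — can be mimicked in miniature with a background worker. This is a schematic sketch only: a real implementation would use CUDA streams and pinned host memory, neither of which appears here, and `train_step` is a hypothetical name.

```python
from concurrent.futures import ThreadPoolExecutor

def train_step(layers, offload_pool):
    """Process layers in order, launching each layer's optimizer-state
    offload in the background so it overlaps with the next layer's
    computation (schematic: strings stand in for real work)."""
    pending = None
    log = []
    for layer in layers:
        log.append(f"compute {layer}")   # forward/backward for this layer
        if pending is not None:
            pending.result()             # previous offload must finish first
        # Offload this layer's optimizer state; the transfer runs while
        # the loop moves on to compute the next layer.
        pending = offload_pool.submit(log.append, f"offload {layer}")
    if pending is not None:
        pending.result()                 # drain the final offload
    return log
```

The key property is that the offload of layer *i* is in flight during the computation of layer *i+1*, so communication cost hides behind computation rather than adding to step time.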