CurvZO: Adaptive Curvature-Guided Sparse Zeroth-Order Optimization for Efficient LLM Fine-Tuning

arXiv cs.AI / 2026/3/24


Key Points

  • CurvZO is a proposed adaptive curvature-guided sparse zeroth-order (ZO) optimization method designed to enable more memory-efficient LLM fine-tuning when backprop is impractical on resource-constrained hardware.
  • It improves upon prior sparse ZO approaches by using curvature signals inferred online from scalar feedback to build a parameter-wise sampling distribution, reducing the variance of the ZO gradient estimator.
  • CurvZO also adapts the perturbation budget dynamically based on how the curvature signal distribution evolves, balancing focused updates with sufficient exploration.
  • Experiments on OPT and Llama across multiple NLP tasks show consistent gains over ZO baselines, including up to +4.4 accuracy points and up to 2× training speedups while maintaining memory efficiency.
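The core loop described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's exact algorithm: it samples a small set of coordinates in proportion to a running curvature estimate, perturbs only those coordinates, forms a two-point ZO gradient estimate from scalar loss feedback, and refreshes the curvature signal with a second-difference proxy. All function names, the EMA update, and the hyperparameters (`k`, `beta`, `eps`) are assumptions for illustration.

```python
import numpy as np

def sparse_zo_step(params, loss_fn, curv_ema, lr=0.05, eps=1e-3, k=4,
                   beta=0.9, rng=None):
    """One curvature-guided sparse ZO update (illustrative sketch only).

    curv_ema: per-parameter running curvature signal (positive array),
    used as an unnormalized sampling distribution over coordinates.
    """
    rng = rng or np.random.default_rng()
    # Sample k coordinates with probability proportional to tracked curvature.
    probs = curv_ema / curv_ema.sum()
    idx = rng.choice(len(params), size=k, replace=False, p=probs)
    # Rademacher perturbation restricted to the sampled coordinates.
    z = np.zeros_like(params)
    z[idx] = rng.choice([-1.0, 1.0], size=k)
    # Only scalar feedback is available: three forward passes.
    l_plus = loss_fn(params + eps * z)
    l_minus = loss_fn(params - eps * z)
    l_zero = loss_fn(params)
    # Two-point directional-derivative estimate along z.
    g_scalar = (l_plus - l_minus) / (2 * eps)
    # Second-difference proxy for curvature along z, from the same queries.
    curv_proxy = abs(l_plus - 2 * l_zero + l_minus) / eps**2
    curv_ema[idx] = beta * curv_ema[idx] + (1 - beta) * curv_proxy
    # Sparse update: only the perturbed coordinates move.
    return params - lr * g_scalar * z, curv_ema
```

On a simple quadratic objective this loop steadily reduces the loss while touching only `k` coordinates per step, which is the variance-reduction intuition behind sampling informative parameters more often.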

Abstract

Fine-tuning large language models (LLMs) with backpropagation achieves high performance but incurs substantial memory overhead, limiting scalability on resource-constrained hardware. Zeroth-order (ZO) optimization provides a memory-efficient alternative by relying solely on forward passes, yet it typically suffers from slow or unstable convergence due to high-variance gradient estimates. Sparse ZO updates partially address this issue by perturbing only a subset of parameters, but their effectiveness hinges on selecting informative parameters, which is challenging in ZO optimization because each query yields only scalar feedback. We propose **Adaptive Curvature-Guided Sparse Zeroth-Order Optimization (CurvZO)**, which tracks curvature signals online from scalar ZO feedback and leverages these signals to construct a parameter-wise sampling distribution for selecting coordinates at each update, reducing the variance of the sparse ZO gradient estimator. Moreover, CurvZO dynamically adapts the perturbation budget to the evolving curvature signal distribution, yielding sparse ZO updates that remain both focused and sufficiently exploratory. Extensive experiments on OPT and Llama across diverse NLP tasks show that CurvZO consistently improves fine-tuning performance and reduces training time over ZO baselines. It improves accuracy by up to 4.4 points and achieves up to a 2× speedup, while preserving memory efficiency.
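For context, the memory-efficient baseline the abstract builds on is the classic two-point (SPSA-style) ZO gradient estimate: two forward passes along a random direction yield a scalar directional derivative, and multiplying it back onto the direction gives an unbiased (up to O(ε²)) estimate of the full gradient. A minimal sketch, assuming a generic `loss_fn` over a flat parameter vector:

```python
import numpy as np

def zo_grad_estimate(params, loss_fn, eps=1e-3, rng=None):
    """Dense two-point ZO gradient estimate: no backprop, only two
    forward passes and one stored perturbation direction."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(params.shape)
    # Scalar feedback: finite difference of the loss along direction z.
    g_scalar = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    # Project the scalar back onto z to get a full-dimensional estimate.
    return g_scalar * z
```

The high variance of this estimate in high dimensions is exactly what motivates the sparse, curvature-weighted coordinate selection described above.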