Learning Dynamics of Zeroth-Order Optimization: A Kernel Perspective

arXiv cs.LG / 5/6/2026


Key Points

  • The paper addresses the gap between classical theory, which predicts that zeroth-order (ZO) optimization slows down with model dimension relative to first-order methods, and the method's empirical success in practice.
  • It derives the one-step learning dynamics of ZO SGD and shows that the empirical Neural Tangent Kernel (eNTK) emerges as the central quantity controlling learning behavior (see the sketch after this list).
  • The authors interpret elements of the ZO-produced eNTK as inner products of neural tangent vectors projected onto a random low-dimensional subspace.
  • Using the Johnson–Lindenstrauss Lemma, they argue that the fidelity of the ZO eNTK approximation depends mainly on the number of perturbations rather than on the full parameter dimension.
  • They conclude that the resulting dimension-free approximation error helps explain why ZO methods can scale to fine-tuning large language models despite theoretical concerns.
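
For readers who want the mechanics behind these dynamics, the following is a minimal sketch, assuming a generic scalar loss `loss_fn` and a two-point finite-difference estimator along random Gaussian directions; it illustrates zeroth-order gradient estimation and a single ZO-SGD step, not the paper's code or exact algorithm.

```python
import numpy as np

def zo_gradient(loss_fn, theta, num_perturbations=16, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate using only loss evaluations.

    Averages finite-difference estimates along `num_perturbations` random
    Gaussian directions; no backpropagation is required.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad_est = np.zeros_like(theta)
    for _ in range(num_perturbations):
        u = rng.standard_normal(theta.shape)                 # random direction
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        grad_est += (delta / (2.0 * mu)) * u                 # directional slope times direction
    return grad_est / num_perturbations

def zo_sgd_step(loss_fn, theta, lr=1e-2, **kwargs):
    """One ZO-SGD update: move against the estimated gradient."""
    return theta - lr * zo_gradient(loss_fn, theta, **kwargs)

# Toy usage: a quadratic loss in d dimensions.
d = 1_000
theta = np.ones(d)
quadratic = lambda t: 0.5 * float(t @ t)
new_theta = zo_sgd_step(quadratic, theta, lr=0.01, num_perturbations=32)
print(quadratic(theta), "->", quadratic(new_theta))  # loss typically decreases slightly
```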

Abstract

Classical optimization theory establishes that zeroth-order (ZO) algorithms suffer from a dimension-dependent slowdown, with convergence rates that typically scale with the model dimension, in contrast to first-order methods. However, contrary to these theoretical expectations, a growing body of recent work demonstrates the successful application of ZO methods to fine-tuning Large Language Models (LLMs) with billions of parameters. To explain this paradox, we derive the one-step learning dynamics of ZO SGD, where the empirical Neural Tangent Kernel (eNTK) naturally emerges as the key term governing the learning behavior. Inspection of the eNTK produced by ZO SGD reveals that each element corresponds to the inner product of neural tangent vectors projected onto a random low-dimensional subspace. Thus, by invoking the Johnson–Lindenstrauss Lemma, our analysis shows that the fidelity of the ZO eNTK is governed primarily by the number of perturbations. Crucially, the approximation error depends on the model output size rather than the massive parameter dimension. This dimension-free property provides a theoretical justification for the scalability of ZO methods to LLM fine-tuning tasks. We believe that this kernel-based framework offers a novel perspective for understanding ZO methods within the context of learning dynamics.
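
To make the role of the eNTK concrete, here is a schematic one-step linearization in assumed notation (per-example loss $\ell$, learning rate $\eta$, $n$ Gaussian perturbation directions $u_k$, smoothing scale taken to zero); it follows the standard NTK-style expansion suggested by the abstract rather than reproducing the paper's exact statement:

$$
f_{t+1}(x) \;\approx\; f_t(x) - \eta \sum_{i} \widehat{K}_t(x, x_i)\,\partial_f \ell\big(f_t(x_i), y_i\big),
\qquad
\widehat{K}_t(x, x') \;=\; \frac{1}{n}\sum_{k=1}^{n}\big(\nabla_\theta f_t(x)^{\top} u_k\big)\big(\nabla_\theta f_t(x')^{\top} u_k\big).
$$

Writing $P = n^{-1/2}[u_1, \dots, u_n]^{\top}$, the ZO kernel becomes $\widehat{K}_t(x, x') = \langle P\,\nabla_\theta f_t(x),\, P\,\nabla_\theta f_t(x')\rangle$: an inner product of neural tangent vectors after a random projection onto an $n$-dimensional subspace, which is exactly the setting in which the Johnson–Lindenstrauss Lemma bounds the distortion in terms of $n$ rather than the parameter dimension.

The dimension-free claim can also be checked numerically. The toy snippet below uses assumed stand-ins (random vectors `g1` and `g2` in place of true tangent vectors, Gaussian rows of `U` in place of perturbation directions) and shows the relative error of the projected inner product shrinking as the number of perturbations `n` grows, largely independent of the parameter dimension `d`:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20_000                      # parameter dimension (deliberately large)
g1 = rng.standard_normal(d)     # stand-in for the tangent vector grad_theta f(x1)
g2 = rng.standard_normal(d)     # stand-in for the tangent vector grad_theta f(x2)
exact = g1 @ g2                 # exact eNTK entry <grad f(x1), grad f(x2)>

for n in (8, 64, 512):
    # n random Gaussian perturbation directions, scaled as a JL-style projection
    U = rng.standard_normal((n, d)) / np.sqrt(n)
    approx = (U @ g1) @ (U @ g2)  # inner product after projecting onto span(U)
    rel_err = abs(approx - exact) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    print(f"n={n:4d}  relative error ~ {rel_err:.3f}")
```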