Model Evolution Under Zeroth-Order Optimization: A Neural Tangent Kernel Perspective

arXiv cs.LG / 3/24/2026


Key Points

  • The paper studies zeroth-order (ZO) optimization for neural networks, where gradients are estimated using only forward passes and backpropagation is avoided to save memory.
  • It introduces the Neural Zeroth-order Kernel (NZK) to characterize how neural models evolve in function space under ZO updates, addressing the difficulty caused by noisy stochastic gradient estimates.
  • For linear models, the authors prove that the expected NZK is invariant during training and derive a closed-form expression for model evolution under squared loss in terms of the first and second moments of the random perturbation directions.
  • The analysis extends to linearized neural networks, interpreting ZO updates as a form of kernel gradient descent under the NZK framework.
  • Experiments on MNIST, CIFAR-10, and Tiny ImageNet support the theory and show convergence acceleration when using a single shared random vector.
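To make the forward-pass-only setup concrete, here is a minimal sketch of a generic two-point (SPSA-style) zeroth-order gradient estimator; the paper's exact estimator, direction distribution, and hyperparameters may differ, so treat the function name and defaults as illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, theta, mu=1e-4, num_dirs=64, rng=None):
    """Generic two-point ZO gradient estimate (illustrative sketch).

    Averages (f(theta + mu*u) - f(theta - mu*u)) / (2*mu) * u over
    random Gaussian directions u; only evaluations of f are needed,
    no backpropagation.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(theta)
    for _ in range(num_dirs):
        u = rng.standard_normal(theta.shape)
        # Finite-difference directional derivative times the direction.
        g += (f(theta + mu * u) - f(theta - mu * u)) / (2 * mu) * u
    return g / num_dirs

# Sanity check on a quadratic: f(theta) = ||theta||^2 / 2, whose true
# gradient is theta itself, so the estimate should concentrate around it.
theta = np.array([1.0, -2.0, 0.5])
f = lambda t: 0.5 * np.dot(t, t)
g_hat = zo_gradient(f, theta, num_dirs=5000, rng=np.random.default_rng(0))
```

Averaging over many directions reduces the estimator's variance; the paper's observed acceleration with a single *shared* random vector suggests that how directions are drawn and reused materially shapes the induced kernel, which is exactly what the NZK analysis characterizes.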

Abstract

Zeroth-order (ZO) optimization enables memory-efficient training of neural networks by estimating gradients via forward passes only, eliminating the need for backpropagation. However, the stochastic nature of gradient estimation significantly obscures the training dynamics, in contrast to the well-characterized behavior of first-order methods under Neural Tangent Kernel (NTK) theory. To address this, we introduce the Neural Zeroth-order Kernel (NZK) to describe model evolution in function space under ZO updates. For linear models, we prove that the expected NZK remains constant throughout training and depends explicitly on the first and second moments of the random perturbation directions. This invariance yields a closed-form expression for model evolution under squared loss. We further extend the analysis to linearized neural networks. Interpreting ZO updates as kernel gradient descent via NZK provides a novel perspective for potentially accelerating convergence. Extensive experiments across synthetic and real-world datasets (including MNIST, CIFAR-10, and Tiny ImageNet) validate our theoretical results and demonstrate acceleration when using a single shared random vector.
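The mechanics behind the linear-model claim can be sketched as follows (the notation here is illustrative, not necessarily the paper's). For a linear model $f_\theta(x) = \theta^\top x$ with squared loss $\mathcal{L}(\theta) = \tfrac{1}{2}(f_\theta(x_i) - y_i)^2$, the two-point ZO estimate along a random direction $u$ tends, as the smoothing radius $\mu \to 0$, to the directional derivative times $u$:

\[
\hat{g} \;=\; \frac{\mathcal{L}(\theta + \mu u) - \mathcal{L}(\theta - \mu u)}{2\mu}\, u
\;\xrightarrow{\ \mu \to 0\ }\; \big(\nabla \mathcal{L}^\top u\big)\, u
\;=\; r_i\,\big(x_i^\top u\big)\, u,
\qquad r_i = f_\theta(x_i) - y_i .
\]

One update $\theta^+ = \theta - \eta \hat{g}$ then moves the function at a test point $x$ by

\[
f_{\theta^+}(x) - f_\theta(x) = -\eta\, r_i\, \big(x_i^\top u\big)\big(u^\top x\big),
\qquad
\mathbb{E}_u\!\left[f_{\theta^+}(x) - f_\theta(x)\right]
= -\eta\, r_i\; x_i^\top\, \mathbb{E}\!\left[u u^\top\right] x .
\]

The expected kernel $K(x_i, x) = x_i^\top\, \mathbb{E}[u u^\top]\, x$ depends only on the second moment of the perturbation directions and not on $\theta$, consistent with the invariance result; for standard Gaussian $u$ (so $\mathbb{E}[u u^\top] = I$) it reduces to the first-order NTK $x_i^\top x$ of a linear model.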