ConMeZO: Adaptive Descent-Direction Sampling for Gradient-Free Finetuning of Large Language Models

arXiv stat.ML · April 21, 2026

Key Points

  • The paper introduces ConMeZO, a derivative-free (zeroth-order) optimization method for fine-tuning large language models that avoids backpropagation memory overhead.
  • ConMeZO speeds up convergence by adaptively sampling descent directions within a cone around a momentum-based estimate instead of using uniform random directions in the high-dimensional parameter space.
  • The authors provide theoretical analysis showing ConMeZO matches MeZO’s worst-case convergence rate.
  • Experiments on natural-language fine-tuning tasks indicate ConMeZO can be up to 2× faster than MeZO while maintaining the low-memory benefits of zeroth-order training.
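The cone-restricted sampling described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: the function name, the fixed cone half-angle `theta`, and the Gaussian-plus-projection construction are assumptions about one natural way to realize the idea.

```python
import numpy as np

def sample_cone_direction(momentum, theta, rng):
    """Draw a unit direction at angle `theta` from the momentum estimate.

    Hypothetical sketch of cone-restricted sampling: decompose a random
    Gaussian vector into components along and orthogonal to the momentum,
    then recombine them at a fixed cone angle. Assumes `momentum` is nonzero.
    """
    m_hat = momentum / np.linalg.norm(momentum)
    v = rng.standard_normal(momentum.shape)
    v_perp = v - (v @ m_hat) * m_hat      # strip the component along momentum
    v_perp /= np.linalg.norm(v_perp)
    # Unit vector on the cone of half-angle `theta` around m_hat
    return np.cos(theta) * m_hat + np.sin(theta) * v_perp
```

By construction the sample satisfies ⟨d, m̂⟩ = cos θ, so a small θ concentrates the search near the momentum direction, while θ = π/2 recovers a direction with no alignment to it at all.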

Abstract

Zeroth-order (derivative-free) optimization, exemplified by MeZO, is an attractive strategy for finetuning large language models (LLMs) because it eliminates the memory overhead of backpropagation. However, it converges slowly due to the curse of dimensionality inherent in searching for descent directions in the high-dimensional parameter space of billion-scale LLMs. We propose ConMeZO, a novel zeroth-order optimizer that accelerates convergence through adaptive directional sampling. Instead of drawing the direction uniformly at random, ConMeZO restricts sampling to a cone centered around a momentum estimate. This concentrates the search in directions where the true gradient is more likely to lie and thus mitigates the effect of high dimensionality. We prove that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirically, when finetuning LLMs on natural-language tasks, ConMeZO is up to 2× faster than MeZO while retaining the low memory footprint of zeroth-order methods.
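To make the backpropagation-free claim concrete, here is a minimal sketch of a two-point zeroth-order update along a given unit direction `d` (in ConMeZO, `d` would be drawn from a cone around the momentum estimate). Function and parameter names are illustrative assumptions, not the paper's code; the point is that only two forward passes are needed, and the resulting rank-one estimate also feeds the momentum buffer the cone is centered on.

```python
import numpy as np

def zo_update(params, loss_fn, d, momentum, eps=1e-3, lr=1e-2, beta=0.9):
    """Two-point zeroth-order step along a unit direction `d` (sketch).

    MeZO-style methods estimate the directional derivative from two loss
    evaluations, with no backward pass; hyperparameter defaults here are
    assumptions for illustration only.
    """
    # Finite-difference estimate of the directional derivative along d
    g = (loss_fn(params + eps * d) - loss_fn(params - eps * d)) / (2 * eps)
    grad_est = g * d                      # rank-one surrogate for the gradient
    # Momentum buffer: what ConMeZO centers its sampling cone on
    momentum = beta * momentum + (1 - beta) * grad_est
    return params - lr * grad_est, momentum
```

On a simple quadratic loss, a step with a small learning rate can only decrease the loss along the sampled direction, which is the per-step behavior the convergence analysis builds on.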