On the Role of Batch Size in Stochastic Conditional Gradient Methods

arXiv cs.LG / 3/24/2026


Key Points

  • The paper analyzes how batch size affects momentum-based stochastic conditional gradient methods (including Scion) when the objective satisfies a μ-Kurdyka-Łojasiewicz (μ-KL) condition.
  • It derives a new theoretical framework that explicitly models the interaction among stepsize, batch size, and stochastic noise, showing a regime-dependent effect on convergence/accuracy.
  • The results indicate that increasing the batch size improves performance up to a critical threshold, after which the benefits saturate and performance can eventually degrade under a fixed token budget.
  • The theory predicts an optimal stepsize magnitude that matches large-scale training practices, and the authors provide principled guidelines for choosing batch size and stepsize.
  • Experiments on NanoGPT support the predicted scaling regimes, and the paper proposes an adaptive training strategy that increases batch size and sequence length while maintaining convergence guarantees.
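To make the class of methods concrete, here is a minimal sketch of one momentum-based stochastic conditional gradient step over an ℓ2-ball constraint. This is an illustrative generic form, not the paper's exact algorithm: the function names, the ℓ2-ball linear minimization oracle (LMO), and the constants `beta` and `gamma` are my own assumptions; Scion's actual update and constraint set may differ.

```python
import numpy as np

def lmo_l2_ball(d, radius=1.0):
    # Linear minimization oracle over an l2 ball:
    # argmin_{||s|| <= radius} <d, s> = -radius * d / ||d||
    norm = np.linalg.norm(d)
    return -radius * d / norm if norm > 0 else np.zeros_like(d)

def momentum_cg_step(x, d_prev, stoch_grad, beta=0.3, gamma=0.02, radius=1.0):
    """One momentum-based stochastic conditional gradient step (generic sketch).

    d_prev:     previous momentum gradient estimate
    stoch_grad: minibatch gradient at x (its variance shrinks roughly like
                sigma^2 / batch_size, which is where batch size enters)
    """
    d = (1 - beta) * d_prev + beta * stoch_grad  # momentum-averaged gradient
    s = lmo_l2_ball(d, radius)                   # extreme point from the LMO
    x_new = (1 - gamma) * x + gamma * s          # convex-combination update
    return x_new, d
```

The batch size only enters through the noise level of `stoch_grad`, which is the interaction the paper's analysis makes explicit alongside the stepsize `gamma` and the momentum weight `beta`.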

Abstract

We study the role of batch size in stochastic conditional gradient methods under a μ-Kurdyka-Łojasiewicz (μ-KL) condition. Focusing on momentum-based stochastic conditional gradient algorithms (e.g., Scion), we derive a new analysis that explicitly captures the interaction between stepsize, batch size, and stochastic noise. Our study reveals a regime-dependent behavior: increasing the batch size initially improves optimization accuracy but, beyond a critical threshold, the benefits saturate and can eventually degrade performance under a fixed token budget. Notably, the theory predicts the magnitude of the optimal stepsize and aligns well with empirical practices observed in large-scale training. Leveraging these insights, we derive principled guidelines for selecting the batch size and stepsize, and propose an adaptive strategy that increases batch size and sequence length during training while preserving convergence guarantees. Experiments on NanoGPT are consistent with the theoretical predictions and illustrate the emergence of the predicted scaling regimes. Overall, our results provide a theoretical framework for understanding batch size scaling in stochastic conditional gradient methods and offer guidance for designing efficient training schedules in large-scale optimization.
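The abstract does not spell out the adaptive strategy, so the following is a purely hypothetical sketch of the general idea it describes: growing batch size and sequence length in phases while respecting a fixed token budget. The function name, the geometric growth factor, and the per-phase budget fraction are all my own illustrative choices, not the paper's schedule.

```python
def adaptive_schedule(total_tokens, init_batch=32, init_seqlen=256,
                      growth=2, phase_fraction=0.5):
    """Hypothetical phased schedule: spend a fraction of the remaining token
    budget per phase, multiplying batch size and sequence length by `growth`
    between phases. Illustrative only; the paper's schedule may differ."""
    phases = []
    remaining = total_tokens
    batch, seqlen = init_batch, init_seqlen
    while remaining > batch * seqlen:
        tokens_this_phase = int(remaining * phase_fraction)
        steps = max(1, tokens_this_phase // (batch * seqlen))
        phases.append({"batch": batch, "seqlen": seqlen, "steps": steps})
        remaining -= steps * batch * seqlen
        batch *= growth
        seqlen *= growth
    return phases
```

Early phases take many cheap, noisy steps; later phases take fewer, lower-noise steps, matching the digest's point that batch-size gains saturate past a threshold under a fixed token budget.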