Robust and Fast Training via Per-Sample Clipping

arXiv stat.ML / 5/5/2026


Key Points

  • The paper introduces PS-Clip-SGD, a robust gradient estimator that uses per-sample gradient clipping to improve training stability under heavy-tailed gradient noise.
  • The authors provide theoretical results showing optimal in-expectation convergence rates for non-convex optimization, along with high-probability convergence guarantees that incur only polylogarithmic overhead in the failure probability.
  • Experiments indicate that PS-Clip-SGD trains AlexNet on CIFAR-100 more effectively than both SGD with momentum and standard (global) gradient clipping, even after considering the extra compute from per-sample clipping.
  • The study also finds that with gradient accumulation, clipping at the mini-batch level can improve performance with essentially no additional computational cost, challenging the common approach of clipping only after completing all accumulation steps.
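To make the difference between the two estimators concrete, here is a minimal NumPy sketch contrasting per-sample clipping (clip each sample's gradient to norm at most a threshold `tau`, then average) with standard global clipping (average first, then clip once). The function names and the threshold name `tau` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def ps_clip_gradient(per_sample_grads, tau):
    """Per-sample clipped gradient estimate.

    per_sample_grads: array of shape (batch_size, dim), one gradient row per sample.
    tau: clipping threshold (hypothetical hyperparameter name).
    Each row is scaled down only when its norm exceeds tau, then rows are averaged.
    """
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    return (per_sample_grads * scale).mean(axis=0)

def global_clip_gradient(per_sample_grads, tau):
    """Standard (global) clipping: average the gradients, then clip once."""
    g = per_sample_grads.mean(axis=0)
    norm = np.linalg.norm(g)
    return g * min(1.0, tau / max(norm, 1e-12))
```

With a heavy-tailed outlier in the batch, the per-sample estimator limits that single sample's influence before it can dominate the average, which is the robustness property the paper's analysis targets.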

Abstract

We propose a robust gradient estimator based on per-sample gradient clipping and analyze its properties both theoretically and empirically. We show that the resulting method, per-sample clipped SGD (PS-Clip-SGD), achieves optimal in-expectation convergence rates for non-convex optimization problems under heavy-tailed gradient noise. Moreover, we establish high-probability convergence guarantees that match the in-expectation rates up to polylogarithmic factors in the failure probability. We complement our theoretical results with multiple numerical experiments. In particular, we demonstrate that PS-Clip-SGD outperforms both vanilla SGD with momentum and standard gradient clipping when training AlexNet on the CIFAR-100 dataset, even after accounting for the additional computational time caused by per-sample clipping. We also empirically show that, in the presence of gradient accumulation, applying clipping at the mini-batch level can improve training performance while incurring virtually no additional computational cost. This finding is particularly interesting, as it contradicts the common practice of applying clipping only after all accumulation steps have been completed.
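The gradient-accumulation finding in the abstract can be sketched as follows: clip each accumulated mini-batch gradient before adding it to the accumulator, rather than clipping once after all accumulation steps. This is a hedged illustration of the two variants being compared, not the paper's code; the function name, the `clip_each` flag, and the threshold `tau` are assumptions made for the example.

```python
import numpy as np

def accumulate_with_clipping(minibatch_grads, tau, clip_each=True):
    """Accumulate gradients over several micro-batches.

    clip_each=True clips each mini-batch gradient before accumulation
    (the variant the paper reports as helpful); clip_each=False clips
    only the final accumulated gradient (the common practice).
    """
    acc = np.zeros_like(minibatch_grads[0])
    for g in minibatch_grads:
        if clip_each:
            n = np.linalg.norm(g)
            g = g * min(1.0, tau / max(n, 1e-12))
        acc += g
    acc /= len(minibatch_grads)
    if not clip_each:
        n = np.linalg.norm(acc)
        acc = acc * min(1.0, tau / max(n, 1e-12))
    return acc
```

Clipping inside the loop reuses the norms of gradients that are already materialized at each accumulation step, which is why the abstract notes it adds virtually no computational cost.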