Optimal Rates for Pure ε-Differentially Private Stochastic Convex Optimization with Heavy Tails
arXiv cs.LG / 4/9/2026
Key Points
- The paper investigates stochastic convex optimization under heavy-tailed gradients while enforcing pure ε-differential privacy, replacing worst-case Lipschitz assumptions with only bounded k-th moment conditions.
- It resolves an open problem by characterizing, up to logarithmic factors, the minimax optimal excess-risk rate for pure ε-DP heavy-tailed SCO, which matches the rates previously known under approximate (ε,δ)-DP in this setting.
- The authors propose an algorithm that achieves the optimal rate and runs in polynomial time with high probability; under additional conditions, such as polynomially bounded worst-case Lipschitz parameters, it runs in polynomial time with probability 1.
- For several structured loss classes (e.g., hinge/ReLU-type and absolute-value losses over Euclidean balls/ellipsoids/polytopes), the same excess-risk guarantee is obtained with polynomial-time complexity with probability 1 even when the worst-case Lipschitz constant is infinite.
- The method is built on a new framework for privately optimizing Lipschitz extensions of the empirical loss, and the optimality claim is complemented by a high-probability lower bound on excess risk (see the sketches after this list).
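
To fix notation, the following is the standard heavy-tailed DP-SCO setup these points describe. The symbols (loss f, convex domain 𝒦, moment bound G, sample size n, dimension d) are conventional in this literature and are assumed here rather than copied from the paper; the reference rate at the end is the approximate-DP rate from prior work, which the summary above says the pure ε-DP rate matches up to logarithmic factors.

```latex
% Standard heavy-tailed DP-SCO setup (conventional notation, assumed here).
% Population loss over a convex domain K, with data \xi drawn from distribution D:
F(x) = \mathbb{E}_{\xi \sim \mathcal{D}}\big[f(x;\xi)\big], \qquad x \in \mathcal{K}.
% Bounded k-th moment condition replacing a worst-case Lipschitz bound:
\sup_{x \in \mathcal{K}} \ \mathbb{E}_{\xi \sim \mathcal{D}}\big[\|\nabla f(x;\xi)\|^{k}\big] \le G^{k}, \qquad k > 1.
% An \varepsilon-DP algorithm A run on a dataset S of n samples is judged by its excess risk:
\mathrm{ExcessRisk}(A) = \mathbb{E}\big[F(A(S))\big] - \min_{x \in \mathcal{K}} F(x).
% For reference, the optimal rate known under approximate (\varepsilon,\delta)-DP scales,
% up to logarithmic factors and problem-scale constants, as
\frac{1}{\sqrt{n}} + \Big(\frac{\sqrt{d}}{n\varepsilon}\Big)^{\frac{k-1}{k}}.
```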