AI Navigate

Ghosts of Softmax: Complex Singularities That Limit Safe Step Sizes in Cross-Entropy

arXiv cs.LG / 3/17/2026


Key Points

  • The paper shows that cross-entropy optimization is constrained by complex singularities of the softmax partition function, which create logarithmic loss singularities and cap the Taylor convergence radius.
  • For the binary case they derive an exact radius rho* = sqrt(delta^2 + pi^2)/Delta_a, and for multiclass they obtain a lower bound rho_a = pi/Delta_a, where Delta_a is the spread of directional logit derivatives.
  • The bound can be computed with a single Jacobian-vector product, and it pinpoints what makes a step fragile: samples that are both near a decision flip and highly sensitive to the proposed direction tighten the radius.
  • A simple controller enforcing tau <= rho_a improves stability, surviving extreme learning-rate spikes (up to 10,000x) where standard gradient clipping fails.
  • Temperature-scaling experiments confirm the mechanism: normalizing step sizes by rho_a shrinks the spread (standard deviation) of the collapse-onset threshold from 0.992 to 0.164, pointing to a geometric constraint on optimization that acts through Taylor convergence rather than Hessian curvature.
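The bound described above is cheap to evaluate in practice. The sketch below, a minimal illustration rather than the paper's implementation, computes the directional logit derivatives a_k = ∇z_k · v with one Jacobian-vector product, forms the lower bound rho_a = pi / Delta_a, and clamps a proposed step size tau so that the normalized step r = tau/rho_a stays below 1. The toy linear-logit model (`W`, `v`) and function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def directional_radius(jacobian_vp, v):
    """Lower bound rho_a = pi / Delta_a on the Taylor convergence radius
    along direction v, where Delta_a = max_k a_k - min_k a_k is the spread
    of the directional logit derivatives a_k = (dz_k/dtheta) . v."""
    a = jacobian_vp(v)            # one Jacobian-vector product: a = J v
    delta_a = a.max() - a.min()   # spread of directional derivatives
    return np.pi / delta_a if delta_a > 0 else np.inf

def safe_step(tau, rho_a):
    """Enforce tau <= rho_a, i.e. keep the normalized step r = tau/rho_a
    below the safe/dangerous boundary r = 1."""
    return min(tau, rho_a)

# Toy example: linear logits z = W theta, so the Jacobian is W and Jv = W @ v.
W = np.array([[1.0, 0.0],
              [0.0, 3.0],
              [-1.0, 1.0]])
jvp = lambda v: W @ v
v = np.array([1.0, 0.0])          # proposed update direction

rho = directional_radius(jvp, v)  # a = [1, 0, -1], Delta_a = 2, rho_a = pi/2
tau = safe_step(10.0, rho)        # a 10.0 step would overshoot; clamp to rho_a
```

For a real network one would obtain `jvp` from an autodiff framework (e.g. forward-mode `jax.jvp`) rather than an explicit Jacobian, keeping the cost at a single forward pass per direction.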

Abstract

Optimization analyses for cross-entropy training rely on local Taylor models of the loss to predict whether a proposed step will decrease the objective. These surrogates are reliable only inside the Taylor convergence radius of the true loss along the update direction. That radius is set not by real-line curvature alone but by the nearest complex singularity. For cross-entropy, the softmax partition function F=\sum_j \exp(z_j) has complex zeros -- "ghosts of softmax" -- that induce logarithmic singularities in the loss and cap this radius. To make this geometry usable, we derive closed-form expressions under logit linearization along the proposed update direction. In the binary case, the exact radius is \rho^*=\sqrt{\delta^2+\pi^2}/\Delta_a. In the multiclass case, we obtain the lower bound \rho_a=\pi/\Delta_a, where \Delta_a=\max_k a_k-\min_k a_k is the spread of directional logit derivatives a_k=\nabla z_k\cdot v. This bound costs one Jacobian-vector product and reveals what makes a step fragile: samples that are both near a decision flip and highly sensitive to the proposed direction tighten the radius. The normalized step size r=\tau/\rho_a separates safe from dangerous updates. Across six tested architectures and multiple step directions, no model fails for r<1, yet collapse appears once r\ge 1. Temperature scaling confirms the mechanism: normalizing by \rho_a shrinks the onset-threshold spread from standard deviation 0.992 to 0.164. A controller that enforces \tau\le\rho_a survives learning-rate spikes up to 10{,}000\times in our tests, where gradient clipping still collapses. Together, these results identify a geometric constraint on cross-entropy optimization that operates through Taylor convergence rather than Hessian curvature.
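The binary-case radius quoted in the abstract can be recovered directly from the zeros of the partition function. The short derivation below is a sketch consistent with the abstract's definitions; taking \delta to denote the logit margin z_1 - z_2 is an assumption made here for illustration.

```latex
% Linearize the two logits along the update direction v with step t:
%   z_k(t) = z_k + a_k t, where a_k = \nabla z_k \cdot v.
% The partition function along the ray is
%   F(t) = e^{z_1 + a_1 t} + e^{z_2 + a_2 t}.
% F(t) = 0 requires e^{(z_1 - z_2) + (a_1 - a_2) t} = -1, i.e.
%   \delta + \Delta_a t = i\pi (2k+1), \quad k \in \mathbb{Z},
% with \delta = z_1 - z_2 and \Delta_a = a_1 - a_2 (assumed positive).
% The zero nearest the origin sits at t = (i\pi - \delta)/\Delta_a, so
%   \rho^* = |t| = \frac{\sqrt{\delta^2 + \pi^2}}{\Delta_a},
% matching the exact binary radius in the abstract. The loss contains
% \log F, so each such zero is a logarithmic singularity that caps the
% Taylor convergence radius of the loss along v.
```

The multiclass bound \rho_a = \pi/\Delta_a follows the same geometry: the imaginary offset \pi is kept while the real margin term is dropped, yielding a direction-dependent lower bound.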