Abstract
Optimization analyses for cross-entropy training rely on local Taylor models of the loss to predict whether a proposed step will decrease the objective. These surrogates are reliable only inside the Taylor convergence radius of the true loss along the update direction. That radius is set not by real-line curvature alone but by the nearest complex singularity. For cross-entropy, the softmax partition function F=\sum_j \exp(z_j) has complex zeros -- ``ghosts of softmax'' -- that induce logarithmic singularities in the loss and cap this radius. To make this geometry usable, we derive closed-form expressions under logit linearization along the proposed update direction. In the binary case, the exact radius is \rho^*=\sqrt{\delta^2+\pi^2}/\Delta_a. In the multiclass case, we obtain the lower bound \rho_a=\pi/\Delta_a, where \Delta_a=\max_k a_k-\min_k a_k is the spread of directional logit derivatives a_k=\nabla z_k\cdot v. This bound costs one Jacobian-vector product and reveals what makes a step fragile: samples that are both near a decision flip and highly sensitive to the proposed direction tighten the radius. The normalized step size r=\tau/\rho_a separates safe from dangerous updates. Across six tested architectures and multiple step directions, no model fails for r<1, yet collapse appears once r\ge 1. Temperature scaling confirms the mechanism: normalizing by \rho_a shrinks the onset-threshold spread from standard deviation 0.992 to 0.164. A controller that enforces \tau\le\rho_a survives learning-rate spikes up to 10{,}000\times in our tests, where gradient clipping still collapses. Together, these results identify a geometric constraint on cross-entropy optimization that operates through Taylor convergence rather than Hessian curvature.
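The multiclass bound above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it uses a toy linear logit model and a central finite difference in place of an autodiff Jacobian-vector product to obtain the directional logit derivatives a_k = \nabla z_k \cdot v, then forms \rho_a = \pi/\Delta_a. All function names and the model are hypothetical.

```python
import numpy as np

def logits(theta, x, n_classes=3):
    # Toy model: logits z(theta) = W x, with W the flattened parameter vector.
    W = theta.reshape(n_classes, -1)
    return W @ x

def taylor_radius(theta, x, v, eps=1e-6):
    """Lower bound rho_a = pi / Delta_a on the Taylor convergence radius
    along update direction v, per the abstract's multiclass formula.

    a_k = dz_k/dtau along v is approximated by a central finite difference;
    in practice this would be a single Jacobian-vector product.
    """
    a = (logits(theta + eps * v, x) - logits(theta - eps * v, x)) / (2 * eps)
    delta_a = a.max() - a.min()          # spread of directional derivatives
    return np.pi / delta_a

# Example: W rows [1,0], [0,0], [-1,0]; direction v equal to W itself and
# input x = [1,0] give a = [1, 0, -1], so Delta_a = 2 and rho_a = pi/2.
theta = np.array([1.0, 0.0, 0.0, 0.0, -1.0, 0.0])
x = np.array([1.0, 0.0])
rho = taylor_radius(theta, x, theta)

# A step of size tau is flagged safe only when r = tau / rho_a < 1,
# matching the controller rule tau <= rho_a described above.
tau = 1.0
r = tau / rho
```

Samples whose logits respond strongly to v (large \Delta_a) shrink \rho_a, which is exactly the fragility the abstract describes.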