Abstract
When training a neural network for classification, the feature vectors of the training set are known to collapse to the vertices of a regular simplex, provided the dimension $d$ of the feature space and the number $n$ of classes satisfy $n \leq d+1$. This phenomenon is known as neural collapse. For other applications, such as language models, one instead takes $n \gg d$. Here, the neural collapse phenomenon still occurs, but with different emergent geometric figures. We characterize these figures in the orthoplex regime, where $d+2 \leq n \leq 2d$. The techniques in our analysis primarily involve Radon's theorem and convexity.