VoodooNet: Achieving Analytic Ground States via High-Dimensional Random Projections

arXiv cs.AI / April 20, 2026


Key Points

  • VoodooNet proposes a non-iterative neural network architecture that replaces SGD/backprop with a closed-form analytic solution using a “Galactic Expansion” projection into a very high-dimensional space.
  • The method projects input manifolds into a high-entropy "Galactic" space of dimension d ≫ 784 (the MNIST input size) and then computes the output layer in a single step via the Moore–Penrose pseudoinverse.
  • Experiments report 98.10% accuracy on MNIST and 86.63% on Fashion-MNIST; the Fashion-MNIST result outperforms a 10-epoch SGD baseline (84.41%) while cutting training time by orders of magnitude.
  • The paper observes a near-logarithmic scaling relationship between projection dimensionality and accuracy, implying performance may depend more on the “Galactic” volume than on iterative refinement.
  • The authors position “Magic Hat” as a potential approach for real-time Edge AI by bypassing the traditional training phase in favor of instantaneous manifold discovery.
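The one-step solve described above can be written out explicitly. Assuming the hidden representation is H = σ(XW + b) for a fixed random projection W (the notation below is ours, not the paper's), the output weights β are the minimum-norm least-squares solution, equivalently the ridge-regression limit:

```latex
H = \sigma(XW + b), \qquad W,\ b \ \text{fixed and random (never trained)}
\beta = H^{+} T = \lim_{\lambda \to 0^{+}} \left( H^{\top} H + \lambda I \right)^{-1} H^{\top} T
```

Here T is the one-hot target matrix and H^{+} denotes the Moore–Penrose pseudoinverse, so no gradient steps are needed.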

Abstract

We present VoodooNet, a non-iterative neural architecture that replaces the stochastic gradient descent (SGD) paradigm with a closed-form analytic solution via Galactic Expansion. By projecting input manifolds into a high-dimensional, high-entropy "Galactic" space (d ≫ 784), we demonstrate that complex features can be untangled without the thermodynamic cost of backpropagation. Utilizing the Moore–Penrose pseudoinverse to solve for the output layer in a single step, VoodooNet achieves a classification accuracy of 98.10% on MNIST and 86.63% on Fashion-MNIST. Notably, our results on Fashion-MNIST surpass a 10-epoch SGD baseline (84.41%) while reducing the training time by orders of magnitude. We observe a near-logarithmic scaling law between dimensionality and accuracy, suggesting that performance is a function of "Galactic" volume rather than iterative refinement. This "Magic Hat" approach offers a new frontier for real-time Edge AI, where the traditional training phase is bypassed in favor of instantaneous manifold discovery.
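The pipeline the abstract describes, a fixed random projection into a much higher-dimensional space followed by a one-shot pseudoinverse solve for the output layer, closely resembles a random-feature setup. Below is a minimal sketch on synthetic stand-in data (all names, dimensions, and the tanh nonlinearity are our assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 784-dim MNIST features and 10 class labels.
n, d_in, d_proj, n_classes = 500, 784, 4096, 10
X = rng.standard_normal((n, d_in))
y = rng.integers(0, n_classes, size=n)

# "Galactic Expansion" analogue: a fixed random projection into d >> 784
# dimensions plus a nonlinearity. W and b are drawn once and never trained.
W = rng.standard_normal((d_in, d_proj)) / np.sqrt(d_in)
b = rng.standard_normal(d_proj)
H = np.tanh(X @ W + b)  # hidden representation, shape (n, d_proj)

# One-hot targets for the least-squares solve.
T = np.eye(n_classes)[y]

# Closed-form output layer via the Moore–Penrose pseudoinverse:
# beta = pinv(H) @ T is the minimum-norm least-squares solution. No backprop,
# no epochs: the "training phase" is this single linear solve.
beta = np.linalg.pinv(H) @ T

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
```

Because d_proj exceeds the number of samples here, H almost surely has full row rank and the solve interpolates the training labels exactly; the interesting question the paper raises is how test accuracy scales as d_proj grows.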