Random Cloud: Finding Minimal Neural Architectures Without Training

arXiv cs.LG / April 30, 2026


Key Points

  • The paper proposes “Random Cloud,” a training-free neural architecture search method that finds minimal feedforward network topologies without using backpropagation.
  • Instead of the train–prune–retrain cycle used by post-training pruning, Random Cloud evaluates randomly initialized networks and progressively reduces their structure (a minimal sketch of this loop follows the list).
  • The method trains only the final best minimal candidate, reducing compute by avoiding full training of the original large model.
  • Experiments on seven classification benchmarks show that Random Cloud matches or outperforms both pruning baselines on six of the seven datasets, including a statistically significant gain on Sonar (+4.9 pp accuracy vs. magnitude pruning) with an 87% parameter reduction.
  • Runtime also improves in most cases: the method is faster than the magnitude and random pruning baselines on four of five datasets, running at roughly 0.67–0.94× the cost of full training.

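For readers who want the search loop in concrete terms, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the paper's implementation: the fitness proxy (forward-pass accuracy of an untrained network), the single hidden layer, the shrink factor, and the acceptance tolerance are all placeholders that the paper may define differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net(layer_sizes):
    """He-initialized ReLU MLP weights for the given layer widths."""
    return [(rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out)),
             np.zeros(fan_out))
            for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(X, weights):
    """Forward pass only; no backpropagation anywhere in the search."""
    h = X
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # ReLU hidden layers
    W, b = weights[-1]
    return h @ W + b                     # raw logits

def proxy_score(X, y, weights):
    """Training-free fitness: accuracy of the untrained net's predictions.
    (A stand-in proxy; the paper's scoring rule may differ.)"""
    return float((forward(X, weights).argmax(axis=1) == y).mean())

def random_cloud_search(X, y, width=64, cloud_size=50,
                        shrink=0.75, tol=0.02, min_width=2):
    """Sample a 'cloud' of random nets at each width, then progressively
    shrink the topology while random nets of the smaller width still score
    within `tol` of the best score seen so far."""
    n_in, n_out = X.shape[1], int(y.max()) + 1
    best_arch, best_score = None, -np.inf
    while True:
        top = max(proxy_score(X, y, random_net([n_in, width, n_out]))
                  for _ in range(cloud_size))
        if top < best_score - tol:       # quality collapsed: stop shrinking
            break
        best_arch = [n_in, width, n_out]
        best_score = max(best_score, top)
        new_width = int(width * shrink)
        if new_width < min_width or new_width == width:
            break
        width = new_width
    return best_arch, best_score         # only this candidate gets trained
```

On toy data the loop behaves as the key points describe: it returns a narrow architecture without ever invoking a training step, and only that final candidate would then be trained with backpropagation.

```python
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical binary task
arch, score = random_cloud_search(X, y)
print(arch, round(score, 3))              # e.g. a [10, w, 2] topology with small w
```
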
Abstract

I propose the *Random Cloud* method, a training-free approach to neural architecture search that discovers minimal feedforward network topologies through stochastic exploration and progressive structural reduction. Unlike post-training pruning methods that require a full train–prune–retrain cycle, this method evaluates randomly initialized networks without backpropagation, progressively reduces their topology, and only trains the best minimal candidate at the end. I evaluate on 7 classification benchmarks against magnitude pruning and random pruning baselines. Random Cloud matches or outperforms both baselines on 6 of 7 datasets, achieving statistically significant improvements on Sonar (+4.9 pp accuracy, p = 0.017 vs. magnitude pruning) with 87% parameter reduction. Crucially, the method is faster than both pruning baselines on 4 of 5 datasets (0.67–0.94× the cost of full training), since it avoids training the full-size network entirely.