AI Navigate

Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights

arXiv cs.LG · March 13, 2026


Key Points

  • Pretraining yields a distribution over parameter vectors, with task-specific experts already present in the support of that distribution.
  • In small models, expert solutions occupy a negligible fraction of the parameter space; in large, well-pretrained models, task experts densely populate the neighborhood of the pretrained weights.
  • The authors propose a simple, fully parallel post-training method: sample N random parameter perturbations, select the top K, and ensemble predictions via majority vote.
  • Despite its simplicity, this approach is competitive with standard post-training methods (PPO, GRPO, and evolution strategies) on contemporary large-scale models.

Abstract

Pretraining produces a learned parameter vector that is typically treated as a starting point for further iterative adaptation. In this work, we instead view the outcome of pretraining as a distribution over parameter vectors, whose support already contains task-specific experts. We show that in small models such expert solutions occupy a negligible fraction of the volume of this distribution, making their discovery reliant on structured optimization methods such as gradient descent. In contrast, in large, well-pretrained models the density of task-experts increases dramatically, so that diverse, task-improving specialists populate a substantial fraction of the neighborhood around the pretrained weights. Motivated by this perspective, we explore a simple, fully parallel post-training method that samples N parameter perturbations at random, selects the top K, and ensembles predictions via majority vote. Despite its simplicity, this approach is competitive with standard post-training methods such as PPO, GRPO, and ES for contemporary large-scale models.
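The sample–select–vote procedure described above can be sketched in a few lines. The following is a minimal illustration on a toy linear classifier, not the paper's actual setup: the model, data, noise scale `sigma`, and the use of accuracy on a held-out set as the selection score are all hypothetical stand-ins for the large pretrained models and task rewards the authors use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pretrained model": a linear classifier on 2-D data.
def predict(w, X):
    return (X @ w > 0).astype(int)

def accuracy(w, X, y):
    return float((predict(w, X) == y).mean())

# Synthetic task data and an imperfect "pretrained" weight vector.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
w_pre = np.array([0.8, 0.3])

N, K, sigma = 64, 8, 0.3  # samples drawn, experts kept, perturbation scale

# 1) Sample N random perturbations around the pretrained weights.
candidates = [w_pre + sigma * rng.normal(size=w_pre.shape) for _ in range(N)]

# 2) Select the top K candidates by task score.
top_k = sorted(candidates, key=lambda w: accuracy(w, X, y), reverse=True)[:K]

# 3) Ensemble the K experts' predictions via majority vote.
votes = np.stack([predict(w, X) for w in top_k])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)

print("pretrained accuracy:", accuracy(w_pre, X, y))
print("ensemble accuracy:  ", float((ensemble_pred == y).mean()))
```

Note that every candidate is evaluated independently, so steps 1 and 2 parallelize trivially across workers — the property the authors contrast with iterative methods like PPO or GRPO.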