DeepWeightFlow: Re-Basined Flow Matching for Generating Neural Network Weights

arXiv stat.ML / 5/1/2026


Key Points

  • DeepWeightFlow proposes a flow-matching generative model that directly generates complete neural network weights in weight space, addressing limitations of prior methods, which either generate only partial weights or are slow and require fine-tuning of the generated models (a minimal flow-matching sketch follows this list).
  • The approach claims high accuracy and strong scaling across different architectures, network sizes, and data modalities, with generated networks reportedly not needing fine-tuning to perform well.
  • To handle permutation symmetries in neural networks and to improve efficiency for larger models, DeepWeightFlow applies Git Re-Basin and TransFusion for neural-network canonicalization within the generative-weight setting (a small canonicalization sketch appears after the abstract).
  • The paper reports strong transfer learning performance and very fast ensemble generation: hundreds of neural networks can be produced in minutes, substantially faster than diffusion-based methods.
  • Overall, the work aims to enable more efficient and scalable generation of diverse sets of neural networks, potentially accelerating downstream model development and experimentation.
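
To make the core mechanism concrete, below is a minimal sketch of conditional flow matching applied to flattened weight vectors, with few-step Euler sampling that suggests why generation is fast. The names `velocity_net`, `flow_matching_loss`, and `sample_weights`, the linear (rectified-flow-style) probability path, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def flow_matching_loss(velocity_net, w1, sigma_min=1e-4):
    """Conditional flow-matching loss on a batch of flattened weight
    vectors w1 of shape (B, D), drawn from a dataset of trained networks."""
    b = w1.shape[0]
    t = torch.rand(b, 1, device=w1.device)       # time sampled uniformly in [0, 1]
    w0 = torch.randn_like(w1)                    # Gaussian source sample
    # Linear (rectified-flow-style) path from noise toward trained weights
    w_t = (1 - (1 - sigma_min) * t) * w0 + t * w1
    target_v = w1 - (1 - sigma_min) * w0         # target velocity along the path
    pred_v = velocity_net(w_t, t)                # model predicts the velocity field
    return ((pred_v - target_v) ** 2).mean()

@torch.no_grad()
def sample_weights(velocity_net, dim, n_samples=8, n_steps=20, device="cpu"):
    """Generate new weight vectors by integrating the learned ODE with a
    few Euler steps; batching noise vectors yields a whole ensemble at once."""
    w = torch.randn(n_samples, dim, device=device)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((n_samples, 1), i * dt, device=device)
        w = w + velocity_net(w, t) * dt
    return w
```

Because the learned ODE can be integrated in a handful of steps per sample, and many noise vectors can be batched into one forward sweep, this style of sampler is consistent with the paper's claim of generating hundreds of networks in minutes, far faster than iterative diffusion sampling.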

Abstract

Building efficient and effective generative models for neural network weights is a research focus of significant interest, but it faces challenges posed by the high-dimensional weight spaces of modern neural networks and by their symmetries. Several prior generative models are limited to generating partial neural network weights, particularly for larger models such as ResNet and ViT. Those that do generate complete weights struggle with generation speed or require fine-tuning of the generated models. In this work, we present DeepWeightFlow, a Flow Matching model that operates directly in weight space to generate diverse, high-accuracy neural network weights for a variety of architectures, network sizes, and data modalities. The networks generated by DeepWeightFlow do not require fine-tuning to perform well, and the approach scales to large networks. We apply Git Re-Basin and TransFusion for neural network canonicalization in the context of generative weight models, both to account for neural network permutation symmetries and to improve generation efficiency for larger model sizes. The generated networks excel at transfer learning, and ensembles of hundreds of neural networks can be generated in minutes, far exceeding the efficiency of diffusion-based methods. DeepWeightFlow models pave the way for more efficient and scalable generation of diverse sets of neural networks.
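
To illustrate the canonicalization idea, here is a minimal sketch of a Git Re-Basin-style weight-matching step that permutes the hidden units of one MLP layer to align with a reference network. The two-layer setting, the inner-product similarity cost, and the function name are assumptions for illustration; the paper's actual procedure (and its use of TransFusion) may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def canonicalize_hidden_layer(W1, b1, W2, W1_ref, b1_ref):
    """Permute the hidden units of a two-layer MLP (W2 @ act(W1 x + b1))
    to best match a reference network, leaving the computed function
    unchanged: rows of W1/b1 and columns of W2 move together.

    Shapes: W1 (H, D_in), b1 (H,), W2 (D_out, H); *_ref from the reference."""
    # Similarity of each candidate hidden unit to each reference unit
    cost = W1 @ W1_ref.T + np.outer(b1, b1_ref)
    rows, cols = linear_sum_assignment(-cost)    # maximize total similarity
    perm = np.empty_like(cols)
    perm[cols] = rows                            # perm[j] = candidate unit sent to slot j
    return W1[perm], b1[perm], W2[:, perm]
```

Once every trained checkpoint is permuted into a shared canonical pose like this, the generative model no longer has to represent every permuted copy of the same function, which is the usual motivation for canonicalization in weight-space generative models.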