Constraint-based Pre-training: From Structured Constraints to Scalable Model Initialization

arXiv cs.LG / 4/17/2026


Key Points

  • The paper argues that conventional pre-training produces fixed-scale models, which can underperform when deployment requires different model sizes than those used during training.
  • It proposes a constraint-based pre-training framework that separates size-agnostic knowledge into reusable weight templates, while handling size-specific adaptation through lightweight weight scalers.
  • The approach reformulates variable-sized model initialization as a multi-task adaptation problem, enabling flexible construction of model weights for different downstream scales.
  • The proposed method, WeiT, uses Kronecker-based constraints to regularize pre-training, representing parameters via concatenation and weighted aggregation of templates, with the connections controlled by lightweight weight scalers learned from limited data.
  • Extensive experiments report state-of-the-art results for initializing models with varying depths and widths across perception and embodied learning tasks; the benefits extend to both Transformer-based and convolution-based architectures, with faster convergence and better performance even under full training.

Abstract

The pre-training and fine-tuning paradigm has become the dominant approach for model adaptation. However, conventional pre-training typically yields models at a fixed scale, whereas practical deployment often requires models of varying sizes, exposing its limitations when target model scales differ from those used during pre-training. To address this, we propose an innovative constraint-based pre-training paradigm that imposes structured constraints during pre-training to disentangle size-agnostic knowledge into reusable weight templates, while assigning size-specific adaptation to lightweight weight scalers, thereby reformulating variable-sized model initialization as a multi-task adaptation problem. Within this paradigm, we further introduce WeiT, which employs Kronecker-based constraints to regularize the pre-training process. Specifically, model parameters are represented as compositions of weight templates via concatenation and weighted aggregation, with adaptive connections governed by lightweight weight scalers whose parameters are learned from limited data. This design enables flexible and efficient construction of model weights across diverse downstream scales. Extensive experiments demonstrate the efficiency and effectiveness of WeiT, achieving state-of-the-art performance in initializing models with varying depths and widths across a broad range of perception and embodied learning tasks, including Image Classification, Image Generation, and Embodied Control. Moreover, its effectiveness generalizes to both Transformer-based and Convolution-based architectures, consistently enabling faster convergence and improved performance even under full training.
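To make the abstract's core idea concrete, here is a minimal sketch of composing a weight matrix from shared templates via Kronecker products, weighted aggregation, and concatenation. All shapes, names, and the random "learned" values are illustrative assumptions, not the paper's actual implementation; the real WeiT scalers are trained, and the exact Kronecker-based constraint may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a bank of reusable weight templates, intended to
# capture size-agnostic knowledge shared across model scales.
num_templates = 4
a_shape, b_shape = (8, 8), (16, 16)  # np.kron(a, b) -> (128, 128)
templates = [
    (rng.standard_normal(a_shape), rng.standard_normal(b_shape))
    for _ in range(num_templates)
]

def build_weight(scalers):
    """Compose a full-size weight matrix as a scaler-weighted
    aggregation of Kronecker products of the shared templates."""
    return sum(s * np.kron(a, b) for s, (a, b) in zip(scalers, templates))

# Size-specific adaptation: lightweight scaler vectors, which in the
# paper would be learned from limited data (random stand-ins here).
scalers_1 = rng.standard_normal(num_templates)
scalers_2 = rng.standard_normal(num_templates)

W = build_weight(scalers_1)           # one block at the base width
W_wide = np.concatenate(              # a wider downstream model reuses
    [W, build_weight(scalers_2)],     # the same templates and simply
    axis=1,                           # concatenates composed blocks
)
print(W.shape, W_wide.shape)
```

The key property this sketch illustrates is the parameter split: the templates are fixed and shared across all target sizes, so adapting to a new depth or width only requires fitting a handful of scalers per block rather than re-pre-training the full model.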