Chain-of-Models Pre-Training: Rethinking Training Acceleration of Vision Foundation Models

arXiv cs.CV · April 15, 2026


Key Points

  • The paper introduces Chain-of-Models Pre-Training (CoM-PT), a training-acceleration method for vision foundation models that targets the entire model family rather than training each model independently.
  • CoM-PT builds a “model chain” in ascending order of model size, where only the smallest model is fully pre-trained and larger models learn via sequential inverse knowledge transfer by reusing knowledge across parameter and feature spaces.
  • Experiments on 45 datasets covering zero-shot and fine-tuning show that CoM-PT achieves mostly better-than-baseline performance while significantly reducing training cost.
  • The method scales efficiently: adding more models to the chain can increase overall efficiency, e.g., up to a 72% reduction in computational complexity when ViT-L is the largest model in the chain.
  • The authors report that as model family size grows (e.g., 3→4→7 models), the acceleration ratio can jump substantially, and they open-source the code with suggested extensions to more computation-heavy settings like large language model pre-training.

Abstract

In this paper, we present Chain-of-Models Pre-Training (CoM-PT), a novel performance-lossless training acceleration method for vision foundation models (VFMs). This approach fundamentally differs from existing acceleration methods in its core motivation: rather than optimizing each model individually, CoM-PT is designed to accelerate the training pipeline at the model family level, scaling efficiently as the model family expands. Specifically, CoM-PT establishes a pre-training sequence for the model family, arranged in ascending order of model size, called the model chain. In this chain, only the smallest model undergoes standard individual pre-training, while the other models are efficiently trained through sequential inverse knowledge transfer from their smaller predecessors by jointly reusing the knowledge in the parameter space and the feature space. As a result, CoM-PT enables all models to achieve performance that is mostly superior to standard individual training while significantly reducing training cost, as extensively validated across 45 datasets spanning zero-shot and fine-tuning tasks. Notably, its efficient scaling property yields a remarkable phenomenon: training more models results in even higher efficiency. For instance, when pre-training on CC3M: i) given ViT-L as the largest model, progressively prepending smaller models to the model chain reduces computational complexity by up to 72%; ii) within a fixed model size range, as the VFM family scales across 3, 4, and 7 models, the acceleration ratio of CoM-PT exhibits a striking leap: from 4.13X to 5.68X to 7.09X. Since CoM-PT is naturally agnostic to specific pre-training paradigms, we open-source the code to spur further extensions in more computationally intensive scenarios, such as large language model pre-training.
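The abstract describes two reuse mechanisms along the model chain: parameter-space reuse (a larger model's weights are initialized from its smaller, already-trained predecessor) and feature-space reuse (the larger model's features are aligned with the predecessor's during training). The paper does not spell out the exact operators here, so the sketch below is a hypothetical, minimal illustration of those two ideas: `expand_weights` places the trained small weight matrix as a sub-block of the larger one, and `feature_transfer_loss` is a simple projected-MSE alignment term. Function names, the block-initialization scheme, and the loss form are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def expand_weights(w_small, d_out, d_in):
    """Parameter-space reuse (assumed form): initialize a larger layer by
    embedding the smaller model's trained weights as a sub-block; the
    remaining entries get a small random init. Illustrative only."""
    w_large = np.random.randn(d_out, d_in) * 0.02
    r, c = w_small.shape
    w_large[:r, :c] = w_small  # carry over the predecessor's trained knowledge
    return w_large

def feature_transfer_loss(f_student, f_teacher, proj):
    """Feature-space reuse (assumed form): align the larger model's features
    with its smaller predecessor's through a projection, using an MSE."""
    return np.mean((f_student @ proj - f_teacher) ** 2)

# Hypothetical 3-model chain in ascending size order:
# only the smallest model is fully pre-trained; each successor
# starts from an expansion of its predecessor's parameters.
dims = [64, 128, 256]
w = np.random.randn(dims[0], dims[0])  # stands in for the fully pre-trained smallest model
for d in dims[1:]:
    w = expand_weights(w, d, d)
print(w.shape)  # final (largest) model's weight shape: (256, 256)
```

In a real run, each expanded model would then be briefly trained with the feature-transfer loss added to its pre-training objective, rather than trained from scratch, which is where the reported cost savings would come from.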