Weak-to-Strong Knowledge Distillation Accelerates Visual Learning

arXiv cs.CV / 4/20/2026

📰 News · Models & Research

Key Points

  • The paper proposes a new knowledge distillation strategy that uses a weaker teacher to accelerate the training of a strong student, reversing the usual teacher→student “compression/accuracy” direction.
  • It introduces a plug-and-play recipe that freezes the weaker teacher, applies distillation only during early training, and then disables it once the student reaches or surpasses teacher-level performance.
  • Experiments on ImageNet and CIFAR classification show that target accuracy thresholds are reached much earlier, with up to 4.8× faster training measured in epochs.
  • The approach generalizes to other vision tasks, including object detection on COCO (1.7× epoch speedup) and diffusion-based generation on CIFAR-10 (2.5× earlier target-FID crossing measured in steps).
  • Overall, the results suggest the method can act as a universal training speedup mechanism for visual learning across multiple modalities and benchmarks.
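
The early-distillation recipe in the key points can be sketched in a few lines. The snippet below is an illustrative NumPy mock-up, not code from the paper: it shows a Hinton-style combined loss (cross-entropy plus temperature-scaled KL to the frozen weak teacher) and a gate that permanently disables the distillation term once the student's validation accuracy reaches the teacher's. All names (`distill_loss`, `EarlyDistillGate`) and hyperparameters are assumptions for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Combined objective: (1-alpha)*CE on hard labels
    + alpha*T^2*KL(teacher || student) on softened logits.
    The teacher is frozen; only its logits are consumed here."""
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return (1 - alpha) * ce.mean() + alpha * (T ** 2) * kl.mean()

class EarlyDistillGate:
    """Turn distillation off, permanently, once the student matches
    the weak teacher's validation accuracy; train on labels alone after."""
    def __init__(self, teacher_acc):
        self.teacher_acc = teacher_acc
        self.active = True

    def update(self, student_val_acc):
        if self.active and student_val_acc >= self.teacher_acc:
            self.active = False  # one-way switch: never re-enabled
        return self.active
```

In a training loop one would call `gate.update(val_acc)` each evaluation round and fall back to plain cross-entropy once it returns `False`; the "plug-and-play" claim corresponds to this gate wrapping an otherwise unchanged training recipe.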

Abstract

Large-scale visual learning is increasingly limited by training cost. Existing knowledge distillation methods transfer from a stronger teacher to a weaker student for compression or final-accuracy improvement. We instead investigate distillation as a means to accelerate the training of strong students. We propose a generalizable plug-and-play recipe that freezes a weaker teacher, applies distillation only in early training, and turns it off once the student reaches or surpasses teacher-level performance. For ImageNet and CIFAR classification, this strategy reaches target thresholds much earlier, with up to 4.8 times speedup measured in epochs. We confirm that the method generalizes to other tasks, reporting a 1.7 times epoch speedup for object detection on the COCO dataset and, measured in steps, a 2.5 times earlier target-FID crossing for diffusion generation on the CIFAR-10 dataset. These findings validate our method as a universal speedup mechanism for visual learning.