MEDiC: Multi-objective Exploration of Distillation from CLIP

arXiv cs.CV / 4/1/2026


Key Points

  • MEDiC is a new distillation framework that unifies masked image modeling in both pixel space and latent feature space by combining patch-level token distillation from a frozen CLIP encoder, global CLS alignment, and pixel reconstruction with a lightweight decoder.
  • Experiments show that the three objectives are complementary, with the full combination achieving 73.9% kNN accuracy on ImageNet-1K using ViT-Base.
  • The paper investigates evolved masking strategies using hierarchical clustering and relative position bias, but finds that evolved masking does not improve teacher-guided distillation over simpler block masking, likely due to the teacher’s built-in semantic awareness.
  • The paper reports high sensitivity to loss weighting: small perturbations of the scalar loss weights can reduce kNN accuracy by as much as 17 percentage points.
  • The authors report overall performance of 73.9% kNN accuracy and 85.1% fine-tuning accuracy after 300 epochs with ViT-Base, alongside a systematic study of the design space.
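The paper's exact formulation is not reproduced here, but the three-objective combination described above can be sketched as a weighted sum of a patch-level distillation term, a CLS-alignment term, and a pixel-reconstruction term. The weights `w_patch`, `w_cls`, and `w_pix` and the use of cosine distance for the distillation terms are assumptions for illustration, not the paper's confirmed choices:

```python
import numpy as np

def cosine_distill_loss(student, teacher):
    """Mean (1 - cosine similarity) between student and teacher tokens."""
    s = student / np.linalg.norm(student, axis=-1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))

def medic_style_loss(student_patches, teacher_patches,
                     student_cls, teacher_cls,
                     recon_pixels, target_pixels,
                     w_patch=1.0, w_cls=1.0, w_pix=1.0):
    """Hypothetical three-objective loss: patch distillation from a frozen
    CLIP teacher + global CLS alignment + pixel reconstruction (MSE).
    Weight values are placeholders; the paper finds results are highly
    sensitive to them."""
    l_patch = cosine_distill_loss(student_patches, teacher_patches)
    l_cls = cosine_distill_loss(student_cls[None, :], teacher_cls[None, :])
    l_pix = float(np.mean((recon_pixels - target_pixels) ** 2))
    return w_patch * l_patch + w_cls * l_cls + w_pix * l_pix
```

With identical student/teacher features and perfect reconstruction, the loss is zero; mismatched features increase the distillation terms, which is the behavior the scalar weights trade off.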

Abstract

Masked image modeling (MIM) methods typically operate in either raw pixel space (reconstructing masked patches) or latent feature space (aligning with a pre-trained teacher). We present MEDiC (Multi-objective Exploration of Distillation from CLIP), a framework that combines both spaces in a single pipeline through three complementary objectives: patch-level token distillation from a frozen CLIP encoder, global CLS alignment, and pixel reconstruction via a lightweight decoder. We conduct a systematic investigation of the design space surrounding this multi-objective framework. First, we show that all three objectives provide complementary information, with the full combination reaching 73.9% kNN accuracy on ImageNet-1K. Second, we introduce hierarchical clustering with relative position bias for evolved masking and find that, despite producing more semantically coherent masks than prior methods, evolved masking does not outperform simple block masking in the teacher-guided distillation setting, a finding we attribute to the teacher's inherent semantic awareness. Third, we reveal that optimal scalar loss weights are extremely fragile, with small perturbations causing drops of up to 17 percentage points in kNN accuracy. Our framework achieves 73.9% kNN and 85.1% fine-tuning accuracy with ViT-Base at 300 epochs.
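The block masking that the paper finds sufficient in the teacher-guided setting is typically implemented by painting random rectangular regions onto a patch grid until a target ratio is masked. The grid size, block-size range, and ratio below are illustrative defaults, not the paper's reported hyperparameters:

```python
import numpy as np

def block_mask(grid=14, mask_ratio=0.5, min_block=2, max_block=5, rng=None):
    """Boolean patch mask built from random rectangular blocks.

    Repeatedly samples a block height/width and top-left corner, marking
    those patches as masked, until at least mask_ratio of the grid is
    covered. All parameter defaults are assumptions for illustration.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mask = np.zeros((grid, grid), dtype=bool)
    target = int(mask_ratio * grid * grid)
    while mask.sum() < target:
        h = rng.integers(min_block, max_block + 1)
        w = rng.integers(min_block, max_block + 1)
        top = rng.integers(0, grid - h + 1)
        left = rng.integers(0, grid - w + 1)
        mask[top:top + h, left:left + w] = True
    return mask
```

Because blocks overlap, the final masked fraction slightly overshoots the target; the paper's finding is that such simple masks already suffice when the teacher provides the semantic signal.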