AI Navigate

Reevaluating the Intra-Modal Misalignment Hypothesis in CLIP

arXiv cs.CV / 3/18/2026


Key Points

  • The paper reevaluates the intra-modal misalignment hypothesis in CLIP, arguing there are no extra degrees of freedom for image embedding distances.
  • It shows that language-image trained models (CLIP, SigLIP) and image-image trained models (DINO, SigLIP2) yield similar empirical indicators, challenging a CLIP-specific misalignment story.
  • Experimental results on intra-modal tasks such as retrieval and few-shot classification indicate that addressing task ambiguity, not the supposed misalignment, drives performance.
  • The work prompts a rethink of the theoretical arguments and measurement indicators used to defend the intra-modal misalignment hypothesis.

Abstract

Recent research has suggested that the embeddings produced by CLIP-like contrastive language-image training are suboptimal for image-only tasks. The main theory is that the inter-modal (language-image) alignment loss ignores intra-modal (image-image) alignment, leading to poorly calibrated distances between images. In this study, we question this intra-modal misalignment hypothesis. We reexamine its foundational theoretical argument, the indicators used to support it, and the performance metrics affected. For the theoretical argument, we demonstrate that there are no such supposed degrees of freedom for image embedding distances. For the empirical measures, our findings reveal that they yield similar results for language-image trained models (CLIP, SigLIP) and image-image trained models (DINO, SigLIP2). This indicates that the observed phenomena do not stem from a misalignment specific to the former. Experiments on the commonly studied intra-modal tasks, retrieval and few-shot classification, confirm that addressing task ambiguity, not the supposed misalignment, is key for best results.
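To make the intra-modal setting concrete, here is a minimal sketch of image-to-image retrieval over unit-normalized embeddings, the kind of distances the hypothesis is about. This is a generic illustration, not the paper's code: the random vectors stand in for features from an encoder such as CLIP or DINO, and the shapes are arbitrary.

```python
import numpy as np

# Placeholder "image" embeddings; a real pipeline would use encoder outputs.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((8, 512))  # 8 images, 512-dim features
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Cosine similarity between every pair of images. On the unit sphere this is
# a monotone transform of Euclidean distance, so both rank neighbors the same way.
sim = embeddings @ embeddings.T

query = 0
ranked = np.argsort(-sim[query])         # most similar first (self is rank 0)
neighbors = ranked[ranked != query][:3]  # top-3 neighbors, excluding the query
print(neighbors)
```

The intra-modal misalignment hypothesis claims these image-image similarities are poorly calibrated when the encoder was trained only with an inter-modal loss; the paper argues the similarities behave comparably across language-image and image-image trained models.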