On the Cone Effect and Modality Gap in Medical Vision-Language Embeddings

arXiv cs.LG / 3/19/2026

Key Points

  • The paper analyzes the cone effect and modality gap in medical vision-language embeddings and introduces a lightweight post-hoc mechanism that keeps pretrained encoders frozen while continuously controlling cross-modal separation with a single hyperparameter λ (a minimal sketch follows this list).
  • This enables systematic study of how the modality gap impacts downstream multimodal performance without costly retraining; the authors evaluate both generalist (CLIP, SigLIP) and medically specialized (BioMedCLIP, MedSigLIP) models.
  • Results show that reducing an excessive modality gap generally improves performance, with medical datasets exhibiting stronger sensitivity to gap modulation; however, completely collapsing the gap is not universally optimal, and intermediate separation often yields the best results.
  • The findings position the modality gap as a tunable property of multimodal representations, guiding task- and domain-specific tuning rather than pursuing universal minimization.
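The summary does not spell out the mechanism itself; one common post-hoc scheme in this line of work translates embeddings along the inter-modal mean-difference direction. The sketch below assumes that scheme; `modulate_gap`, `img_emb`, `txt_emb`, and `lam` are illustrative names, and the paper's actual transformation may differ.

```python
import numpy as np

def modulate_gap(img_emb, txt_emb, lam):
    """Translate L2-normalized image/text embeddings along the
    inter-modal mean-difference direction.

    lam = 0.0 leaves the original gap intact; lam = 1.0 shifts both
    clouds so their (pre-normalization) means coincide. Intermediate
    values interpolate, making the separation continuously tunable.
    NOTE: an assumed translation-based scheme, not necessarily the
    paper's exact mechanism.
    """
    gap = img_emb.mean(axis=0) - txt_emb.mean(axis=0)  # modality-gap vector
    img_shifted = img_emb - 0.5 * lam * gap
    txt_shifted = txt_emb + 0.5 * lam * gap
    # Re-project onto the unit sphere so cosine similarities stay comparable.
    img_shifted /= np.linalg.norm(img_shifted, axis=1, keepdims=True)
    txt_shifted /= np.linalg.norm(txt_shifted, axis=1, keepdims=True)
    return img_shifted, txt_shifted
```

Since the encoders never change, this transformation makes the gap a single dial: at λ = 0 the pretrained geometry is untouched, and increasing λ pulls the two modality clouds together.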

Abstract

Vision-Language Models (VLMs) exhibit a characteristic "cone effect" in which nonlinear encoders map embeddings into highly concentrated regions of the representation space, contributing to cross-modal separation known as the modality gap. While this phenomenon has been widely observed, its practical impact on supervised multimodal learning, particularly in medical domains, remains unclear. In this work, we introduce a lightweight post-hoc mechanism that keeps pretrained VLM encoders frozen while continuously controlling cross-modal separation through a single hyperparameter λ. This enables systematic analysis of how the modality gap affects downstream multimodal performance without expensive retraining. We evaluate generalist (CLIP, SigLIP) and medically specialized (BioMedCLIP, MedSigLIP) models across diverse medical and natural datasets in supervised multimodal settings. Results consistently show that reducing excessive modality gap improves downstream performance, with medical datasets exhibiting stronger sensitivity to gap modulation; however, fully collapsing the gap is not always optimal, and intermediate, task-dependent separation yields the best results. These findings position the modality gap as a tunable property of multimodal representations rather than a quantity that should be universally minimized.
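Given such a transformation, the analysis the abstract describes amounts to sweeping λ and measuring downstream supervised performance. Here is a minimal sketch of that sweep, assuming paired precomputed frozen-encoder embeddings, a naive concatenation fusion, and a logistic-regression probe; none of these evaluation choices are confirmed by the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sweep_lambda(img_emb, txt_emb, labels, lambdas=np.linspace(0.0, 1.0, 11)):
    """Score a simple supervised probe at each lambda value.

    Encoders stay frozen throughout: only the precomputed embeddings
    are transformed, so the whole sweep is cheap post-hoc analysis.
    Reuses modulate_gap from the sketch after the Key Points above.
    """
    scores = {}
    for lam in lambdas:
        img_s, txt_s = modulate_gap(img_emb, txt_emb, lam)
        fused = np.concatenate([img_s, txt_s], axis=1)  # per-sample fusion
        probe = LogisticRegression(max_iter=1000)
        scores[float(lam)] = cross_val_score(probe, fused, labels, cv=5).mean()
    return scores
```

Plotting the returned scores against λ would surface exactly the pattern the paper reports: an intermediate λ (partial gap reduction) often outperforming both the untouched geometry (λ = 0) and full collapse.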