TIPSv2: Advancing Vision-Language Pretraining with Enhanced Patch-Text Alignment

arXiv cs.CV / April 15, 2026


Key Points

  • The paper studies a key limitation in vision-language pretraining: models’ difficulty in aligning dense image patch representations with the corresponding text embeddings.
  • It introduces patch-level distillation, finding that a distilled student can achieve patch-text alignment that surpasses the teacher's.
  • It proposes iBOT++ as an upgrade to the masked-image objective by adding loss contributions from unmasked tokens to further strengthen patch-text alignment.
  • It further improves training efficiency and effectiveness by modifying the exponential moving average (EMA) setup and adding a caption sampling strategy that leverages synthetic captions at multiple granularities.
  • The authors compile these advances into TIPSv2, reporting strong results across 9 tasks and 20 datasets, with released code and models for broad downstream use.
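The patch-level distillation idea above can be made concrete with a toy objective. The paper's actual loss is not specified in this summary, so the sketch below assumes a simple per-patch cosine-alignment loss between student and teacher patch embeddings; the function name and shapes are illustrative, not from the paper.

```python
import numpy as np

def patch_distillation_loss(student, teacher, eps=1e-8):
    """Toy patch-level distillation loss: mean (1 - cosine similarity)
    between student and teacher patch embeddings.

    student, teacher: arrays of shape (num_patches, dim).
    NOTE: an illustrative stand-in; the TIPSv2 objective may differ.
    """
    s = student / (np.linalg.norm(student, axis=-1, keepdims=True) + eps)
    t = teacher / (np.linalg.norm(teacher, axis=-1, keepdims=True) + eps)
    cos = np.sum(s * t, axis=-1)       # per-patch cosine similarity
    return float(np.mean(1.0 - cos))   # 0 when patches are perfectly aligned

# identical student/teacher embeddings give a loss near zero
x = np.random.default_rng(0).normal(size=(16, 8))
print(patch_distillation_loss(x, x))
```

Under this kind of objective, each image patch of the student is pulled toward the corresponding teacher patch, which is what lets dense (per-patch) structure transfer during distillation.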

Abstract

Recent progress in vision-language pretraining has enabled significant improvements to many downstream computer vision applications, such as classification, retrieval, segmentation and depth prediction. However, a fundamental capability that these models still struggle with is aligning dense patch representations with text embeddings of corresponding concepts. In this work, we investigate this critical issue and propose novel techniques to enhance this capability in foundational vision-language models. First, we reveal that a patch-level distillation procedure significantly boosts dense patch-text alignment -- surprisingly, the patch-text alignment of the distilled student model strongly surpasses that of the teacher model. This observation inspires us to consider modifications to pretraining recipes, leading us to propose iBOT++, an upgrade to the commonly-used iBOT masked image objective, where unmasked tokens also contribute directly to the loss. This dramatically enhances patch-text alignment of pretrained models. Additionally, to improve vision-language pretraining efficiency and effectiveness, we modify the exponential moving average setup in the learning recipe, and introduce a caption sampling strategy to benefit from synthetic captions at different granularities. Combining these components, we develop TIPSv2, a new family of image-text encoder models suitable for a wide range of downstream applications. Through comprehensive experiments on 9 tasks and 20 datasets, we demonstrate strong performance, generally on par with or better than recent vision encoder models. Code and models are released via our project page at https://gdm-tipsv2.github.io/ .
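The iBOT++ change described above, where unmasked tokens also contribute to the masked-image loss, can be sketched as follows. In iBOT-style training, the student predicts teacher token distributions for masked patches; the variant below simply extends the per-token cross-entropy to all tokens. The weighting and loss details here are assumptions for illustration, not the paper's recipe.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_ce(student_logits, teacher_logits):
    """Per-token cross-entropy H(teacher, student), shape (num_tokens,)."""
    p = softmax(teacher_logits)
    log_q = np.log(softmax(student_logits) + 1e-12)
    return -(p * log_q).sum(axis=-1)

def ibot_loss(student_logits, teacher_logits, mask, include_unmasked=False):
    """iBOT-style masked-image loss over patch tokens.

    mask: boolean (num_tokens,), True where the patch was masked out.
    include_unmasked=True lets unmasked tokens contribute directly to
    the loss (the iBOT++ idea described in this summary; the exact
    formulation in the paper may differ).
    """
    ce = token_ce(student_logits, teacher_logits)
    if include_unmasked:
        return float(ce.mean())      # all tokens contribute
    return float(ce[mask].mean())    # original iBOT: masked tokens only

# usage: 8 patch tokens, 16-way teacher codebook, first 4 tokens masked
rng = np.random.default_rng(1)
sl, tl = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
mask = np.zeros(8, dtype=bool); mask[:4] = True
print(ibot_loss(sl, tl, mask), ibot_loss(sl, tl, mask, include_unmasked=True))
```

The design intuition is that supervising every token, not just the masked ones, gives the dense patch features a direct training signal on clean (unmasked) inputs, which is what the paper credits for the improved patch-text alignment.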