Conformal Cross-Modal Active Learning

arXiv cs.CV / 3/25/2026


Key Points

  • The paper proposes Conformal Cross-Modal Acquisition (CCMA), an active learning framework that leverages vision-language model knowledge to improve data-efficient learning for vision-only models.
  • CCMA uses a teacher-student design where a pretrained VLM provides semantically grounded uncertainty estimates, which are conformally calibrated to guide which samples to label.
  • The method combines multimodal conformal scoring with diversity-aware selection to choose informative and varied training examples (see the sketch after this list).
  • Experiments across multiple benchmarks show CCMA consistently outperforms state-of-the-art active learning baselines, especially those relying only on uncertainty or diversity signals.

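To make the calibration step concrete, here is a minimal split-conformal sketch in the spirit of the description above. The function names, the nonconformity score (one minus the teacher's probability for the true class), and the use of prediction-set size as the calibrated uncertainty signal are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration on a small labeled calibration set.

    cal_probs:  (n, K) teacher (VLM) class probabilities
    cal_labels: (n,)   ground-truth labels
    Returns the finite-sample-corrected (1 - alpha) quantile of the
    nonconformity scores 1 - p(true class) (an assumed, common choice).
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n  # conformal correction
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_set_sizes(pool_probs, threshold):
    """Conformal prediction-set size per unlabeled sample.

    A class is included when its nonconformity score is below the
    calibrated threshold; larger sets signal higher teacher uncertainty.
    """
    return ((1.0 - pool_probs) <= threshold).sum(axis=1)
```

Under this reading, the set size (or a related calibrated score) would serve as the teacher's uncertainty signal that guides which pool samples the student most needs labeled.
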
Abstract

Foundation models for vision have transformed visual recognition with powerful pretrained representations and strong zero-shot capabilities, yet their potential for data-efficient learning remains largely untapped. Active Learning (AL) aims to minimize annotation costs by strategically selecting the most informative samples for labeling, but existing methods largely overlook the rich multimodal knowledge embedded in modern vision-language models (VLMs). We introduce Conformal Cross-Modal Acquisition (CCMA), a novel AL framework that bridges vision and language modalities through a teacher-student architecture. CCMA employs a pretrained VLM as a teacher to provide semantically grounded uncertainty estimates, conformally calibrated to guide sample selection for a vision-only student model. By integrating multimodal conformal scoring with diversity-aware selection strategies, CCMA achieves superior data efficiency across multiple benchmarks. Our approach consistently outperforms state-of-the-art AL baselines, demonstrating clear advantages over methods relying solely on uncertainty or diversity metrics.
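
As a rough illustration of how calibrated teacher uncertainty and diversity might be combined in each acquisition round, the sketch below pre-filters the unlabeled pool by calibrated uncertainty and then runs a greedy farthest-first (k-center-style) pass over student embeddings. The function name, the candidate pre-filtering heuristic, and the specific diversity criterion are assumptions for illustration; the paper's actual selection strategy may differ.

```python
import numpy as np

def ccma_select(pool_feats, pool_uncertainty, budget, candidate_factor=5):
    """Hypothetical acquisition round: uncertainty filter + diverse pick.

    pool_feats:       (N, d) student embeddings of the unlabeled pool
    pool_uncertainty: (N,)   calibrated teacher uncertainty per sample
    budget:           number of samples to send for labeling
    Returns indices into the pool for the selected batch.
    """
    # Keep only the most uncertain candidates (assumed pre-filtering step).
    n_cand = min(len(pool_feats), budget * candidate_factor)
    cand = np.argsort(-pool_uncertainty)[:n_cand]
    feats = pool_feats[cand]
    budget = min(budget, n_cand)

    # Greedy farthest-first traversal for diversity (k-center style):
    # start from the most uncertain candidate, then repeatedly add the
    # point farthest from everything selected so far.
    selected = [0]
    dists = np.linalg.norm(feats - feats[0], axis=1)
    for _ in range(1, budget):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(feats - feats[nxt], axis=1))
    return cand[selected]
```

In this reading, the conformal threshold from the calibration step controls which samples count as uncertain, and the diversity pass prevents the labeled batch from collapsing onto redundant examples.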