Love Me, Love My Label: Rethinking the Role of Labels in Prompt Retrieval for Visual In-Context Learning

arXiv cs.CV / 4/7/2026


Key Points

  • Visual in-context learning (VICL) performance depends heavily on selecting the right demonstrative prompts, and existing prompt retrieval methods often ignore whether prompt labels match the query labels.
  • The study finds that visually similar but label-inconsistent prompts can degrade VICL results, while stronger label consistency between query and prompts correlates with better outcomes.
  • To address this, the authors propose LaPR (Label-aware Prompt Retrieval), which builds an image-label joint representation to incorporate label cues explicitly during prompt selection.
  • LaPR also introduces a mixture-of-experts mechanism with query-adaptive routing to handle missing query labels at test time, training the experts with a VICL performance-guided contrastive loss and the router with a label-guided contrastive loss.
  • Experiments across in-context segmentation, detection, and colorization show consistent improvements over prior approaches, with good generalization across feature extractors and cross-fold settings; code is publicly available.
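The mixture-of-experts routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, dimensions, linear experts, and softmax router are all illustrative assumptions, and the weights here are random rather than trained with the paper's contrastive objectives.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class QueryAdaptiveMoE:
    """Toy sketch of a mixture-of-experts encoder: each expert maps an
    image feature to a label-aware embedding, and a router infers
    query-adaptive mixture weights over the experts."""
    def __init__(self, feat_dim=8, embed_dim=4, num_experts=3, seed=0):
        rng = np.random.default_rng(seed)
        # One projection matrix per expert; each expert is meant to
        # capture a specific label mode after training
        self.experts = [rng.standard_normal((feat_dim, embed_dim))
                        for _ in range(num_experts)]
        self.router = rng.standard_normal((feat_dim, num_experts))

    def encode(self, x):                                   # x: (B, feat_dim)
        w = softmax(x @ self.router)                       # (B, E) mixture weights
        outs = np.stack([x @ W for W in self.experts], 1)  # (B, E, embed_dim)
        return (w[..., None] * outs).sum(axis=1)           # (B, embed_dim)

moe = QueryAdaptiveMoE()
emb = moe.encode(np.ones((2, 8)))
print(emb.shape)  # (2, 4)
```

At test time the query label is unknown, so the router's mixture weights are computed from the query image feature alone, which is what makes the encoding query-adaptive.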

Abstract

Visual in-context learning (VICL) enables visual foundation models to handle multiple tasks by steering them with demonstrative prompts. The choice of such prompts largely determines VICL performance, standing out as a key challenge. Prior work has made substantial progress on prompt retrieval and reranking strategies, but mainly focuses on prompt images while overlooking labels. We reveal that these approaches sometimes retrieve visually similar but label-inconsistent prompts, which can degrade VICL performance. Conversely, higher label consistency between query and prompts generally indicates stronger VICL results. Motivated by these findings, we develop a framework named LaPR (Label-aware Prompt Retrieval), which highlights the role of labels in prompt selection. Our framework first designs an image-label joint representation for prompts to incorporate label cues explicitly. Moreover, to handle query labels that are unavailable at test time, we introduce a mixture-of-experts mechanism into the dual encoders with query-adaptive routing. Each expert is expected to capture a specific label mode, while the router infers query-adaptive mixture weights and helps learn label-aware representations. We carefully design an alternating optimization for the experts and the router, with a VICL performance-guided contrastive loss and a label-guided contrastive loss, respectively. Extensive experiments show promising and consistent improvements of LaPR on in-context segmentation, detection, and colorization tasks. Moreover, LaPR generalizes well across feature extractors and cross-fold scenarios, suggesting the importance of label utilization in prompt retrieval for VICL. Code is available at https://github.com/luotc-why/CVPR26-LaPR.
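To make the retrieval side concrete, here is a minimal sketch of ranking prompts by an image-label joint representation. The fusion (a normalized convex combination with mixing weight `alpha`) and all function names are assumptions for illustration; the paper's actual joint representation is learned, not a fixed combination.

```python
import numpy as np

def l2_normalize(v):
    # Normalize along the last axis so dot products are cosine similarities
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

def joint_representation(img_feat, label_feat, alpha=0.5):
    """Hypothetical image-label joint embedding: a normalized convex
    combination of image and label features, so both visual similarity
    and label consistency influence retrieval."""
    return l2_normalize(alpha * l2_normalize(img_feat)
                        + (1 - alpha) * l2_normalize(label_feat))

def retrieve_prompts(query_joint, prompt_joints, k=2):
    """Rank candidate prompts by cosine similarity of joint embeddings
    and return the indices of the top-k prompts."""
    sims = prompt_joints @ query_joint
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
query = joint_representation(rng.standard_normal(16), rng.standard_normal(16))
prompts = np.stack([joint_representation(rng.standard_normal(16),
                                         rng.standard_normal(16))
                    for _ in range(5)])
top_k = retrieve_prompts(query, prompts)
```

A plain image-only retriever corresponds to `alpha=1.0`; the paper's finding that label-inconsistent prompts hurt VICL motivates giving the label term nonzero weight.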