Few Shots Text to Image Retrieval: New Benchmarking Dataset and Optimization Methods

arXiv cs.CV / 3/30/2026


Key Points

  • The paper introduces a new Few-Shot Text-to-Image Retrieval (FSIR) benchmark task to address weaknesses of pre-trained vision-language models on compositional and out-of-distribution (OOD) image-text query pairs.
  • It releases FSIR-BD, the first dataset explicitly tailored for image retrieval with text plus reference example images, covering two compositional subsets (urban scenes and nature species) and emphasizing hard negatives.
  • FSIR-BD includes 38,353 images and 303 queries, with most queries evaluated against a large test corpus (including many positives and hard negatives) and the rest used to form a few-shot reference set (FSR) of exemplar positives and hard negatives.
  • The authors propose two new retrieval optimization methods that use single-shot or few-shot reference examples from FSR and are compatible with any pre-trained image encoder.
  • Experiments show FSIR-BD is a challenging benchmark and that the proposed optimization methods improve retrieval quality over existing baselines, measured by mean Average Precision (mAP).
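The evaluation metric mentioned above, mean Average Precision (mAP), averages per-query Average Precision over all queries. A minimal sketch of the standard computation (not code from the paper) over ranked relevance judgments:

```python
def average_precision(ranked_relevant):
    """AP for one query.

    ranked_relevant: list of booleans, one per retrieved item in rank order,
    True where the item is a ground-truth positive.
    """
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_relevant, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_queries):
    """mAP: mean of per-query AP values."""
    return sum(average_precision(q) for q in all_queries) / len(all_queries)
```

For example, a query whose positives land at ranks 1 and 3 gets AP = (1/1 + 2/3) / 2 ≈ 0.833; mAP then averages such scores across the benchmark's queries.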

Abstract

Pre-trained vision-language models (VLMs) excel at multimodal tasks, commonly encoding images as embedding vectors that are stored in databases and retrieved via approximate nearest neighbor search (ANNS). However, these models struggle with compositional queries and out-of-distribution (OOD) image-text pairs. Inspired by human cognition's ability to learn from minimal examples, we address this performance gap through few-shot learning approaches specifically designed for image retrieval. We introduce the Few-Shot Text-to-Image Retrieval (FSIR) task and its accompanying benchmark dataset, FSIR-BD - the first to explicitly target image retrieval by text accompanied by reference examples, focusing on challenging compositional and OOD queries. The compositional part is divided into urban scenes and nature species, both in specific situations or with distinctive features. FSIR-BD contains 38,353 images and 303 queries, with 82% comprising the test corpus (averaging 37 positives, i.e., ground-truth matches, per query, plus a significant number of hard negatives) and 18% forming the few-shot reference corpus (FSR) of exemplar positive and hard-negative images. Additionally, we propose two novel retrieval optimization methods that leverage single-shot or few-shot reference examples in the FSR to improve performance. Both methods are compatible with any pre-trained image encoder, making them applicable to existing large-scale environments. Our experiments demonstrate that: (1) FSIR-BD provides a challenging benchmark for image retrieval; and (2) our optimization methods outperform existing baselines as measured by mean Average Precision (mAP). Further research into FSIR optimization methods will help narrow the gap between machine and human-level understanding, particularly for compositional reasoning from limited examples.
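The abstract does not spell out the two proposed optimization methods, but the setup — a text query embedding plus a handful of positive and hard-negative reference embeddings from the FSR, compatible with any frozen encoder — can be illustrated with a classic Rocchio-style query refinement. This is a hedged sketch of the general idea, not the paper's actual method; the function names, `alpha`/`beta` weights, and toy vectors are all assumptions for illustration:

```python
import numpy as np

def l2norm(x, axis=-1):
    """L2-normalize vectors so dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def refine_query(q, pos, neg, alpha=0.5, beta=0.25):
    """Rocchio-style update (illustrative, not the paper's method):
    pull the text query embedding toward the mean of few-shot positive
    reference embeddings and push it away from hard-negative ones.
    """
    update = alpha * l2norm(pos).mean(axis=0) - beta * l2norm(neg).mean(axis=0)
    return l2norm(l2norm(q) + update)

def retrieve(query_vec, corpus):
    """Rank corpus images by cosine similarity to the query embedding."""
    scores = l2norm(corpus) @ query_vec
    return np.argsort(-scores)  # best match first
```

Because the refinement operates purely on embedding vectors, it slots in front of any existing ANNS index without retraining the encoder, which is the compatibility property the abstract emphasizes.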