Indexing Multimodal Language Models for Large-scale Image Retrieval

arXiv cs.CL / 4/16/2026


Key Points

  • The paper explores using multimodal large language models as training-free similarity estimators for instance-level image-to-image retrieval by converting next-token probabilities from paired-image prompts into similarity scores.
  • It proposes a scalable large-scale retrieval workflow that combines memory-efficient indexing with top-k candidate re-ranking using the MLLM, avoiding specialized retrieval architectures and fine-tuning.
  • Experiments across multiple benchmarks show the approach can outperform task-specific re-rankers outside those re-rankers’ native domains, and that it remains robust to clutter, occlusion, and small objects.
  • The authors identify failure modes under severe appearance changes, suggesting limitations for open-world retrieval and directions for future research.
  • Overall, the work positions MLLMs as a promising alternative component for open-world, large-scale image retrieval pipelines.
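The scoring idea in the first bullet — turning next-token probabilities from a paired-image prompt into a similarity score — reduces to a small computation once the MLLM has produced logits. The sketch below assumes a binary same-instance prompt and the tokens "yes"/"no" as answer candidates; the paper's exact prompt and token choices are not specified here, so these are illustrative assumptions, not the authors' implementation.

```python
import math

def similarity_from_logits(yes_logit: float, no_logit: float) -> float:
    """Map an MLLM's next-token logits for hypothetical 'yes'/'no' answer
    tokens to a similarity score in [0, 1] via a two-way softmax.

    Assumes the model was prompted with a query/candidate image pair and a
    question like "Do these images show the same object instance?" — the
    prompt wording is an assumption for illustration.
    """
    m = max(yes_logit, no_logit)          # subtract max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)         # P(yes) = softmax over the two tokens
```

Because the score is a calibrated probability rather than a raw logit, candidates from different queries can be compared on a common [0, 1] scale.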

Abstract

Multimodal Large Language Models (MLLMs) have demonstrated strong cross-modal reasoning capabilities, yet their potential for vision-only tasks remains underexplored. We investigate MLLMs as training-free similarity estimators for instance-level image-to-image retrieval. Our approach prompts the model with paired images and converts next-token probabilities into similarity scores, enabling zero-shot re-ranking within large-scale retrieval pipelines. This design avoids specialized architectures and fine-tuning, leveraging the rich visual discrimination learned during multimodal pre-training. We address scalability by combining MLLMs with memory-efficient indexing and top-k candidate re-ranking. Experiments across diverse benchmarks show that MLLMs outperform task-specific re-rankers outside their native domains and exhibit superior robustness to clutter, occlusion, and small objects. Despite strong results, we identify failure modes under severe appearance changes, highlighting opportunities for future research. Our findings position MLLMs as a promising alternative for open-world large-scale image retrieval.
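The two-stage pipeline described in the abstract — a cheap embedding index for recall, then the expensive MLLM scorer applied only to the top-k candidates — can be sketched as follows. The index here is a plain exact cosine-similarity search standing in for whatever memory-efficient index the paper uses, and `mllm_score` is a hypothetical callback wrapping the paired-image prompt; both names are assumptions for illustration.

```python
import numpy as np

def retrieve_then_rerank(query_vec, index_vecs, mllm_score, k=5):
    """Stage 1: rank the whole index by cosine similarity over compact
    embeddings (a stand-in for a memory-efficient ANN index).
    Stage 2: re-score only the top-k survivors with the expensive MLLM-based
    estimator `mllm_score(candidate_id) -> float` (hypothetical signature).

    Returns (candidate_id, mllm_score) pairs, best first.
    """
    # Stage 1: cosine similarity of the query against every indexed vector.
    norms = np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec)
    sims = (index_vecs @ query_vec) / (norms + 1e-9)
    topk = np.argsort(-sims)[:k]          # k best first-stage candidates

    # Stage 2: MLLM re-ranking, restricted to k calls instead of |index|.
    rescored = [(int(i), float(mllm_score(int(i)))) for i in topk]
    rescored.sort(key=lambda pair: -pair[1])
    return rescored
```

The design point the abstract emphasizes is cost asymmetry: the MLLM is invoked only k times per query, so the pipeline scales with the index via the cheap first stage while keeping the MLLM's discrimination for the final ordering.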