Revisiting Human-in-the-Loop Object Retrieval with Pre-Trained Vision Transformers

arXiv cs.CV / 4/2/2026


Key Points

  • The paper revisits Human-in-the-Loop Object Retrieval, aiming to find diverse images of a user-specified object category from a large unlabeled collection using only the initial query and iterative relevance feedback, without pre-existing labels.
  • It frames interactive retrieval as an active learning-based binary classification problem, where the system selects informative samples each iteration to be annotated by a user and progressively improves relevance discrimination.
  • The work highlights the added difficulty of multi-object, cluttered scenes, where the target may occupy only a small region and therefore requires localized, instance-aware representations rather than purely global descriptors.
  • By leveraging pre-trained Vision Transformer (ViT) representations, the authors explore design choices such as which object instances to consider per image, what form the annotations should take, how active sample selection should be applied, and which representation methods best balance global context against fine-grained local detail.
  • Experiments on multi-object datasets compare multiple representation strategies and provide practical guidance for building effective interactive object retrieval pipelines driven by active learning.
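The iterative loop the points above describe can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the logistic-regression classifier, the uncertainty-based sample selection, and the simulated "user" oracle are all assumptions chosen for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 32
features = rng.normal(size=(n, d))            # stand-in for pre-trained ViT descriptors
labels = (features[:, 0] > 0).astype(int)     # hidden relevance, played by a simulated user

# Seed with one clearly relevant and one clearly non-relevant example (the "query").
pos = int(np.argmax(features[:, 0]))
neg = int(np.argmin(features[:, 0]))
labeled = [pos, neg]

for _ in range(5):                            # relevance-feedback iterations
    clf = LogisticRegression().fit(features[labeled], labels[labeled])
    probs = clf.predict_proba(features)[:, 1]
    # Active Selection: ask the user to annotate the most uncertain unlabeled sample.
    unlabeled = [i for i in range(n) if i not in labeled]
    pick = min(unlabeled, key=lambda i: abs(probs[i] - 0.5))
    labeled.append(pick)                      # the simulated user provides the label

ranking = np.argsort(-probs)                  # final relevance ranking over the collection
```

Each iteration refines the relevant/non-relevant decision boundary with a handful of user clicks, which is the core trade the task makes: annotation effort against retrieval quality.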

Abstract

Building on existing approaches, we revisit Human-in-the-Loop Object Retrieval, a task that consists of iteratively retrieving images containing objects of a class of interest specified by a user-provided query. Starting from a large unlabeled image collection, the aim is to rapidly identify diverse instances of an object category relying solely on the initial query and the user's Relevance Feedback, with no prior labels. The retrieval process is formulated as a binary classification task, in which the system continuously learns to distinguish images relevant to the query from non-relevant ones through iterative user interaction. This interaction is guided by an Active Learning loop: at each iteration, the system selects informative samples for user annotation, thereby refining retrieval performance. The task is particularly challenging on multi-object datasets, where the object of interest may occupy only a small region of the image within a complex, cluttered scene. Unlike object-centered settings, where global descriptors often suffice, multi-object images call for better-adapted, localized descriptors. In this work, we formulate and revisit the Human-in-the-Loop Object Retrieval task by leveraging pre-trained ViT representations and addressing key design questions: which object instances to consider in an image, what form the annotations should take, how Active Selection should be applied, and which representation strategies best capture the object's features. We compare several representation strategies across multi-object datasets, highlighting trade-offs between capturing the global context and focusing on fine-grained local object details. Our results offer practical insights for the design of effective interactive retrieval pipelines based on Active Learning for object class retrieval.
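The global-vs-local trade-off in the abstract can be made concrete with ViT token outputs. The sketch below is illustrative only: the token shapes, the CLS-token global descriptor, and the box-pooled local descriptor are assumptions standing in for whatever representation strategy the paper evaluates (a real pipeline would take these tokens from an actual pre-trained ViT).

```python
import numpy as np

# Toy stand-in for ViT outputs: 1 CLS token plus a 14x14 grid of patch tokens, dim 384.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(1 + 14 * 14, 384))
cls_token, patch_tokens = tokens[0], tokens[1:].reshape(14, 14, 384)

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Global descriptor: the CLS token summarizes the whole scene, which works for
# object-centered images but dilutes a small target in a cluttered one.
global_desc = l2_normalize(cls_token)

def box_descriptor(patches, y0, y1, x0, x1):
    # Local descriptor: mean-pool only the patch tokens under a box (e.g. one
    # object instance), giving an instance-aware representation.
    region = patches[y0:y1, x0:x1].reshape(-1, patches.shape[-1])
    return l2_normalize(region.mean(axis=0))

local_desc = box_descriptor(patch_tokens, 3, 8, 5, 10)
```

Retrieval then scores images by cosine similarity of these descriptors; the choice between `global_desc` and `local_desc` is exactly the design question the paper's experiments probe.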