Decoupled Prototype Matching with Vision Foundation Models for Few-Shot Industrial Object Detection

arXiv cs.CV / April 30, 2026


Key Points

  • The paper tackles few-shot industrial object detection, where newly introduced objects have only a small number of labeled examples and maintaining large annotated datasets is costly.
  • It proposes a detection framework that uses vision foundation models to build class prototypes from few reference samples via feature extraction.
  • During inference, the method generates object regions with a segmentation model, extracts embeddings for query regions, and performs similarity matching against the stored prototypes.
  • Experiments on three industrial datasets (using the BOP benchmark’s official 2D detection protocol) show competitive results, improving average precision (AP) by 6.9% over the training-free state of the art.
  • The approach supports onboarding new objects from only a few reference images, without requiring CAD models or large-scale annotation, making it more practical for real industrial deployment.
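The pipeline described in the key points can be sketched in a few lines: average the embeddings of the few reference samples into one prototype per class, then assign each segmented query region to the prototype with the highest cosine similarity. This is a minimal illustration, not the paper's implementation; the actual feature extractor, segmentation model, prototype aggregation, and threshold are assumptions here.

```python
import numpy as np

def build_prototypes(ref_features):
    """Average L2-normalized reference embeddings into one prototype per class.

    ref_features: dict mapping class name -> list of 1-D embedding arrays
    (e.g., produced by a vision foundation model; mean pooling is an
    assumed aggregation choice, not taken from the paper).
    """
    prototypes = {}
    for cls, feats in ref_features.items():
        f = np.stack(feats).astype(float)
        f /= np.linalg.norm(f, axis=1, keepdims=True)  # unit-normalize each sample
        proto = f.mean(axis=0)
        prototypes[cls] = proto / np.linalg.norm(proto)
    return prototypes

def match_regions(region_embeddings, prototypes, threshold=0.5):
    """Label each proposed region by its most similar class prototype.

    region_embeddings: (N, D) array of embeddings for segmented query regions.
    Returns a list of (class_or_None, similarity) pairs; regions below the
    (assumed) similarity threshold are left unmatched.
    """
    classes = list(prototypes)
    proto_mat = np.stack([prototypes[c] for c in classes])          # (C, D)
    q = region_embeddings / np.linalg.norm(
        region_embeddings, axis=1, keepdims=True)                   # (N, D)
    sims = q @ proto_mat.T                                          # cosine similarity, (N, C)
    results = []
    for row in sims:
        best = int(row.argmax())
        if row[best] >= threshold:
            results.append((classes[best], float(row[best])))
        else:
            results.append((None, float(row[best])))
    return results
```

In a full system, the region embeddings would come from crops proposed by a segmentation model, and onboarding a new object would amount to adding one more entry to the prototype dictionary, with no retraining.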

Abstract

Industrial object detection systems typically rely on large annotated datasets, which are expensive to collect and challenging to maintain in industrial scenarios where the inventory of objects changes frequently. This work addresses the challenge of few-shot object detection in such scenarios, where only a limited number of labeled samples are available for newly introduced objects. We present a detection framework that leverages vision foundation models to recognize objects with minimal supervision. The method constructs class prototypes by extracting feature representations from a small set of reference samples. For a given query scene during inference, object regions are generated using a segmentation model, and their feature embeddings are matched against the class prototypes via similarity matching. We evaluate the method on three established industrial datasets from the Benchmark for 6D Object Pose Estimation (BOP), following the official 2D object detection evaluation protocol. We demonstrate competitive detection performance, improving AP by 6.9% over state-of-the-art training-free detection methods. Furthermore, the presented method can onboard new objects using only a few reference images, without requiring any CAD models or large annotated datasets. These properties make the approach well-suited for real-world industrial applications.