The Detector Teaches Itself: Lightweight Self-Supervised Adaptation for Open-Vocabulary Object Detection

arXiv cs.CV / 5/6/2026


Key Points

  • The paper targets open-vocabulary object detection, where a vision-language model (VLM) is paired with a detector for zero-shot recognition of novel categories.
  • It argues that VLMs pre-trained on full images do not capture local object details well for region-level detection, motivating a dedicated adaptation method.
  • The proposed Decoupled Adaptivity Training (DAT) builds a region-aware pseudo-labeled dataset with a closed-set detector, then fine-tunes the VLM’s visual backbone in a self-supervised manner to better align local features while retaining global semantics.
  • DAT is designed as a plug-and-play module with no inference-time overhead and tunes fewer than 0.8M parameters, making it lightweight to integrate.
  • Experiments on COCO and LVIS show consistent improvements on both novel and known categories, reportedly setting a new state of the art for cooperative open-vocabulary detection.
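The pseudo-labeling step in the bullets above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the confidence threshold, the `detector(image)` return format, and the record layout are all assumptions; the summary only states that a pre-trained closed-set detector produces the region labels and that novel objects may be missing or mislabeled.

```python
def build_pseudo_label_set(images, detector, score_thresh=0.5):
    """Build region-aware pseudo-labels from a pre-trained closed-set detector.

    `detector(image)` is assumed (hypothetically) to return a list of dicts
    with 'box', 'label', and 'score' keys. Regions containing novel objects
    may be missed or mislabeled -- the paper accepts this label noise.
    """
    dataset = []
    for image in images:
        regions = [
            {"box": d["box"], "label": d["label"]}
            for d in detector(image)
            if d["score"] >= score_thresh  # keep only confident detections
        ]
        dataset.append({"image": image, "regions": regions})
    return dataset


# toy usage with a stub detector
stub = lambda img: [
    {"box": (0, 0, 10, 10), "label": "cat", "score": 0.9},
    {"box": (5, 5, 20, 20), "label": "dog", "score": 0.2},  # dropped: low score
]
pseudo = build_pseudo_label_set(["img0"], stub)
```

The low-confidence "dog" box is filtered out, so the resulting record for `img0` keeps only the confident "cat" region.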

Abstract

Open-vocabulary object detection aims to recognize objects from an open set of categories by leveraging vision-language models (VLMs) pre-trained on large-scale image-text data. The cooperative paradigm pairs an object detector with a VLM to achieve zero-shot recognition of novel objects. However, VLMs pre-trained on full images often struggle to capture local object details, limiting their effectiveness in region-level detection. We present Decoupled Adaptivity Training (DAT), a self-supervised fine-tuning approach that improves VLMs for cooperative-model-based object detection. Given a cooperative model consisting of a closed-set detector and a VLM, we first construct a region-aware pseudo-labeled dataset using the pre-trained closed-set detector; regions corresponding to novel objects may be present in this dataset but remain unlabeled or mislabeled. We then fine-tune the visual backbone of the VLM in a decoupled manner, enhancing local feature alignment while preserving global semantic knowledge via weight interpolation. DAT is a plug-and-play module that adds no inference overhead and fine-tunes fewer than 0.8M parameters. Experiments on the COCO and LVIS datasets show that DAT consistently improves detection performance on both novel and known categories, establishing a new state of the art in cooperative open-vocabulary detection.
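The abstract's "weight interpolation" for preserving global semantics is not specified further in this summary; a common realization is a linear blend between the pre-trained and fine-tuned parameter sets. The sketch below assumes that form, and the mixing coefficient `alpha` is a hypothetical hyperparameter, not a value from the paper.

```python
def interpolate_weights(pretrained, finetuned, alpha=0.5):
    """Blend fine-tuned weights back toward the pre-trained VLM backbone.

    alpha = 0 keeps the original (globally pre-trained) weights; alpha = 1
    keeps the fully fine-tuned (locally adapted) weights. Intermediate
    values trade off local feature alignment against global semantic
    knowledge. Both arguments map parameter names to values (tensors in
    practice; plain floats work for this toy example).
    """
    return {name: (1.0 - alpha) * pretrained[name] + alpha * finetuned[name]
            for name in pretrained}


# toy example with scalar "weights"
mixed = interpolate_weights({"w": 0.0, "b": 2.0}, {"w": 1.0, "b": 4.0},
                            alpha=0.25)
# mixed == {"w": 0.25, "b": 2.5}
```

Because the interpolated weights simply replace the backbone's parameters before deployment, this step is consistent with the paper's claim of zero inference-time overhead.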