Exploring Hierarchical Consistency and Unbiased Objectness for Open-Vocabulary Object Detection

arXiv cs.CV / 4/28/2026


Key Points

  • The paper targets the limitations of open-vocabulary object detection (OVD), which typically relies on vision-language models to create pseudo labels for novel classes but can misassign labels and produce unreliable objectness scores.
  • It introduces a hierarchical confidence calibration (HCC) method that improves class label estimation by enforcing consistency across hierarchical semantic levels (class, super-category, and sub-category).
  • It proposes LoCLIP, a parameter-efficient adaptation of CLIP that adds an objectness token to reduce bias toward base classes in region proposal networks (RPNs) and better estimate objectness for novel categories.
  • Experiments on major OVD benchmarks such as COCO and LVIS show the approach sets a new state of the art across standard evaluation settings.
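To make the hierarchical-consistency idea concrete, here is a minimal sketch of how a class prediction for a region could be cross-checked against its super-category prediction. The category names, the mapping, and the geometric-mean fusion rule are illustrative assumptions, not the paper's actual HCC formulation.

```python
import math

# Illustrative hierarchy: each fine-grained class maps to a super-category.
# (Names and mapping are assumptions for demonstration only.)
SUPER = {"sedan": "vehicle", "truck": "vehicle", "beagle": "animal", "tabby": "animal"}

def calibrated_score(class_probs, super_probs):
    """Down-weight a class prediction when its super-category disagrees.

    class_probs: dict class -> region-level probability (e.g. from a VLM)
    super_probs: dict super-category -> probability for the same region
    Returns dict class -> geometric mean of class and super-category confidence.
    (The geometric mean is one simple fusion choice, used here for illustration.)
    """
    return {
        c: math.sqrt(p * super_probs.get(SUPER[c], 0.0))
        for c, p in class_probs.items()
    }

# A region whose top class-level label is "beagle", but whose super-category
# evidence strongly favors "vehicle": the calibrated ranking flips.
class_probs = {"sedan": 0.30, "truck": 0.25, "beagle": 0.40, "tabby": 0.05}
super_probs = {"vehicle": 0.85, "animal": 0.15}
scores = calibrated_score(class_probs, super_probs)
best = max(scores, key=scores.get)  # → "sedan"
```

In this toy example, the raw class scores would assign the pseudo label "beagle", but the super-category evidence contradicts it, so the calibrated score demotes it below "sedan" and "truck". This is the kind of cross-level disagreement that a consistency check can use to filter unreliable pseudo labels.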

Abstract

Conventional object detectors typically operate under a closed-set assumption, limiting recognition to a predefined set of base classes seen during training. Open-vocabulary object detection (OVD) addresses this limitation by leveraging vision-language models (VLMs) to generate pseudo labels for novel object classes. However, existing OVD methods suffer from two critical drawbacks: (1) inaccurate class label assignments, as VLMs are optimized for image-level predictions rather than the region-level predictions required for pseudo labeling, and (2) unreliable objectness scores from region proposal networks (RPNs) trained exclusively on base object classes. To address these issues, we propose a novel pseudo labeling framework for OVD. Our approach introduces a hierarchical confidence calibration (HCC) technique, which ensures reliable class label estimation by assessing consistency across hierarchical semantic levels (class, super- and sub-category). We also present LoCLIP, a parameter-efficient adaptation of CLIP that incorporates an objectness token to mitigate the base-class bias of RPNs and provide reliable objectness estimations for novel object classes. Extensive experiments on standard OVD benchmarks, including COCO and LVIS, demonstrate that our approach clearly sets a new state of the art, validating its effectiveness. Project site: https://cvlab.yonsei.ac.kr/projects/HCC