Consistency Beyond Contrast: Enhancing Open-Vocabulary Object Detection Robustness via Contextual Consistency Learning

arXiv cs.CV / March 30, 2026


Key Points

  • The paper argues that open-vocabulary object detection methods often improve cross-modal alignment (language–vision) but overlook within-modality consistency when backgrounds or environments change.
  • It proposes Contextual Consistency Learning (CCL), which combines Contextual Bootstrapped Data Generation (CBDG), to synthesize data with consistent objects across varied backgrounds, and Contextual Consistency Loss (CCLoss), to enforce feature invariance under environmental variation.
  • The framework targets a robustness gap where models may fail to recognize the same object identity across different scenes due to inconsistent contextual cues.
  • Experiments report state-of-the-art gains, improving by +16.3 AP on OmniLabel and +14.9 AP on D3 compared with prior approaches.
  • The authors release public code for CCL, enabling other researchers to reproduce and extend the approach.
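The consistency loss described above can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the following assumes a simple instantiation: penalizing the cosine distance between features of the same object instance extracted from two different backgrounds (the function name and vector representation are hypothetical, not from the paper).

```python
import math

def contextual_consistency_loss(feats_a, feats_b):
    """Hypothetical consistency penalty: 1 - cosine similarity between
    features of the same object seen in two different backgrounds,
    averaged over object instances. Zero when features are identical."""
    assert len(feats_a) == len(feats_b), "need paired object features"
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        dot = sum(x * y for x, y in zip(fa, fb))
        norm_a = math.sqrt(sum(x * x for x in fa))
        norm_b = math.sqrt(sum(x * x for x in fb))
        total += 1.0 - dot / (norm_a * norm_b)
    return total / len(feats_a)
```

In practice such a term would be added to the detector's usual classification and localization losses, so that gradients pull the two context-dependent embeddings of each object toward each other.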

Abstract

Recent advances in open-vocabulary object detection focus primarily on two aspects: scaling up datasets and leveraging contrastive learning to align language and vision modalities. However, these approaches often neglect internal consistency within a single modality, particularly when background or environmental changes occur. This lack of consistency leads to a performance drop because the model struggles to detect the same object in different scenes, which reveals a robustness gap. To address this issue, we introduce Contextual Consistency Learning (CCL), a novel framework that integrates two key strategies: Contextual Bootstrapped Data Generation (CBDG) and Contextual Consistency Loss (CCLoss). CBDG functions as a data generation mechanism, producing images that contain the same objects across diverse backgrounds. This is essential because existing datasets alone do not support our CCL framework. The CCLoss further enforces the invariance of object features despite environmental changes, thereby improving the model's robustness in different scenes. These strategies collectively form a unified framework for ensuring contextual consistency within the same modality. Our method achieves state-of-the-art performance, surpassing previous approaches by +16.3 AP on OmniLabel and +14.9 AP on D3. These results demonstrate the importance of enforcing intra-modal consistency, significantly enhancing model generalization in diverse environments. Our code is publicly available at: https://github.com/bozhao-li/CCL.
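The data-generation idea in the abstract (the same object rendered across diverse backgrounds) can be sketched as a simple copy-paste composition. This is a toy illustration under the assumption that CBDG resembles object-onto-background compositing; images are represented as 2D grids of pixel values, and all function names are hypothetical, not from the released code.

```python
def paste_object(background, obj_patch, top, left):
    """Return a copy of `background` (2D grid of pixel values)
    with `obj_patch` composited at offset (top, left)."""
    out = [row[:] for row in background]  # deep-copy rows
    for i, patch_row in enumerate(obj_patch):
        for j, pixel in enumerate(patch_row):
            out[top + i][left + j] = pixel
    return out

def generate_context_variants(obj_patch, backgrounds, top=0, left=0):
    """Produce one image per background, each containing the same
    object patch -- the paired views a consistency loss would compare."""
    return [paste_object(bg, obj_patch, top, left) for bg in backgrounds]
```

Each pair of generated variants shares the object but differs in context, which is exactly the supervision signal a within-modality consistency loss needs; real pipelines would use generative or segmentation-based compositing rather than raw pixel pasting.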