VFM$^{4}$SDG: Unveiling the Power of VFMs for Single-Domain Generalized Object Detection

arXiv cs.CV / 4/24/2026

📰 News · Models & Research

Key Points

  • The paper addresses single-domain generalized object detection (SDGOD), where domain shifts like changes in weather, lighting, and imaging severely reduce performance in unseen environments.
  • It shows via analytical experiments that degradation is mainly driven by increased missed detections, stemming from reduced cross-domain stability in both the detector encoding stage (object-background and inter-instance relations) and the decoding stage (semantic-spatial alignment of queries).
  • To tackle this, it proposes VFM$^{4}$SDG, a dual-prior learning framework that uses a frozen vision foundation model (VFM) as a transferable cross-domain stability prior for detector representation learning and query modeling.
  • The method includes Cross-domain Stable Relational Prior Distillation for more robust relational modeling during encoding, and Semantic-Contextual Prior-based Query Enhancement that injects category semantic prototypes and global visual context into queries during decoding.
  • Extensive experiments indicate VFM$^{4}$SDG achieves consistent improvements over existing state-of-the-art methods on standard SDGOD benchmarks, across two mainstream DETR-based detectors, demonstrating robustness and generality.
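The encoding-stage idea — using a frozen VFM as a cross-domain stability prior for relational modeling — can be sketched as a relation-map distillation: compute pairwise feature similarities (object-background and inter-instance relations) for both the detector and the frozen VFM, then pull the detector's relation map toward the VFM's. This is a minimal NumPy sketch; the function names and the MSE objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def relation_matrix(feats: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities among token features.

    feats: (N, D) array of N token/region features.
    Returns an (N, N) relation map capturing object-background and
    inter-instance structure.
    """
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def relational_distill_loss(student_feats: np.ndarray,
                            teacher_feats: np.ndarray) -> float:
    """Match the detector's (student) relation map to the frozen-VFM
    (teacher) relation map. MSE is used here purely for illustration."""
    r_student = relation_matrix(student_feats)
    r_teacher = relation_matrix(teacher_feats)
    return float(np.mean((r_student - r_teacher) ** 2))
```

Because the loss compares *relations* rather than raw features, the detector is free to use its own feature space while inheriting the VFM's more domain-stable similarity structure.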

Abstract

In real-world scenarios, continual changes in weather, illumination, and imaging conditions cause significant domain shifts, leading detectors trained on a single source domain to degrade severely in unseen environments. Existing single-domain generalized object detection (SDGOD) methods mainly rely on data augmentation or domain-invariant representation learning, but pay limited attention to detector mechanisms, exhibiting clear limitations under complex domain shifts. Through analytical experiments, we find that performance degradation is dominated by an increase in missed detections, which fundamentally arises from reduced cross-domain stability of the detector: object-background and inter-instance relations become less stable in the encoding stage, while semantic-spatial alignment of query representations also becomes harder to maintain in the decoding stage. To this end, we propose VFM$^{4}$SDG, a dual-prior learning framework for SDGOD, which introduces a frozen vision foundation model (VFM) as a transferable cross-domain stability prior into detector representation learning and query modeling. In the encoding stage, we propose Cross-domain Stable Relational Prior Distillation to enhance the robustness of object-background and inter-instance relational modeling. In the decoding stage, we propose Semantic-Contextual Prior-based Query Enhancement, which injects category-level semantic prototypes and global visual context into queries to improve their semantic recognition and spatial localization stability in unseen domains. Extensive experiments show that the proposed method consistently outperforms existing SOTA methods on standard SDGOD benchmarks and two mainstream DETR-based detectors, demonstrating its effectiveness, robustness, and generality.
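The decoding-stage idea — injecting category-level semantic prototypes and global visual context into the queries — can be sketched as a soft prototype readout per query plus a global-context term blended into each query. This is a minimal NumPy sketch under stated assumptions: the attention form, the additive blend, and the `alpha` weight are illustrative choices, not the paper's actual query-enhancement module.

```python
import numpy as np

def enhance_queries(queries: np.ndarray,
                    prototypes: np.ndarray,
                    feat_map: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Inject semantic and contextual priors into DETR-style object queries.

    queries:    (Q, D) object queries from the decoder.
    prototypes: (C, D) category-level semantic prototypes
                (e.g. derived from a frozen VFM).
    feat_map:   (N, D) flattened image tokens; their mean serves as
                a global visual context vector.
    """
    # Soft-attend each query to the category prototypes.
    logits = queries @ prototypes.T                        # (Q, C)
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    semantic = attn @ prototypes                           # (Q, D) prototype readout
    context = feat_map.mean(axis=0, keepdims=True)         # (1, D) global context
    # Blend both priors into the queries; alpha controls prior strength.
    return queries + alpha * (semantic + context)
```

The intuition is that the prototype readout stabilizes a query's semantic identity across domains, while the global-context term anchors its spatial reasoning to the current image.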