SDDF: Specificity-Driven Dynamic Focusing for Open-Vocabulary Camouflaged Object Detection

arXiv cs.CV / 3/30/2026


Key Points

  • The paper introduces SDDF (Specificity-Driven Dynamic Focusing), a method for open-vocabulary camouflaged object detection that targets failures caused by high visual similarity between camouflaged objects and their backgrounds.
  • It builds a new benchmark, OVCOD-D, by augmenting selected camouflaged object images with fine-grained text descriptions to support open-vocabulary evaluation.
  • The approach uses multimodal LLM-generated sub-descriptions but filters out confusing or overly decorative textual modifiers via a sub-description principal component contrastive fusion strategy.
  • It further improves discrimination using specificity-guided regional weak alignment and dynamic focusing to strengthen localization of camouflaged objects under an open-set setting.
  • Under the open-set evaluation setting, the proposed method achieves an AP of 56.4 on the OVCOD-D benchmark.
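The "sub-description principal component contrastive fusion" step can be pictured as weighting each sub-description embedding by how well it agrees with the dominant direction of the whole set, so that confusing or overly decorative modifiers contribute little. The paper does not publish this routine; the sketch below is a plausible NumPy interpretation, and the function name, shapes, and softmax weighting are assumptions.

```python
import numpy as np

def fuse_sub_descriptions(sub_embeds: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of principal-component contrastive fusion:
    down-weight sub-description embeddings that disagree with the
    dominant (principal) direction, then fuse by weighted average.
    sub_embeds: (n_subs, dim), e.g. text-encoder embeddings."""
    # L2-normalize each sub-description embedding
    x = sub_embeds / np.linalg.norm(sub_embeds, axis=1, keepdims=True)
    # Principal component of the centered embeddings via SVD
    centered = x - x.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc = vt[0]
    # Contrastive weights: softmax over |cosine| with the principal
    # axis, so outlier (noisy) sub-descriptions get small weight
    scores = np.abs(x @ pc)
    w = np.exp(scores) / np.exp(scores).sum()
    fused = (w[:, None] * x).sum(axis=0)
    return fused / np.linalg.norm(fused)
```

In this reading, the fused vector serves as a single denoised text prompt per object, which is what the regional alignment stage would then consume.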

Abstract

Open-vocabulary object detection (OVOD) aims to detect known and unknown objects in the open world by leveraging text prompts. Benefiting from the emergence of large-scale vision-language pre-trained models, OVOD has demonstrated strong zero-shot generalization capabilities. However, when dealing with camouflaged objects, detectors often fail to distinguish and localize objects because the visual features of the objects and the background are highly similar. To bridge this gap, we construct a benchmark named OVCOD-D by augmenting carefully selected camouflaged object images with fine-grained textual descriptions. Because available camouflaged object datasets are limited in scale, we adopt detectors pre-trained on large-scale object detection datasets as our baselines, as they possess stronger zero-shot generalization ability. The specificity-aware sub-descriptions generated by multimodal large language models still contain confusing and overly decorative modifiers. To mitigate such interference, we design a sub-description principal component contrastive fusion strategy that reduces noisy textual components. Furthermore, to address the challenge that the visual features of camouflaged objects are highly similar to those of their surrounding environment, we propose a specificity-guided regional weak alignment and dynamic focusing method, which strengthens the detector's ability to discriminate camouflaged objects from the background. Under the open-set evaluation setting, the proposed method achieves an AP of 56.4 on the OVCOD-D benchmark.
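The "regional weak alignment and dynamic focusing" idea can be illustrated as scoring each candidate region against the specificity-aware text embedding and then sharpening the resulting map so that background-like regions fade out. The paper gives no reference implementation; the following is a minimal sketch under that interpretation, with the function name, `gamma`, and the focal-style margin all hypothetical.

```python
import numpy as np

def dynamic_focus(region_feats: np.ndarray,
                  text_embed: np.ndarray,
                  gamma: float = 2.0) -> np.ndarray:
    """Hypothetical sketch of specificity-guided dynamic focusing:
    weakly align each region feature with a specificity-aware text
    embedding, then sharpen the alignment map so near-background
    (low-margin) regions are suppressed.
    region_feats: (n_regions, dim); text_embed: (dim,)"""
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sim = r @ t  # weak region-text alignment scores (cosine)
    # Focal-style modulation: keep only responses above the mean and
    # raise them to a power, so background-like regions fade out
    margin = np.clip(sim - sim.mean(), 0.0, None)
    focus = margin ** gamma
    s = focus.sum()
    # Fall back to uniform weights if no region stands out
    return focus / s if s > 0 else np.full_like(sim, 1.0 / len(sim))
```

The returned weights could modulate region proposals or attention in the detector head; the key intuition is that camouflage makes raw region-text similarities nearly flat, so an explicit sharpening step is needed before localization.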