OmniOVCD: Streamlining Open-Vocabulary Change Detection with SAM 3

arXiv cs.CV / 4/27/2026


Key Points

  • The paper introduces OmniOVCD, a standalone open-vocabulary change detection (OVCD) framework for remote sensing that reduces reliance on predefined land-cover categories.
  • It leverages SAM 3’s decoupled output heads and proposes SFID (Synergistic Fusion to Instance Decoupling) to fuse semantic, instance, and presence outputs into land-cover masks and then split them into instance-level masks for comparison.
  • This approach aims to improve category recognition accuracy while preserving instance-level consistency across images, leading to more reliable change masks.
  • Experiments on four benchmarks (LEVIR-CD, WHU-CD, S2Looking, SECOND) report state-of-the-art performance with class-average IoU scores of 67.2, 66.5, 24.5, and 27.1, outperforming prior methods.
  • The authors provide an open-source implementation at the linked GitHub repository.
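For readers unfamiliar with the reported metric: class-average IoU averages per-class intersection-over-union between predicted and ground-truth label maps. A minimal NumPy sketch of that computation (illustrative only; the paper's exact evaluation protocol may differ, e.g. in how absent classes are handled):

```python
import numpy as np

def class_average_iou(pred, gt, num_classes):
    """Mean IoU over classes, skipping classes absent from both maps.

    pred, gt: integer label maps of identical shape.
    """
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class appears in neither map; skip it
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example: class 0 IoU = 1/2, class 1 IoU = 2/3
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(class_average_iou(pred, gt, num_classes=2))  # → 0.5833...
```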

Abstract

Change Detection (CD) is a fundamental task in remote sensing that monitors the evolution of land cover over time. Building on it, Open-Vocabulary Change Detection (OVCD) introduces a new requirement: reducing the reliance on predefined categories. Existing training-free OVCD methods mostly use CLIP to identify categories and need extra models such as DINO to extract features; combining different models, however, often causes feature-matching problems and makes the system unstable. Recently, the Segment Anything Model 3 (SAM 3) was introduced. It integrates segmentation and identification capabilities within one promptable model, offering new possibilities for the OVCD task. In this paper, we propose OmniOVCD, a standalone framework designed for OVCD. Leveraging the decoupled output heads of SAM 3, we propose a Synergistic Fusion to Instance Decoupling (SFID) strategy. SFID first fuses the semantic, instance, and presence outputs of SAM 3 to construct land-cover masks, and then decomposes them into individual instance masks for change comparison. This design preserves high accuracy in category recognition and maintains instance-level consistency across images, so the model can generate accurate change masks. Experiments on four public benchmarks (LEVIR-CD, WHU-CD, S2Looking, and SECOND) demonstrate SOTA performance, achieving class-average IoU scores of 67.2, 66.5, 24.5, and 27.1, respectively, surpassing all previous methods. The code is available at https://github.com/Erxucomeon/OmniOVCD.
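The fuse-then-decouple-then-compare flow SFID describes can be paraphrased as a toy sketch. All function names, array layouts, and the presence-thresholding step below are assumptions for illustration; this is not the authors' implementation, and the real SAM 3 heads produce richer outputs than these integer maps:

```python
import numpy as np

def fuse(semantic, instance, presence, thresh=0.5):
    # Keep only pixels whose class the presence head deems present
    # (hypothetical fusion of semantic/instance/presence outputs; 0 = background).
    keep = np.isin(semantic, [c for c, p in presence.items() if p >= thresh])
    return np.where(keep, semantic, 0), np.where(keep, instance, 0)

def split_instances(inst_map):
    # Decouple the fused map into one boolean mask per instance id.
    return {i: inst_map == i for i in np.unique(inst_map) if i != 0}

def change_mask(inst_t1, inst_t2, iou_thresh=0.5):
    # An instance counts as "changed" if no instance at the other date
    # overlaps it with IoU above the threshold.
    shape = next(iter({**inst_t1, **inst_t2}.values())).shape
    changed = np.zeros(shape, dtype=bool)
    for a_masks, b_masks in ((inst_t1, inst_t2), (inst_t2, inst_t1)):
        for m in a_masks.values():
            best = max(
                ((m & n).sum() / (m | n).sum() for n in b_masks.values()),
                default=0.0,
            )
            if best < iou_thresh:
                changed |= m
    return changed

# Toy example: a "building" (class 1) at date 1 is gone by date 2.
sem1 = np.array([[1, 1], [0, 0]]); inst1 = np.array([[1, 1], [0, 0]])
sem2 = np.zeros((2, 2), int);      inst2 = np.zeros((2, 2), int)
_, im1 = fuse(sem1, inst1, {1: 0.9})
_, im2 = fuse(sem2, inst2, {})
print(change_mask(split_instances(im1), split_instances(im2)))
```

Comparing instance masks rather than raw pixels is what lets this kind of pipeline keep instance-level consistency across the two dates.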