MemOVCD: Training-Free Open-Vocabulary Change Detection via Cross-Temporal Memory Reasoning and Global-Local Adaptive Rectification

arXiv cs.CV · April 30, 2026


Key Points

  • MemOVCD is a training-free, open-vocabulary change detection method for bi-temporal remote sensing that identifies semantic changes without relying on predefined categories.
  • The approach improves temporal coupling by reframing change detection as a two-frame tracking problem and using weighted bidirectional propagation to combine semantic evidence from both time directions.
  • To handle large temporal gaps, it introduces histogram-aligned transition frames that smooth abrupt appearance shifts and stabilize cross-temporal memory propagation.
  • For better spatial coherence on high-resolution images, it applies global-local adaptive rectification to fuse global and local predictions, reducing fragmentation while retaining fine details.
  • Experiments on five benchmarks show strong performance across two change-detection tasks, indicating improved generalization across diverse open-vocabulary settings.
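The histogram-aligned transition frames mentioned above can be illustrated with a minimal sketch. The paper's exact construction is not detailed here, so the helpers below (`match_histogram`, `transition_frames`) are hypothetical: they map the first image's intensity distribution onto the second's via rank matching, then blend toward the aligned image to produce intermediate frames that soften the appearance jump between timestamps.

```python
import numpy as np

def match_histogram(src, ref):
    # Rank-based histogram matching: give each src pixel the ref value
    # at the same quantile, so src adopts ref's intensity distribution.
    s = src.ravel()
    r_sorted = np.sort(ref.ravel())
    ranks = np.argsort(np.argsort(s))          # 0..n-1 rank of each pixel
    idx = (ranks * (r_sorted.size - 1) / max(s.size - 1, 1)).astype(int)
    return r_sorted[idx].reshape(src.shape)

def transition_frames(img_t1, img_t2, n=2):
    """Hypothetical sketch: synthesize n intermediate frames between two
    timestamps by blending img_t1 toward a histogram-matched copy of
    itself aligned to img_t2's statistics."""
    aligned = match_histogram(img_t1, img_t2)
    alphas = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior blend weights
    return [(1 - a) * img_t1 + a * aligned for a in alphas]
```

Feeding such frames between the two real images would give a memory-based propagator a gentler appearance trajectory than the raw bi-temporal pair.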

Abstract

Open-vocabulary change detection aims to identify semantic changes in bi-temporal remote sensing images without predefined categories. Recent methods combine foundation models such as SAM, DINO and CLIP, but typically process each timestamp independently or interact only at the final comparison stage. Such paradigms suffer from insufficient temporal coupling during semantic reasoning, which limits their ability to distinguish genuine semantic changes from non-semantic appearance discrepancies. In addition, patch-dominant inference on high-resolution images often weakens global semantic continuity and produces fragmented change regions. To address these issues, we propose MemOVCD, a training-free open-vocabulary change detection framework based on cross-temporal memory reasoning and global-local adaptive rectification. Specifically, we reformulate bi-temporal change detection as a two-frame tracking problem and introduce weighted bidirectional propagation to aggregate semantic evidence from both temporal directions. To stabilize memory propagation across large temporal gaps, we construct histogram-aligned transition frames to smooth abrupt appearance changes. Moreover, a global-local adaptive rectification strategy adaptively fuses local and global-view predictions, improving spatial consistency while preserving fine-grained details. Experiments on five benchmarks demonstrate that MemOVCD achieves favorable performance on two change detection tasks, validating its effectiveness and generalization under diverse open-vocabulary settings.
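The global-local adaptive rectification described in the abstract fuses a coarse whole-image prediction with tiled high-resolution predictions. The paper's fusion rule is not reproduced here; the sketch below assumes a simple confidence-weighted blend of two change-probability maps, where per-pixel confidence is measured as distance from the 0.5 decision boundary, so the more decisive map dominates at each location.

```python
import numpy as np

def adaptive_fuse(global_prob, local_prob, eps=1e-6):
    """Hypothetical global-local fusion: `global_prob` comes from
    downsampled whole-image inference (spatially consistent but coarse),
    `local_prob` from patch-wise inference (detailed but fragmented).
    Both are change-probability maps in [0, 1] of the same shape."""
    w_g = np.abs(global_prob - 0.5) + eps   # confidence of global view
    w_l = np.abs(local_prob - 0.5) + eps    # confidence of local view
    return (w_g * global_prob + w_l * local_prob) / (w_g + w_l)
```

Because the fused value is a convex combination of the two inputs, it inherits the global map's coherence where patches are ambiguous while keeping sharp local detail where the patch prediction is confident.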