OptiSAR-Net++: A Large-Scale Benchmark and Transformer-Free Framework for Cross-Domain Remote Sensing Visual Grounding

arXiv cs.CV / 3/27/2026


Key Points

  • The paper introduces Cross-Domain Remote Sensing Visual Grounding (CD-RSVG): localizing targets described in natural language across sensor domains (e.g., optical vs. SAR), a setting that prior single-domain methods largely could not handle.
  • It builds what it claims is the first large-scale benchmark dataset for this setting (OptSAR-RSVG) and evaluates on OptSAR-RSVG and DIOR-RSVG.
  • OptiSAR-Net++ is proposed as a transformer-free framework, using a patch-level Low-Rank Adaptation Mixture-of-Experts (PL-MoE) to efficiently decouple and model cross-domain features (a sketch of this routing idea follows the list).
  • To avoid the computational cost of transformer decoding, the method shifts to a CLIP-style contrastive cross-modal matching approach with dynamic adversarial negative sampling.
  • Additional components, a text-guided dual-gate fusion module and a region-aware auxiliary head, are added to improve semantic-visual alignment and spatial modeling; the paper reports state-of-the-art localization accuracy and efficiency, with code and data planned for public release.
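The summary does not spell out PL-MoE's internals, but a patch-level LoRA mixture-of-experts generally routes each visual patch token to a small set of low-rank adapters, so that domain-specific (optical vs. SAR) features can be handled by different experts. The following is a minimal PyTorch sketch of that idea, not the paper's implementation; the class names and hyperparameters (num_experts, rank, top_k) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> B(A(x)) with rank r << dim."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # A: dim -> rank
        self.up = nn.Linear(rank, dim, bias=False)    # B: rank -> dim
        nn.init.zeros_(self.up.weight)                # residual starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

class PatchLevelLoRAMoE(nn.Module):
    """Routes each patch token to its top-k LoRA experts, letting
    optical- and SAR-specific features flow through different adapters."""
    def __init__(self, dim: int, num_experts: int = 4, rank: int = 8, top_k: int = 1):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([LoRAExpert(dim, rank) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim)
        logits = self.router(tokens)                   # (B, N, E) routing scores
        weights, idx = logits.topk(self.top_k, dim=-1) # per-patch expert choice
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(tokens)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                # patches routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(tokens[mask])
        return tokens + out                            # residual adapter update
```

Because the adapters are zero-initialized and applied residually, the module starts as an identity map and only gradually specializes experts per domain, which is what makes LoRA-style experts cheap to train on top of a shared backbone.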

Abstract

Remote sensing visual grounding (RSVG) aims to localize specific targets in remote sensing images using natural language expressions. However, existing methods are restricted to single-sensor domains, i.e., either optical or synthetic aperture radar (SAR), limiting their real-world applicability. In this paper, we introduce the Cross-Domain RSVG (CD-RSVG) task and construct OptSAR-RSVG, the first large-scale benchmark dataset for this setting. To tackle the challenges of cross-domain feature modeling, computational inefficiency, and fine-grained semantic discrimination, we propose OptiSAR-Net++. Our framework features a patch-level Low-Rank Adaptation Mixture of Experts (PL-MoE) for efficient cross-domain feature decoupling. To mitigate the substantial computational overhead of Transformer decoding frameworks, we adopt a CLIP-based contrastive paradigm and further incorporate dynamic adversarial negative sampling, thereby transforming generative regression into an efficient cross-modal matching process. Additionally, a text-guided dual-gate fusion module (TGDF-SSA) and a region-aware auxiliary head are introduced to enhance semantic-visual alignment and spatial modeling. Extensive experiments demonstrate that OptiSAR-Net++ achieves SOTA performance on both OptSAR-RSVG and DIOR-RSVG benchmarks, offering significant advantages in localization accuracy and efficiency. Our code and dataset will be made publicly available.
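The abstract describes replacing generative (regression-style) decoding with contrastive cross-modal matching plus dynamic adversarial negative sampling. Below is a hedged sketch of what such an objective could look like, assuming pairwise-aligned region and text embeddings; the hardest-in-batch margin term is a simple stand-in for the paper's (unspecified) adversarial sampling, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_grounding_loss(region_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07,
                               num_hard_neg: int = 4) -> torch.Tensor:
    """CLIP-style matching objective with hard-negative emphasis.
    region_emb, text_emb: (B, D) embeddings where row i of each is a matched pair."""
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sim = text_emb @ region_emb.t() / temperature      # (B, B) similarity matrix
    B = sim.size(0)
    labels = torch.arange(B, device=sim.device)

    # Standard symmetric InfoNCE over in-batch pairs.
    loss = 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))

    # Hard-negative term: for each text, penalize the most similar
    # non-matching regions relative to its true match.
    diag_mask = torch.eye(B, dtype=torch.bool, device=sim.device)
    neg_sim = sim.masked_fill(diag_mask, float('-inf'))
    hardest = neg_sim.topk(min(num_hard_neg, B - 1), dim=-1).values   # (B, k)
    loss = loss + F.softplus(hardest - sim.diag().unsqueeze(-1)).mean()
    return loss
```

Under this formulation, inference reduces to scoring candidate regions against the query embedding and taking the argmax, which is what lets the method sidestep an expensive transformer decoding pass, consistent with the abstract's claim of turning generative regression into an efficient matching process.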