Low-Data Supervised Adaptation Outperforms Prompting for Cloud Segmentation Under Domain Shift

arXiv cs.CV / 4/13/2026


Key Points

  • The paper tests the common assumption behind prompting vision-language models for remote sensing: that domain-specific language can steer frozen representations toward cloud segmentation under strong domain shift.
  • Across 60 CLIPSeg prompt variants on the CloudSEN12+ benchmark, every prompting configuration underperforms the zero-shot baseline (0.255 mIoU), with engineered prompts scoring as low as 0.07 mIoU (a minimal evaluation sketch follows this list).
  • Supervised fine-tuning with extremely little labeled data (0.1%, roughly 8 images) improves overall performance beyond zero-shot, and 5–10% labeled data recovers about 85% of the best achievable mIoU.
  • Full fine-tuning beats low-rank adaptation by 0.03–0.09 mIoU, with the largest improvements for spectrally ambiguous cloud classes.
  • The authors observe a “supervision dip” at 0.5–1% labeled data for ambiguous classes that can be hidden in aggregate mIoU, emphasizing the need for per-class monitoring during adaptation.

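As a rough illustration of what the prompt evaluation in the key points involves, the sketch below scores a few text prompts with an off-the-shelf CLIPSeg checkpoint and a binary IoU check. It is a minimal sketch, not the paper's protocol: the `CIDAS/clipseg-rd64-refined` checkpoint, the example prompts, the 0.5 threshold, and the random stand-in image and mask are illustrative assumptions, and neither the 60 prompt variants nor the CloudSEN12+ preprocessing are reproduced here.

```python
# Minimal sketch (not the paper's exact protocol): score a few CLIPSeg prompt
# variants on one image chip with a binary IoU. Checkpoint name, prompts,
# threshold, and the random stand-in data are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
model.eval()

def binary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union for two boolean masks."""
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, target).sum()) / float(union)

# Prompt variants in the spirit of the paper's simple labels, domain
# terminology, and appearance descriptors (the actual 60 variants differ).
prompts = ["clouds", "cloud cover in a satellite image", "bright white opaque regions"]

# Stand-ins for one CloudSEN12+ RGB chip and its binary cloud mask; replace with real data.
image = Image.fromarray(np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8))
gt_mask = np.zeros((512, 512), dtype=bool)

inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (num_prompts, 352, 352) low-resolution logits
probs = torch.sigmoid(logits)

for prompt, prob in zip(prompts, probs):
    # Upsample each probability map to the label resolution before thresholding.
    up = torch.nn.functional.interpolate(
        prob[None, None], size=gt_mask.shape, mode="bilinear", align_corners=False
    )[0, 0]
    pred = (up > 0.5).numpy()
    print(f"{prompt!r}: IoU = {binary_iou(pred, gt_mask):.3f}")
```
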
Abstract

Adapting vision-language models to remote sensing imagery presents a fundamental challenge: both the visual and linguistic distributions of satellite data lie far outside natural image pretraining corpora. Despite this, prompting remains the dominant deployment paradigm, driven by the assumption that domain-specific language can guide frozen model representations toward specialized tasks. We test this assumption directly on a domain where the mismatch is especially pronounced: cloud segmentation for satellite imagery. Using CLIPSeg on the CloudSEN12+ cloud segmentation benchmark, we evaluate 60 prompt variants spanning simple labels, domain terminology, appearance descriptors, and contextual cues, finding that every variant underperforms the zero-shot baseline (0.255 mIoU), with engineered prompts scoring as low as 0.07 mIoU. No amount of linguistic refinement bridges the gap between CLIP's natural image representations and satellite spectral imagery. In contrast, supervised fine-tuning with just 0.1% labeled data (~8 images) surpasses zero-shot performance overall, and 5–10% labeled data recovers ~85% of the maximum achievable mIoU. Full fine-tuning consistently outperforms low-rank adaptation by 0.03–0.09 mIoU, with the largest gaps for spectrally ambiguous classes. At 0.5–1% labeled data, fine-tuning temporarily degrades performance on these classes before recovering, a supervision dip that aggregate mIoU can mask. For practitioners adapting vision-language models to specialized imagery, our results deliver a clear message: labeled data is not the expensive alternative to prompting; it is the worthwhile path.
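
The supervision-dip finding suggests logging per-class IoU alongside the aggregate score during adaptation. Below is a minimal, framework-agnostic sketch of that bookkeeping; the four-class label set (clear, thick cloud, thin cloud, cloud shadow) is an assumption about CloudSEN12+'s annotation scheme, and the random arrays are placeholders for real prediction and label maps.

```python
# Hedged sketch: track per-class IoU alongside aggregate mIoU so a dip on one
# ambiguous class is not hidden by the mean. Class names are an assumed
# CloudSEN12+ label set; the arrays below are placeholders for real maps.
import numpy as np

CLASSES = ["clear", "thick cloud", "thin cloud", "cloud shadow"]

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    """IoU per class from integer-labelled prediction and ground-truth maps."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# Placeholder maps; in practice these come from the adapted model and CloudSEN12+ labels.
pred = np.random.randint(0, len(CLASSES), (512, 512))
target = np.random.randint(0, len(CLASSES), (512, 512))

ious = per_class_iou(pred, target, len(CLASSES))
print(f"mIoU: {np.nanmean(ious):.3f}")
for name, value in zip(CLASSES, ious):
    print(f"  {name:>12s}: {value:.3f}")
```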