Sentinel2Cap: A Human-Annotated Benchmark Dataset for Multimodal Remote Sensing Image Captioning

arXiv cs.CV / 5/6/2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • The paper introduces Sentinel2Cap, a human-annotated multimodal benchmark dataset for remote-sensing image captioning using Sentinel-1 SAR and Sentinel-2 multispectral patches at 10 m and 20 m resolutions.
  • Captions are manually created and validated to ensure both semantic accuracy and linguistic quality, targeting scenarios where multimodal satellite caption datasets are scarce, especially for SAR and medium-resolution sensors.
  • The authors evaluate the dataset with a zero-shot setup using Qwen3-VL-8B-Instruct across RGB, multispectral, and SAR pseudo-RGB representations to compare modality difficulty.
  • Results indicate that RGB achieves the best captioning performance, while SAR remains substantially more challenging for vision-language models.
  • The study finds that modality-specific contextual prompts improve captioning performance consistently across metrics, suggesting prompt engineering can help cross-modal remote sensing understanding (a zero-shot sketch follows this list).
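
The evaluation setup described above lends itself to a short illustration. The sketch below shows how a zero-shot caption could be requested from Qwen3-VL-8B-Instruct through Hugging Face `transformers`, with a modality-specific contextual prompt for a Sentinel-1 pseudo-RGB patch. The model ID, the file name, the prompt wording, and the generic `AutoModelForImageTextToText` loading path are assumptions for illustration, not the authors' released code.

```python
# Minimal zero-shot captioning sketch (assumed generic transformers
# image-text-to-text interface; the paper's actual pipeline is not shown here).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "Qwen/Qwen3-VL-8B-Instruct"  # assumed Hugging Face model id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical modality-specific contextual prompt for a SAR pseudo-RGB patch.
sar_prompt = (
    "This image is a Sentinel-1 SAR pseudo-RGB composite, not an optical photo: "
    "bright areas indicate strong radar backscatter (buildings, rough surfaces), "
    "dark areas indicate smooth surfaces such as calm water. "
    "Describe the land cover of the scene in one or two sentences."
)

image = Image.open("example_sar_pseudo_rgb.png")  # placeholder file name
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": sar_prompt},
    ]},
]

chat = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=chat, images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

# Keep only the newly generated tokens, i.e. the caption itself.
caption = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(caption)
```

Replacing the contextual preamble with a plain "Describe this image" instruction would correspond to the no-context baseline that the prompt-engineering finding is contrasted against.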

Abstract

Image captioning has become an important task in computer vision, enabling models to generate natural language descriptions of visual content. While several datasets exist for natural images and high-resolution optical remote sensing imagery, captioning datasets for multimodal satellite data remain limited, particularly for SAR imagery and medium-resolution sensors. We introduce Sentinel2Cap, a human-annotated multimodal captioning dataset containing Sentinel-1 SAR and Sentinel-2 multi-spectral image patches at 10 m and 20 m spatial resolution with diverse land cover compositions. Captions are created manually and carefully validated to ensure both semantic accuracy and linguistic quality. To evaluate Sentinel2Cap, we perform zero-shot captioning with the Qwen3-VL-8B-Instruct model across three image modalities: RGB, multi-spectral, and SAR pseudo-RGB representations. Results show that RGB images achieve the highest captioning performance, while SAR images remain more challenging for vision-language models. Providing modality-specific contextual prompts consistently improves performance across all metrics. These findings highlight both the challenges of multimodal remote sensing image captioning and the potential value of human-annotated datasets for advancing research in cross-modal scene understanding. All material is publicly available.
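
The abstract refers to SAR pseudo-RGB representations without detailing how they are built. A common convention, shown below as an assumption rather than the dataset's documented recipe, maps Sentinel-1 VV backscatter to red, VH to green, and the VV−VH difference in dB (a band ratio in linear units) to blue, each rescaled to [0, 1].

```python
import numpy as np

def sar_pseudo_rgb(vv_db: np.ndarray, vh_db: np.ndarray) -> np.ndarray:
    """Build a pseudo-RGB composite from Sentinel-1 VV/VH backscatter in dB.

    Assumed convention (common practice, not necessarily the paper's):
    R = VV, G = VH, B = VV - VH, each clipped to a fixed dB range and
    rescaled to [0, 1].
    """
    def rescale(band: np.ndarray, lo: float, hi: float) -> np.ndarray:
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    r = rescale(vv_db, -25.0, 0.0)          # typical VV dynamic range over land
    g = rescale(vh_db, -30.0, -5.0)         # VH is usually several dB lower
    b = rescale(vv_db - vh_db, 0.0, 15.0)   # difference separates volume vs. surface scattering
    return np.stack([r, g, b], axis=-1)     # H x W x 3 array in [0, 1]
```

A composite produced this way can be written out as an 8-bit image and passed to the vision-language model exactly like an optical RGB patch.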