Evaluating Remote Sensing Image Captions Beyond Metric Biases

arXiv cs.CV / 28 Apr 2026

Key Points

  • The paper argues that remote sensing image captioning (RSIC) evaluation is biased by manually curated reference texts, which can mask a model’s true descriptive ability and exaggerate the need for task-specific fine-tuning.
  • It proposes ReconScore, a reference-free metric that scores a caption by how well the original visual content can be reconstructed from the generated text alone, removing the bias toward human annotation styles (see the sketch after this list).
  • Using ReconScore, the authors find that strong multimodal LLMs (MLLMs) without fine-tuning can outperform their fine-tuned counterparts on authentic zero-shot RSIC tasks, suggesting the perceived performance gap stems from flawed evaluation rather than capability limits.
  • Building on this, the paper introduces RemoteDescriber, a fully training-free generation method that uses ReconScore as an iterative self-correction signal to improve the semantic precision of MLLM outputs.
  • Experiments on three datasets show RemoteDescriber reaches state-of-the-art results, while the paper also assesses ReconScore’s reliability and critiques traditional captioning metrics.
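
The summary does not spell out how ReconScore is computed. Below is a minimal sketch of one way a reference-free, reconstruction-style score could be proxied: embed the image and the generated caption with an off-the-shelf CLIP model and take their cosine similarity. The model checkpoint and the similarity formulation are illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical proxy for a reference-free "reconstruction" score:
# how well does the caption's embedding align with the image's?
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # illustrative choice
model = CLIPModel.from_pretrained(MODEL_ID).eval()
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def recon_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between image and caption embeddings, in [-1, 1]."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum(dim=-1))
```

In use, `recon_score(img, caption_a) > recon_score(img, caption_b)` would rank caption A as the more faithful description, with no reference text involved.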

Abstract

The core objective of image captioning is lossless semantic compression from the visual signal into the textual modality. However, evaluation that relies on manually curated reference texts essentially forces models to mimic specific human annotation styles, masking the true descriptive capabilities of advanced foundation models. This systemic misalignment prompts a critical question: is task-specific fine-tuning truly necessary for remote sensing image captioning (RSIC), or is the perceived performance gap merely an artifact of flawed evaluation criteria? To investigate this discrepancy, we propose ReconScore, a novel reference-free evaluation metric. Rather than computing textual similarity, ReconScore assesses caption quality by how well the original visual elements can be reconstructed solely from the generated text, effectively neutralizing human annotation biases. Applying this metric, we uncover a counterintuitive finding: inherently powerful, unfine-tuned multimodal large language models (MLLMs) surpass their fine-tuned counterparts on authentic zero-shot RSIC tasks. Driven by this discovery, we introduce RemoteDescriber, a completely training-free generation method that employs ReconScore as a self-correction mechanism, iteratively refining the semantic precision of MLLM outputs without any fine-tuning overhead. Comprehensive experiments demonstrate that RemoteDescriber achieves state-of-the-art performance on three datasets. Furthermore, we validate ReconScore's reliability and analyze the flaws of traditional metrics. Our code is available at https://github.com/hhu-czy/RemoteDescriber.
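
The abstract describes RemoteDescriber only at a high level: ReconScore drives iterative self-correction of MLLM outputs, with no training involved. The sketch below shows what such a loop could look like; `mllm_generate` is a hypothetical wrapper around any off-the-shelf multimodal LLM, and the refinement prompt and iteration budget are assumptions for illustration rather than the paper's method.

```python
# Sketch of a training-free, score-guided caption refinement loop.
# `mllm_generate(image, prompt) -> str` is a hypothetical MLLM wrapper;
# `recon_score(image, caption) -> float` stands in for ReconScore.

def remote_describer(image, mllm_generate, recon_score, max_iters=3):
    # Initial zero-shot caption from the unfine-tuned MLLM.
    best = mllm_generate(image, "Describe this remote sensing image in detail.")
    best_score = recon_score(image, best)
    for _ in range(max_iters):
        # Ask the MLLM to revise its own output so the image could be
        # reconstructed from the text alone.
        prompt = (
            "Your previous description was:\n"
            f"{best}\n\n"
            "Revise it so the image could be reconstructed from the text "
            "alone: add missing objects, fix wrong ones, and drop details "
            "that are not visible."
        )
        candidate = mllm_generate(image, prompt)
        score = recon_score(image, candidate)
        if score > best_score:  # greedy keep-if-better selection
            best, best_score = candidate, score
    return best, best_score
```

The greedy keep-if-better rule guarantees the returned caption never scores below the initial zero-shot output under the chosen metric.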