From Pixels to Semantics: A Multi-Stage AI Framework for Structural Damage Detection in Satellite Imagery

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces a multi-stage AI framework for post-disaster building damage assessment from satellite imagery, combining super-resolution, object detection, and vision-language semantic reasoning.
  • It uses a Video Restoration Transformer (VRT) to upscale satellite images from 1024×1024 to 4096×4096 to reveal structural details more clearly.
  • Buildings are localized with a YOLOv11-based detector on pre-disaster imagery, then cropped regions are evaluated by vision-language models (VLMs) to classify damage into four severity levels.
  • To mitigate evaluation and bias challenges without ground-truth captions, the approach applies CLIPScore for reference-free semantic alignment and a “VLM-as-a-Jury” multi-model strategy for more robust, safety-critical decisions.
  • Experiments on event subsets of the xBD dataset (e.g., Moore Tornado, Hurricane Matthew) indicate improved semantic interpretation of damaged buildings, and the system can generate recovery-oriented recommendations for first responders.
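The detection-then-reasoning step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `classify_crop` is a hypothetical stand-in for the VLM call, and the 4× box scaling simply reflects the stated 1024×1024 → 4096×4096 upscale.

```python
# Sketch of the crop-and-classify stage with a hypothetical detector/VLM stub.
# Boxes detected on the 1024x1024 pre-disaster image are scaled 4x so that
# crops can be taken from the 4096x4096 super-resolved imagery.

SCALE = 4096 // 1024  # upscaling factor from the super-resolution stage

SEVERITY_LEVELS = ["no-damage", "minor-damage", "major-damage", "destroyed"]

def scale_box(box, scale=SCALE):
    """Map an (x1, y1, x2, y2) box from original to upscaled coordinates."""
    x1, y1, x2, y2 = box
    return (x1 * scale, y1 * scale, x2 * scale, y2 * scale)

def assess_buildings(pre_boxes, classify_crop):
    """For each pre-disaster detection, classify the corresponding crop.

    `classify_crop` stands in for a VLM call that returns one of the four
    severity labels for the cropped building region.
    """
    results = []
    for box in pre_boxes:
        label = classify_crop(scale_box(box))
        assert label in SEVERITY_LEVELS, f"unexpected label: {label}"
        results.append((box, label))
    return results
```

The key design point is that localization happens once, on pre-disaster imagery (where buildings are intact and easier to detect), and only the semantic severity judgment uses the post-disaster view.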

Abstract

Rapid and accurate structural damage assessment following natural disasters is critical for effective emergency response and recovery. However, remote sensing imagery often suffers from low spatial resolution, contextual ambiguity, and limited semantic interpretability, reducing the reliability of traditional detection pipelines. In this work, we propose a novel hybrid framework that integrates AI-based super-resolution, deep learning object detection, and Vision-Language Models (VLMs) for comprehensive post-disaster building damage assessment. First, we enhance pre- and post-disaster satellite imagery using a Video Restoration Transformer (VRT) to upscale images from 1024×1024 to 4096×4096 resolution, improving structural detail visibility. Next, a YOLOv11-based detector localizes buildings in pre-disaster imagery, and cropped building regions are analyzed using VLMs to semantically assess structural damage across four severity levels. To ensure robust evaluation in the absence of ground-truth captions, we employ CLIPScore for reference-free semantic alignment and introduce a multi-model VLM-as-a-Jury strategy to reduce individual model bias in safety-critical decision making. Experiments on subsets of the xBD dataset, including the Moore Tornado and Hurricane Matthew events, demonstrate that the proposed framework enhances the semantic interpretation of damaged buildings. In addition, our framework provides helpful recommendations to first responders for recovery based on damage analysis.