SatBLIP: Context Understanding and Feature Identification from Satellite Imagery with Vision-Language Learning

arXiv cs.CV / 4/17/2026


Key Points

  • SatBLIP is a satellite-specific vision-language learning framework designed to improve rural risk context understanding beyond coarse vulnerability indices.
  • The method predicts county-level Social Vulnerability Index (SVI) by combining contrastive image-text alignment with bootstrapped, satellite-semantic-aware captioning.
  • It uses GPT-4o to generate structured descriptions of satellite tiles (e.g., roof type/condition, house and yard attributes, greenery, and road context) and then fine-tunes a satellite-adapted BLIP model to caption unseen imagery.
  • The generated captions are encoded with CLIP and fused with LLM-derived embeddings via an attention mechanism; tile-level predictions are then spatially aggregated into a county-level SVI estimate.
  • Using SHAP, SatBLIP highlights the most influential attributes (such as roof details, street width, vegetation, and vehicles/open space), providing interpretable mappings of rural risk drivers.
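The structured tile descriptions mentioned above can be pictured as a small attribute schema that is flattened into training captions for the satellite-adapted BLIP model. A minimal sketch follows; the field names, example values, and the `to_caption` helper are illustrative assumptions, not the authors' actual GPT-4o prompt or caption format:

```python
# Hypothetical schema mirroring the attribute categories the article lists
# (roof type/condition, house and yard attributes, greenery, road context).
# All values below are made-up examples for one satellite tile.
tile_description = {
    "roof_type": "gable",
    "roof_condition": "weathered",
    "house_size": "small",
    "yard": "unfenced dirt yard",
    "greenery": "sparse shrubs",
    "road_context": "narrow unpaved road",
}

def to_caption(desc: dict) -> str:
    """Flatten structured attributes into a single caption string,
    e.g. for fine-tuning a captioning model on (image, caption) pairs."""
    parts = [f"{k.replace('_', ' ')}: {v}" for k, v in desc.items()]
    return "A satellite tile showing " + "; ".join(parts) + "."

caption = to_caption(tile_description)
```

Keeping the attributes structured (rather than free-form prose) is what makes the later SHAP-style attribution over named features straightforward.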

Abstract

Rural environmental risks are shaped by place-based conditions (e.g., housing quality, road access, land-surface patterns), yet standard vulnerability indices are coarse and provide limited insight into risk contexts. We propose SatBLIP, a satellite-specific vision-language framework for rural context understanding and feature identification that predicts county-level Social Vulnerability Index (SVI). SatBLIP addresses limitations of prior remote sensing pipelines (handcrafted features, manual virtual audits, and natural-image-trained VLMs) by coupling contrastive image-text alignment with bootstrapped captioning tailored to satellite semantics. We use GPT-4o to generate structured descriptions of satellite tiles (roof type/condition, house size, yard attributes, greenery, and road context), then fine-tune a satellite-adapted BLIP model to generate captions for unseen images. Captions are encoded with CLIP and fused with LLM-derived embeddings via attention for SVI estimation under spatial aggregation. Using SHAP, we identify salient attributes (e.g., roof form/condition, street width, vegetation, cars/open space) that consistently drive robust predictions, enabling interpretable mapping of rural risk environments.
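The fusion-and-aggregation step described in the abstract can be sketched in a few lines of numpy. This is a toy stand-in, not the paper's architecture: the CLIP caption embeddings, LLM-derived embeddings, and regression weights are random placeholders, and single-head cross-attention with mean pooling over tiles is an assumed simplification of the attention fusion and spatial aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16      # shared embedding dimension (illustrative)
TILES = 5   # satellite tiles sampled within one county

# Random stand-ins for the real encoders: CLIP embeddings of the generated
# captions and LLM-derived embeddings for the same tiles.
caption_emb = rng.standard_normal((TILES, D))
llm_emb = rng.standard_normal((TILES, D))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(q, kv):
    """Single-head cross-attention: caption embeddings attend to LLM embeddings."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])  # (TILES, TILES) similarity
    weights = softmax(scores, axis=-1)        # rows sum to 1
    return weights @ kv                       # (TILES, D) fused features

fused = attention_fuse(caption_emb, llm_emb) + caption_emb  # residual connection

# Linear regression head (weights would be learned; random here).
w = rng.standard_normal(D)
tile_svi = fused @ w                # per-tile SVI estimate, shape (TILES,)

# Spatial aggregation: pool tile-level estimates into one county-level SVI.
county_svi = tile_svi.mean()
```

In the actual system the fused features feed a trained estimation head, and SHAP is applied over the named caption attributes to surface which ones (roof condition, street width, vegetation, and so on) drive each county's prediction.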