Concept-based explanations of Segmentation and Detection models in Natural Disaster Management

arXiv cs.CV / 3/25/2026

Key Points

  • The paper introduces an explainability framework for deep-learning flood segmentation and car detection models deployed on embedded drone platforms, aiming to improve trust in natural-disaster decision-making.
  • It extends Layer-wise Relevance Propagation (LRP) to cover sigmoid-gated element-wise fusion layers in PIDNet, enabling relevance to flow through the full computation graph back to the input image.
  • It applies Prototypical Concept-based Explanations (PCX) to produce local and global concept-level explanations that identify which learned features drive predictions for specific disaster semantic classes.
  • Experiments on a public flood dataset indicate the approach yields reliable, interpretable explanations while preserving near real-time inference suitable for resource-constrained UAV deployment.

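PIDNet's attention-guided fusion modules combine two branch feature maps through a sigmoid gate, roughly out = σ(g)·x + (1−σ(g))·y. The sketch below shows one plausible way LRP relevance could be redistributed through such a layer, treating the gate as a fixed weight and splitting relevance in proportion to each branch's contribution. The proportional rule and function name are our assumptions for illustration; the paper's exact redistribution strategy may differ.

```python
import numpy as np

def lrp_sigmoid_gated_fusion(x, y, gate_logits, relevance_out, eps=1e-6):
    """Illustrative LRP redistribution for a sigmoid-gated fusion layer
    out = s * x + (1 - s) * y, with s = sigmoid(gate_logits).
    The gate is treated as a constant weight (not a relevance target);
    incoming relevance is split proportionally to branch contributions."""
    s = 1.0 / (1.0 + np.exp(-gate_logits))
    zx = s * x                      # contribution of branch x
    zy = (1.0 - s) * y              # contribution of branch y
    z = zx + zy
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon-stabilized denominator
    r_x = (zx / z) * relevance_out
    r_y = (zy / z) * relevance_out
    return r_x, r_y
```

The epsilon term is the standard LRP-ε stabilizer; it keeps the split well defined when the two branch contributions nearly cancel, at the cost of a small leak of total relevance.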
Abstract

Deep learning models for flood and wildfire segmentation and object detection enable precise, real-time disaster localization when deployed on embedded drone platforms. However, in natural disaster management, the lack of transparency in their decision-making process hinders human trust required for emergency response. To address this, we present an explainability framework for understanding flood segmentation and car detection predictions on the widely used PIDNet and YOLO architectures. More specifically, we introduce a novel redistribution strategy that extends Layer-wise Relevance Propagation (LRP) explanations for sigmoid-gated element-wise fusion layers. This extension allows LRP relevances to flow through the fusion modules of PIDNet, covering the entire computation graph back to the input image. Furthermore, we apply Prototypical Concept-based Explanations (PCX) to provide both local and global explanations at the concept level, revealing which learned features drive the segmentation and detection of specific disaster semantic classes. Experiments on a publicly available flood dataset show that our framework provides reliable and interpretable explanations while maintaining near real-time inference capabilities, rendering it suitable for deployment on resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs).
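
To make the PCX idea above concrete, the sketch below summarizes per-sample concept-relevance vectors into a few prototypes (global explanation) and explains an individual prediction by its nearest prototype and most relevant concepts (local explanation). PCX fits Gaussian mixture models per class; as an assumption for this sketch, a plain k-means with farthest-point initialization stands in for the mixture fit, and all function names are hypothetical.

```python
import numpy as np

def pcx_prototypes(concept_relevances, n_prototypes=2, iters=20):
    """Global step (simplified): cluster per-sample concept-relevance
    vectors into prototypes. PCX uses per-class Gaussian mixtures;
    here a k-means with farthest-point initialization approximates it."""
    X = np.asarray(concept_relevances, dtype=float)
    # Farthest-point init: spread the starting centers across the data.
    centers = [X[0]]
    for _ in range(n_prototypes - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        for k in range(n_prototypes):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

def explain_sample(relevance_vec, centers, top_k=2):
    """Local step: nearest prototype plus the top-k concepts with the
    highest relevance for this particular sample."""
    proto = int(np.linalg.norm(centers - relevance_vec, axis=1).argmin())
    top_concepts = np.argsort(relevance_vec)[::-1][:top_k]
    return proto, top_concepts
```

In use, each prototype can be inspected as a "typical" relevance pattern for a disaster class (e.g. flooded-area predictions driven mostly by a water-texture concept), while the local step flags samples whose concept usage deviates from every prototype.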