Concept-based explanations of Segmentation and Detection models in Natural Disaster Management
arXiv cs.CV / 3/25/2026
Key Points
- The paper introduces an explainability framework for deep learning flood segmentation and car object detection on embedded drone platforms, aiming to improve trust in natural-disaster decision-making.
- It extends Layer-wise Relevance Propagation (LRP) to cover the sigmoid-gated element-wise fusion layers in PIDNet, enabling relevance to flow through the full computation graph back to the input image.
- It applies Prototypical Concept-based Explanations (PCX) to produce local and global concept-level explanations that identify which learned features drive predictions for specific disaster semantic classes.
- Experiments on a public flood dataset indicate the approach yields reliable, interpretable explanations while preserving near real-time inference suitable for resource-constrained UAV deployment.
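The LRP extension described above must split relevance at each sigmoid-gated fusion point in PIDNet, where two branches are blended as `out = sigmoid(b) * p + (1 - sigmoid(b)) * i`. A common way to do this is to distribute the incoming relevance in proportion to each branch's gated contribution (an epsilon-LRP-style rule). The sketch below illustrates that idea; the function name, the epsilon stabilization, and the exact fusion form are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def lrp_sigmoid_gated_fusion(p, i, b, relevance, eps=1e-9):
    """Propagate relevance back through a sigmoid-gated element-wise
    fusion: out = sigmoid(b) * p + (1 - sigmoid(b)) * i.

    Incoming relevance is split in proportion to each branch's gated
    contribution to the fused activation (epsilon-LRP-style rule).
    This is an illustrative sketch, not the paper's implementation.
    """
    g = 1.0 / (1.0 + np.exp(-b))          # sigmoid gate values
    zp = g * p                             # gated contribution of branch p
    zi = (1.0 - g) * i                     # gated contribution of branch i
    z = zp + zi + eps * np.sign(zp + zi)   # stabilized denominator
    r_p = relevance * zp / z               # relevance routed to branch p
    r_i = relevance * zi / z               # relevance routed to branch i
    return r_p, r_i
```

Because the rule divides each contribution by their sum, the two returned relevance maps add back up to the incoming relevance (up to the epsilon stabilizer), preserving the conservation property that LRP relies on.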