Cross-Modal Rationale Transfer for Explainable Humanitarian Classification on Social Media

arXiv cs.CL / 3/20/2026

Key Points

  • We propose an interpretable-by-design multimodal classification framework that jointly learns text and image representations with a visual-language transformer and extracts text rationales to explain predictions.
  • The method introduces cross-modal rationale transfer, learning image rationales by mapping from text rationales to reduce annotation effort.
  • On CrisisMMD, it boosts Macro-F1 by 2-35% and achieves 80% accuracy in zero-shot mode, while producing text rationales and image patches as explanations.
  • Human evaluation shows roughly 12% better retrieval of image rationale patches, which helps in identifying humanitarian categories.

Abstract

Advances in social media data dissemination enable the provision of real-time information during a crisis. This information falls into different classes, such as infrastructure damage or persons missing or stranded in the affected zone. Existing methods have attempted to classify text and images into various humanitarian categories, but their decision-making process remains largely opaque, which hinders their deployment in real-life applications. Recent work has sought to improve transparency by extracting textual rationales from tweets to explain predicted classes. However, such explainable classification methods have mostly focused on text rather than crisis-related images. In this paper, we propose an interpretable-by-design multimodal classification framework. Our method first learns the joint representation of text and image using a visual-language transformer model and extracts text rationales. Next, it extracts the image rationales via a mapping from the text rationales. Our approach demonstrates how to learn rationales in one modality from another through cross-modal rationale transfer, which saves annotation effort. Finally, tweets are classified based on the extracted rationales. Experiments are conducted on the CrisisMMD benchmark dataset, and results show that our proposed method boosts the classification Macro-F1 by 2-35% while extracting accurate text tokens and image patches as rationales. Human evaluation also supports the claim that our proposed method retrieves better image rationale patches (by 12%) that help to identify humanitarian classes. Our method adapts well to new, unseen datasets in zero-shot mode, achieving an accuracy of 80%.
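The core transfer step, scoring image patches by their affinity to already-extracted text rationale tokens, can be sketched with toy data. Everything below is an illustrative assumption, not the paper's implementation: the embeddings are random stand-ins for visual-language transformer outputs, the rationale mask is hard-coded, and the text-to-patch mapping is a simple scaled-dot-product softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings in a shared 16-dim space (stand-ins for the joint
# text/image representations a visual-language transformer would give).
d = 16
text_emb = rng.normal(size=(6, d))    # 6 tweet tokens
patch_emb = rng.normal(size=(9, d))   # 9 image patches (3x3 grid)

# Step 1: assume a text rationale extractor marked tokens 1 and 4
# as the rationale (hypothetical output, hard-coded here).
text_rationale = np.array([0, 1, 0, 0, 1, 0], dtype=float)

# Step 2: cross-modal transfer -- each token attends over patches
# via scaled dot-product similarity, normalized with a softmax.
sim = (text_emb @ patch_emb.T) / np.sqrt(d)        # (tokens, patches)
attn = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)

# Weight patch attention by the text rationale mask, so only
# rationale tokens contribute to patch scores.
patch_scores = text_rationale @ attn               # (patches,)

# Step 3: the top-k scoring patches become the image rationale.
k = 3
image_rationale = np.argsort(patch_scores)[::-1][:k]
print(image_rationale.tolist())
```

In this sketch the image rationale needs no patch-level annotation: it is derived entirely from the text rationale plus the learned joint embedding space, which is the annotation-saving property the paper attributes to cross-modal rationale transfer.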