KG-CMI: Knowledge graph enhanced cross-Mamba interaction for medical visual question answering

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces KG-CMI, a medical visual question answering framework designed to better integrate domain-specific medical knowledge rather than relying only on generic multimodal features.
  • KG-CMI combines cross-modal feature alignment, a knowledge graph embedding module, cross-modal interaction representations, and a free-form answer–enhanced multi-task learning component to handle lesion-to-diagnosis associations and open-ended answers.
  • By using a knowledge graph to connect lesion features with disease knowledge, the approach aims to improve semantic understanding beyond classification over predefined answer sets.
  • Experimental results report that KG-CMI outperforms state-of-the-art methods on VQA-RAD, SLAKE, and OVQA, and the authors include interpretability experiments to support the framework’s effectiveness.
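The module pipeline sketched in the bullets above (align image and text features, retrieve knowledge-graph embeddings, fuse them via cross-modal interaction) can be illustrated with a toy example. Everything below is hypothetical: the paper's actual modules use Mamba-based interaction and learned KG embeddings, whereas this sketch only shows one plausible way to retrieve knowledge-graph entity vectors by similarity and gate them into a fused image–text representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_kg_embeddings(query, entity_emb, top_k=2):
    # Cosine-similarity retrieval of KG entities relevant to the query
    # (a stand-in for the paper's learned KGE module).
    sims = l2_normalize(entity_emb) @ l2_normalize(query)
    idx = np.argsort(sims)[::-1][:top_k]
    return entity_emb[idx].mean(axis=0)

def gated_cross_fusion(img_feat, txt_feat, kg_feat):
    # A scalar sigmoid gate decides how strongly the retrieved
    # knowledge embedding is injected into the fused representation.
    gate = 1.0 / (1.0 + np.exp(-(img_feat * txt_feat).sum()))
    return l2_normalize(img_feat + txt_feat + gate * kg_feat)

d = 8
img = rng.normal(size=d)        # toy image (lesion) feature
txt = rng.normal(size=d)        # toy question feature
entities = rng.normal(size=(5, d))  # toy KG entity embeddings

kg = retrieve_kg_embeddings(img + txt, entities)
fused = gated_cross_fusion(img, txt, kg)
print(fused.shape)  # (8,)
```

The gate here is the simplest possible interaction mechanism; the point is only that knowledge injection is conditioned on the image–text pair rather than applied uniformly.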

Abstract

Medical visual question answering (Med-VQA) is a crucial multimodal task in clinical decision support and telemedicine. Recent methods fail to fully leverage domain-specific medical knowledge, making it difficult to accurately associate lesion features in medical images with key diagnostic criteria. Additionally, classification-based approaches typically rely on predefined answer sets. Treating Med-VQA as a simple classification problem limits its ability to adapt to the diversity of free-form answers and may overlook detailed semantic information in those answers. To address these challenges, we propose a knowledge graph enhanced cross-Mamba interaction (KG-CMI) framework, which consists of a fine-grained cross-modal feature alignment (FCFA) module, a knowledge graph embedding (KGE) module, a cross-modal interaction representation (CMIR) module, and a free-form answer–enhanced multi-task learning (FAMT) module. KG-CMI learns cross-modal feature representations for images and texts by integrating professional medical knowledge through the knowledge graph, establishing associations between lesion features and disease knowledge. Moreover, FAMT leverages auxiliary knowledge from open-ended questions, improving the model's capability for open-ended Med-VQA. Experimental results demonstrate that KG-CMI outperforms existing state-of-the-art methods on three Med-VQA datasets, i.e., VQA-RAD, SLAKE, and OVQA. Additionally, we conduct interpretability experiments to further validate the framework's effectiveness.
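The abstract's multi-task component combines closed-set answer classification with a free-form answer objective. A minimal sketch of such a combined loss, assuming a token-level cross-entropy term for the generated answer and a weighting hyperparameter (both hypothetical; the paper's FAMT module may weight or structure its objectives differently):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, target_idx):
    # Negative log-likelihood of the target class under softmax.
    return -np.log(softmax(logits)[target_idx] + 1e-12)

def multitask_loss(cls_logits, cls_target, gen_logits_seq, gen_targets, lam=0.5):
    # Closed-set term: classification over the predefined answer set.
    l_cls = cross_entropy(cls_logits, cls_target)
    # Free-form term: mean token-level cross-entropy over the answer sequence.
    l_gen = np.mean([cross_entropy(l, t) for l, t in zip(gen_logits_seq, gen_targets)])
    # lam balances the two objectives (hypothetical hyperparameter).
    return l_cls + lam * l_gen

rng = np.random.default_rng(1)
cls_logits = rng.normal(size=10)            # 10 predefined answers
gen_logits = rng.normal(size=(4, 50))       # 4 answer tokens, vocab of 50
loss = multitask_loss(cls_logits, 3, gen_logits, [5, 7, 2, 9])
print(loss)
```

The intuition matching the abstract is that supervising on free-form answers exposes the model to semantic detail that a fixed answer set discards.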