CFCML: A Coarse-to-Fine Crossmodal Learning Framework For Disease Diagnosis Using Multimodal Images and Tabular Data

arXiv cs.CV / 3/23/2026


Key Points

  • The paper proposes a coarse-to-fine crossmodal learning (CFCML) framework to reduce the modality gap between medical images and tabular data for disease diagnosis.
  • At the coarse stage, it leverages relationships between multi-granularity image features from various encoder stages and tabular information to preliminarily narrow the modality gap.
  • At the fine stage, it generates unimodal and crossmodal prototypes with class-aware information and introduces a hierarchical anchor-based relationship mining (HRM) strategy to further extract discriminative crossmodal signals.
  • The approach uses modality samples, unimodal prototypes, and crossmodal prototypes as anchors to drive contrastive learning, enhancing inter-class disparity while reducing intra-class disparity from multiple perspectives.
  • Experiments on MEN and Derm7pt datasets show AUC improvements of 1.53% and 0.91% respectively, and the code is released at the linked GitHub repository.
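The fine-stage idea of using class-aware prototypes as anchors for contrastive learning can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the mean-pooled prototypes, and the simple averaged "crossmodal" fusion are all illustrative assumptions; the loss itself is a standard InfoNCE-style cross-entropy over sample-to-prototype similarities.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def class_prototypes(features, labels, num_classes):
    """Class-aware prototypes: mean embedding of each class (assumption)."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return l2_normalize(protos)

def prototype_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """InfoNCE-style loss with prototypes as anchors: each sample is pulled
    toward its own class prototype and pushed from the other classes'."""
    feats = l2_normalize(features)
    logits = feats @ prototypes.T / temperature      # (N, C) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

# Toy example: image and tabular embeddings for two classes.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 16))
tab = rng.normal(size=(8, 16))
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# "Crossmodal" prototypes from a naive averaged fusion (assumption).
cross_protos = class_prototypes((img + tab) / 2, y, num_classes=2)
loss = prototype_contrastive_loss(img, y, cross_protos)
```

In the paper's HRM strategy, analogous losses are built with modality samples, unimodal prototypes, and crossmodal prototypes each serving as anchors, so the same sample-to-prototype contrast is applied from multiple perspectives.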

Abstract

In clinical practice, crossmodal information including medical images and tabular data is essential for disease diagnosis. A significant modality gap exists between these data types, which obstructs improvements in crossmodal diagnostic accuracy. Most existing crossmodal learning (CML) methods focus primarily on relationships among high-level encoder outputs, neglecting local information in images, and often overlook the extraction of task-relevant information. In this paper, we propose a novel coarse-to-fine crossmodal learning (CFCML) framework that progressively reduces the modality gap between multimodal images and tabular data by thoroughly exploring inter-modal relationships. At the coarse stage, we explore the relationships between multi-granularity features from various image encoder stages and tabular information, facilitating a preliminary reduction of the modality gap. At the fine stage, we generate unimodal and crossmodal prototypes that incorporate class-aware information, and establish a hierarchical anchor-based relationship mining (HRM) strategy to further diminish the modality gap and extract discriminative crossmodal information. This strategy utilizes modality samples, unimodal prototypes, and crossmodal prototypes as anchors for contrastive learning, effectively enhancing inter-class disparity while reducing intra-class disparity from multiple perspectives. Experimental results indicate that our method outperforms state-of-the-art (SOTA) methods, achieving AUC improvements of 1.53% and 0.91% on the MEN and Derm7pt datasets, respectively. The code is available at https://github.com/IsDling/CFCML.
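The coarse-stage idea of aligning multi-granularity image features with tabular information can also be sketched briefly. The sketch below is an assumption about the general mechanism, not the paper's architecture: it projects pooled features from several hypothetical encoder stages into the tabular embedding space and averages a per-stage cosine-distance alignment loss.

```python
import numpy as np

def cosine_align_loss(stage_feats, tab_feat, projections, eps=1e-8):
    """Align each encoder stage's pooled image features with the tabular
    embedding via projection + cosine distance, averaged over stages."""
    t = tab_feat / (np.linalg.norm(tab_feat) + eps)
    total = 0.0
    for f, W in zip(stage_feats, projections):
        z = f @ W                                  # project to tabular dim
        z = z / (np.linalg.norm(z) + eps)
        total += 1.0 - float(z @ t)                # cosine distance in [0, 2]
    return total / len(projections)

# Toy multi-granularity features: one pooled vector per encoder stage
# (stage dimensions 64/128/256 are illustrative assumptions).
rng = np.random.default_rng(1)
dims = (64, 128, 256)
stages = [rng.normal(size=(d,)) for d in dims]
projs = [rng.normal(size=(d, 32)) * 0.1 for d in dims]  # learnable in practice
tab = rng.normal(size=(32,))

align_loss = cosine_align_loss(stages, tab, projs)
```

Minimizing such a loss jointly over all stages is one plausible way to achieve the "preliminary reduction of the modality gap" the coarse stage targets, since early-stage features carry the local image information that high-level-only methods discard.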