Learning from Medical Entity Trees: An Entity-Centric Medical Data Engineering Framework for MLLMs

arXiv cs.CL / 4/29/2026


Key Points

  • The paper argues that current medical data curation for multimodal large language models (MLLMs) is too coarse, missing the hierarchical and interconnected structure of clinical knowledge.
  • It introduces an Entity-Centric Medical Data Engineering framework that automatically extracts entities from authoritative literature to build a Medical Entity Tree (MET) capturing diseases, anatomy, modalities, and symptoms in one unified structure.
  • The proposed data engine uses node-guided retrieval, a two-stage hybrid filtering/alignment pipeline, and knowledge-aware data synthesis to create enriched captions and targeted reasoning-oriented VQA pairs.
  • Experiments on six medical benchmarks show that the MET-based approach substantially improves general-purpose MLLMs’ performance on complex clinical queries and yields state-of-the-art results across varied medical settings.
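The paper does not publish its code or schema, so purely as an illustration, the Medical Entity Tree described above could be represented as a typed tree whose nodes carry one of the four entity categories (disease, anatomy, modality, symptom). The class name, fields, and example entities below are assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a Medical Entity Tree (MET) node; the paper does not
# release its schema, so the names, fields, and example entities here are
# illustrative assumptions.
@dataclass
class METNode:
    name: str        # e.g. "pneumonia"
    node_type: str   # one of: "disease", "anatomy", "modality", "symptom"
    children: list["METNode"] = field(default_factory=list)

    def add_child(self, child: "METNode") -> "METNode":
        self.children.append(child)
        return child

    def find(self, name: str) -> Optional["METNode"]:
        """Depth-first lookup of an entity anywhere in this subtree."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit is not None:
                return hit
        return None

# Build a tiny fragment: thorax -> lung -> pneumonia, with modality and
# symptom leaves attached to the disease node.
root = METNode("thorax", "anatomy")
lung = root.add_child(METNode("lung", "anatomy"))
pneumonia = lung.add_child(METNode("pneumonia", "disease"))
pneumonia.add_child(METNode("chest X-ray", "modality"))
pneumonia.add_child(METNode("cough", "symptom"))

print(root.find("pneumonia").node_type)  # -> disease
```

The single unified tree is the point: because diseases, anatomy, modalities, and symptoms live in one structure, a lookup for any entity also exposes its hierarchical context.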

Abstract

Multimodal Large Language Models (MLLMs) have shown transformative potential in medical applications, yet their performance is hindered by conventional data curation strategies that rely on coarse-grained partitioning by modality or department. Such fragmented approaches fail to capture the hierarchical and interconnected nature of clinical medical knowledge, limiting the models' ability to perform fine-grained recognition and complex reasoning. In this paper, we propose a novel Entity-Centric Medical Data Engineering framework. We automatically extract entities from authoritative medical literature to construct a Medical Entity Tree (MET), a hierarchical structure that systematically encodes diseases, anatomical structures, modalities, and symptoms into a unified knowledge repository. Building upon the MET, we propose an advanced data engine that includes: (1) node-guided retrieval to anchor raw data to specific medical concepts, (2) a two-stage hybrid filtering and alignment pipeline to ensure precise visual-semantic correspondence, and (3) knowledge-aware data synthesis to generate enriched captions and targeted reasoning VQA pairs, leveraging structural constraints. Extensive evaluations across six medical benchmarks demonstrate that our approach significantly enhances the medical capabilities of general-purpose MLLMs, improving their ability to handle complex clinical queries and achieve state-of-the-art performance in diverse medical contexts.
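The data engine's first and third steps can be sketched together. The actual pipeline is not released, so the simple substring matching and the flattened `MET_INDEX` dictionary below are simplifying assumptions standing in for the real node-guided retrieval and knowledge-aware synthesis:

```python
# Illustrative sketch only: anchor a raw caption to MET entities it mentions
# (node-guided retrieval), then enrich it with each matched node's
# hierarchical context (knowledge-aware synthesis). The matching scheme and
# the MET_INDEX contents are hypothetical, not the paper's method.

# A flattened stand-in for the MET: entity -> (type, path of ancestors).
MET_INDEX = {
    "pneumonia": ("disease", ["thorax", "lung"]),
    "chest x-ray": ("modality", []),
    "consolidation": ("symptom", ["lung"]),
}

def anchor_to_nodes(caption: str) -> list[str]:
    """Node-guided retrieval: list the MET entities mentioned in a caption."""
    text = caption.lower()
    return [entity for entity in MET_INDEX if entity in text]

def enrich_caption(caption: str) -> str:
    """Knowledge-aware synthesis: append hierarchical context to the caption."""
    facts = []
    for entity in anchor_to_nodes(caption):
        node_type, ancestors = MET_INDEX[entity]
        path = " > ".join(ancestors + [entity])
        facts.append(f"{entity} ({node_type}: {path})")
    if not facts:
        return caption
    return caption + " | Entities: " + "; ".join(facts)

raw = "Chest X-ray showing consolidation consistent with pneumonia."
print(enrich_caption(raw))
```

In this toy version, the enriched caption carries the disease's anatomical path (thorax > lung > pneumonia) alongside the raw text, which is the kind of structural constraint the paper says it exploits when generating captions and reasoning-oriented VQA pairs.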