Assessing Multimodal Chronic Wound Embeddings with Expert Triplet Agreement

arXiv cs.CV / 4/1/2026


Key Points

  • The paper argues that off-the-shelf foundation models do not reliably capture clinically meaningful features for recessive dystrophic epidermolysis bullosa (RDEB), a heterogeneous, long-tail disease, and that agreement with expert judgment is therefore hard to measure in a structured way.
  • It proposes assessing multimodal embedding spaces with expert triplet (ordinal) judgments, which are fast to collect and encode implicit clinical similarity knowledge; a minimal sketch of the resulting agreement metric follows this list.
  • The authors introduce TriDerm, a multimodal framework that learns interpretable wound representations from small cohorts by combining wound imagery, boundary masks, and expert reports.
  • TriDerm adapts visual foundation models with wound-level attention pooling and non-contrastive representation learning; text representations are recovered from LLM-answered comparison queries via soft ordinal embeddings (SOE).
  • Fusing the visual and text modalities yields 73.5% expert agreement, over 5.6 percentage points above the best off-the-shelf single-modality foundation model; the expert annotation tool, model code, and representative dataset samples are released publicly.
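The triplet protocol lends itself to a compact evaluation loop. The sketch below is a minimal illustration rather than the authors' released tool: it scores an embedding space against expert judgments of the form (anchor, chosen, other), where the expert marked `chosen` as more similar to the anchor than `other`. All function and variable names here are hypothetical.

```python
# Minimal sketch (not the paper's released code): expert triplet agreement.
# Each judgment (a, c, o) records that an expert found item c more similar
# to anchor a than item o. An embedding space "agrees" with a judgment when
# cosine similarity ranks the pair the same way the expert did.
import numpy as np

def triplet_agreement(embeddings: np.ndarray,
                      triplets: list[tuple[int, int, int]]) -> float:
    """embeddings: (n_items, dim); triplets: (anchor, chosen, other) index tuples."""
    # L2-normalize rows so dot products are cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    hits = sum(unit[a] @ unit[c] > unit[a] @ unit[o] for a, c, o in triplets)
    return hits / len(triplets)

# A fused visual+text space can be scored the same way, e.g. by concatenating
# per-item visual and text embeddings before calling triplet_agreement:
# fused = np.concatenate([visual_emb, text_emb], axis=1)
# print(triplet_agreement(fused, expert_triplets))
```

Since each judgment is a binary choice, chance agreement sits at 50%, which puts the reported 73.5% in context as a margin over random ranking.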

Abstract

Recessive dystrophic epidermolysis bullosa (RDEB) is a rare genetic skin disorder for which clinicians greatly benefit from finding similar cases using images and clinical text. However, off-the-shelf foundation models do not reliably capture clinically meaningful features for this heterogeneous, long-tail disease, and structured measurement of agreement with experts is challenging. To address these gaps, we propose evaluating embedding spaces with expert ordinal comparisons (triplet judgments), which are fast to collect and encode implicit clinical similarity knowledge. We further introduce TriDerm, a multimodal framework that learns interpretable wound representations from small cohorts by integrating wound imagery, boundary masks, and expert reports. On the vision side, TriDerm adapts visual foundation models to RDEB using wound-level attention pooling and non-contrastive representation learning. For text, we prompt large language models with comparison queries and recover medically meaningful representations via soft ordinal embeddings (SOE). We show that visual and textual modalities capture complementary aspects of wound phenotype, and that fusing both modalities yields 73.5% agreement with experts, outperforming the best off-the-shelf single-modality foundation model by over 5.6 percentage points. We make the expert annotation tool, model code and representative dataset samples publicly available.
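On the vision side, "wound-level attention pooling" suggests a small learned head that pools a frozen backbone's patch tokens into one embedding per wound, with the boundary mask restricting attention to wound tissue. A minimal PyTorch sketch under that reading follows; the module and parameter names are hypothetical, not TriDerm's actual code.

```python
# Hedged sketch: wound-level attention pooling over patch tokens from a
# frozen vision foundation model. A single learned query attends to all
# patch tokens, optionally restricted to the wound via the boundary mask.
import torch
import torch.nn as nn

class WoundAttentionPool(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * dim**-0.5)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor,
                mask: torch.Tensor | None = None) -> torch.Tensor:
        # tokens: (batch, n_patches, dim) from the frozen backbone.
        # mask: (batch, n_patches), True where a patch lies OUTSIDE the wound,
        # so attention is confined to wound tissue when a boundary mask exists.
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens, key_padding_mask=mask)
        return pooled.squeeze(1)  # (batch, dim) wound-level embedding

# Usage: pool = WoundAttentionPool(dim=768)
# emb = pool(backbone_patch_tokens, mask=~wound_mask_at_patch_resolution)
```

On the text side, soft ordinal embedding fits coordinates directly to triplet constraints: if the LLM judged report c closer to anchor a than report o, the embedding is pushed to satisfy d(a, c) + margin < d(a, o). Below is a sketch of one common SOE-style objective (a squared hinge on squared distances), again with hypothetical names and assuming LLM-produced triplets as input.

```python
# Sketch of an SOE-style fit: learn free coordinates for each text report so
# that every LLM-judged triplet (a, c, o) ends up with a closer to c than to
# o by a soft margin.
import torch

def fit_soft_ordinal_embedding(n_items: int,
                               triplets: list[tuple[int, int, int]],
                               dim: int = 16, margin: float = 0.1,
                               steps: int = 2000, lr: float = 1e-2) -> torch.Tensor:
    x = torch.randn(n_items, dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    idx = torch.tensor(triplets)  # (n_triplets, 3) item indices
    for _ in range(steps):
        opt.zero_grad()
        d_close = (x[idx[:, 0]] - x[idx[:, 1]]).pow(2).sum(dim=1)  # d(a, c)^2
        d_far = (x[idx[:, 0]] - x[idx[:, 2]]).pow(2).sum(dim=1)    # d(a, o)^2
        loss = torch.clamp(margin + d_close - d_far, min=0).pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()
```

Both heads are small relative to a frozen foundation-model backbone, which fits the small-cohort setting the abstract describes.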