MLLM-HWSI: A Multimodal Large Language Model for Hierarchical Whole Slide Image Understanding

arXiv cs.CV / 3/25/2026

Key Points

  • The paper introduces MLLM-HWSI, a hierarchical multimodal large language model designed to better understand Whole Slide Images (WSIs) by modeling evidence across multiple scales rather than compressing a slide into a single embedding.
  • It aligns visual features with pathology language at four levels—cell (word), patch (phrase), region (sentence), and WSI (paragraph)—using scale-specific projection modules, hierarchical contrastive learning, and cross-scale consistency losses.
  • The method identifies diagnostically relevant patches and aggregates segmented cell embeddings into compact per-patch “cellular tokens” via a lightweight Cell-Cell Attention Fusion (CCAF) transformer (a minimal sketch follows this list).
  • Projected multi-scale visual tokens are fused with text tokens and passed to an instruction-tuned LLM to support open-ended reasoning plus VQA, report generation, and captioning.
  • Trained in three stages, MLLM-HWSI reportedly achieves new state-of-the-art performance on 13 WSI-level benchmarks across six computational pathology tasks, with code released on GitHub.
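The cellular-token step above is concrete enough to illustrate in code. The following is a minimal PyTorch sketch of the CCAF idea as described in this summary, not the authors' implementation: the learnable fusion token, layer count, head count, and embedding size are all assumptions, since the paper's exact configuration is not given here.

```python
# Hedged sketch of Cell-Cell Attention Fusion (CCAF): fuse a variable number
# of segmented cell embeddings into one compact cellular token per patch.
# All hyperparameters and the [FUSE]-token scheme are assumptions.
import torch
import torch.nn as nn

class CCAF(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.fuse_token = nn.Parameter(torch.randn(1, 1, dim))  # learnable fusion token (assumed)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, cell_emb):                  # cell_emb: (B, N_cells, dim)
        fuse = self.fuse_token.expand(cell_emb.size(0), -1, -1)
        x = torch.cat([fuse, cell_emb], dim=1)    # prepend fusion token to the cell sequence
        x = self.encoder(x)                       # cells attend to each other and to the fusion token
        return x[:, 0]                            # (B, dim): one cellular token per patch

tokens = CCAF()(torch.randn(8, 50, 256))          # e.g., 8 patches, 50 segmented cells each
print(tokens.shape)                               # torch.Size([8, 256])
```

Prepending a learnable token and reading it back after self-attention is a common fusion pattern; the paper may pool the cell embeddings differently.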

Abstract

Whole Slide Images (WSIs) exhibit hierarchical structure, where diagnostic information emerges from cellular morphology, regional tissue organization, and global context. Existing Computational Pathology (CPath) Multimodal Large Language Models (MLLMs) typically compress an entire WSI into a single embedding, which hinders fine-grained grounding and ignores how pathologists synthesize evidence across scales. We introduce MLLM-HWSI, a Hierarchical WSI-level MLLM that aligns visual features with pathology language at four distinct scales (cell as word, patch as phrase, region as sentence, and WSI as paragraph) to support interpretable, evidence-grounded reasoning. MLLM-HWSI decomposes each WSI into multi-scale embeddings with scale-specific projectors and jointly enforces (i) a hierarchical contrastive objective and (ii) a cross-scale consistency loss, preserving semantic coherence from cells to the whole slide. We identify diagnostically relevant patches and aggregate segmented cell embeddings into a compact cellular token per patch using a lightweight Cell-Cell Attention Fusion (CCAF) transformer. The projected multi-scale tokens are fused with text tokens and fed to an instruction-tuned LLM for open-ended reasoning, VQA, report generation, and captioning. Trained in three stages, MLLM-HWSI achieves new state-of-the-art results on 13 WSI-level benchmarks across six CPath tasks. By aligning language with multi-scale visual evidence, MLLM-HWSI produces accurate, interpretable outputs that mirror diagnostic workflows and advance holistic WSI understanding. Code is available at: https://github.com/BasitAlawode/HWSI-MLLM.
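To make the two training objectives named in the abstract concrete, here is a hedged sketch of one standard instantiation: symmetric InfoNCE contrastive alignment at each scale, plus a cosine-based consistency term between adjacent scales. The temperature, the loss weight lam, and the assumption that each scale is already summarized as one (B, D) embedding are all illustrative choices, not the paper's formulation.

```python
# Hedged sketch of a hierarchical contrastive objective plus a cross-scale
# consistency loss. Inputs: dicts mapping scale names to (B, D) embeddings.
import torch
import torch.nn.functional as F

def info_nce(visual, text, tau=0.07):
    """Symmetric InfoNCE between L2-normalized visual/text embeddings (assumed form)."""
    v = F.normalize(visual, dim=-1)
    t = F.normalize(text, dim=-1)
    logits = v @ t.T / tau                        # (B, B) similarity matrix
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

def hierarchical_loss(scales, texts, lam=0.5):
    """scales/texts: dicts over {'cell','patch','region','wsi'} of (B, D) embeddings."""
    # Contrastive alignment at every scale (cell/word, patch/phrase, ...).
    contrastive = sum(info_nce(scales[s], texts[s]) for s in scales)
    # Cross-scale consistency: each finer-scale summary should agree with the
    # next coarser one (an assumed instantiation of the paper's consistency loss).
    order = ['cell', 'patch', 'region', 'wsi']
    consistency = sum(
        1 - F.cosine_similarity(scales[a], scales[b], dim=-1).mean()
        for a, b in zip(order, order[1:])
    )
    return contrastive + lam * consistency
```

Chaining consistency between adjacent scales, rather than tying everything directly to the WSI embedding, keeps the gradient path aligned with the cell-to-slide hierarchy; the authors may weight or pair the scales differently.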