AI Navigate

Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search

arXiv cs.AI / 3/23/2026


Key Points

  • The paper addresses hallucinations in fully generative radiology report models and proposes a retrieval-augmented generation approach to ground drafts in historical reports.
  • It combines multimodal image-text embeddings, case-based similarity retrieval, and citation-constrained draft generation to ensure factual alignment.
  • It builds a multimodal retrieval database from a subset of MIMIC-CXR using CLIP for images and structured impressions for text, enabling scalable nearest-neighbor retrieval with FAISS.
  • Retrieved cases are used to build grounded prompts with safety mechanisms enforcing citation coverage and confidence-based refusal when uncertain.
  • Experimental results show that multimodal fusion improves retrieval performance (Recall@5 > 0.95) and yields interpretable, citation-traceable drafts, enhancing trust in clinical decision support.
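The fusion retrieval step described above can be sketched minimally: normalize CLIP image and text embeddings, combine them with a late-fusion weight, and rank cases by cosine similarity. This is an illustrative sketch, not the paper's implementation — the fusion weight `alpha` and the brute-force lookup are assumptions; at scale the lookup would be served by a FAISS inner-product index (e.g. `IndexFlatIP`).

```python
import numpy as np

def l2norm(x):
    # Normalize vectors to unit length so inner product = cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuse(img_emb, txt_emb, alpha=0.6):
    # Weighted late fusion of image and text embeddings.
    # alpha is an illustrative weight; the paper does not state its scheme.
    return l2norm(alpha * l2norm(img_emb) + (1 - alpha) * l2norm(txt_emb))

def top_k(query, database, k=5):
    # Brute-force nearest-neighbor search; FAISS would replace this at scale.
    sims = database @ query
    return np.argsort(-sims)[:k]

# Toy database of 8 fused case embeddings (dim 4) and one fused query.
rng = np.random.default_rng(0)
db = fuse(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))
q = fuse(rng.normal(size=4), rng.normal(size=4))
print(top_k(q, db))  # indices of the 5 most similar historical cases
```

The retrieved indices would then point back to historical impression texts, which is what makes the downstream draft citable.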

Abstract

Automated radiology report generation has gained increasing attention with the rise of deep learning and large language models. However, fully generative approaches often suffer from hallucinations and lack clinical grounding, limiting their reliability in real-world workflows. In this study, we propose a multimodal retrieval-augmented generation (RAG) system for grounded drafting of chest radiograph impressions. The system combines contrastive image-text embeddings, case-based similarity retrieval, and citation-constrained draft generation to ensure factual alignment with historical radiology reports. A curated subset of the MIMIC-CXR dataset was used to construct a multimodal retrieval database. Image embeddings were generated using CLIP encoders, while textual embeddings were derived from structured impression sections. A fusion similarity framework was implemented using FAISS indexing for scalable nearest-neighbor retrieval. Retrieved cases were used to construct grounded prompts for draft impression generation, with safety mechanisms enforcing citation coverage and confidence-based refusal. Experimental results demonstrate that multimodal fusion significantly improves retrieval performance compared to image-only retrieval, achieving Recall@5 above 0.95 on clinically relevant findings. The grounded drafting pipeline produces interpretable outputs with explicit citation traceability, enabling improved trustworthiness compared to conventional generative approaches. This work highlights the potential of retrieval-augmented multimodal systems for reliable clinical decision support and radiology workflow augmentation.
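
The safety mechanisms named in the abstract — citation coverage enforcement and confidence-based refusal — could be sketched as a post-generation gate. Everything here is a hypothetical illustration: the `[case-NN]` citation format, the helper names, and the thresholds are not specified in the paper.

```python
import re

# Illustrative citation-marker format, e.g. "[case-12]".
CITATION = re.compile(r"\[case-\d+\]")

def check_draft(draft_sentences, retrieval_scores,
                min_coverage=1.0, min_confidence=0.30):
    """Accept a draft only if retrieval was confident enough and every
    sentence cites a retrieved case. Thresholds are placeholders."""
    # Confidence-based refusal: no retrieved case is similar enough.
    if max(retrieval_scores) < min_confidence:
        return "REFUSE: low retrieval confidence"
    # Citation coverage: fraction of sentences carrying a case citation.
    cited = sum(bool(CITATION.search(s)) for s in draft_sentences)
    if cited / len(draft_sentences) < min_coverage:
        return "REFUSE: uncited statements in draft"
    return "ACCEPT"

draft = ["No acute cardiopulmonary process [case-12].",
         "Stable mild cardiomegaly [case-07]."]
print(check_draft(draft, retrieval_scores=[0.82, 0.77, 0.74]))  # ACCEPT
```

A gate like this is what makes each drafted statement traceable back to a specific historical report rather than free generation.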