RIHA: Report-Image Hierarchical Alignment for Radiology Report Generation

arXiv cs.CV / 5/1/2026


Key Points

  • Radiology report generation aims to reduce radiologists’ workload and errors by producing diagnostic reports from medical images, but existing models struggle to align detailed visual features with the hierarchical structure of long reports.
  • The paper argues that many existing image-text methods treat reports as flat sequences, which limits fine-grained cross-modal alignment and reduces accuracy.
  • It introduces RIHA (Report-Image Hierarchical Alignment Transformer), an end-to-end framework that aligns radiology images with reports at multiple levels: paragraph, sentence, and word.
  • RIHA uses a Visual Feature Pyramid and a Text Feature Pyramid, connected by a Cross-modal Hierarchical Alignment module that applies optimal transport (sketched after this list), plus Relative Positional Encoding in the decoder to improve token-level alignment.
  • Experiments on the IU-Xray and MIMIC-CXR benchmarks show that RIHA surpasses prior state-of-the-art methods on both natural language generation and clinical efficacy metrics.
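
To make the optimal-transport alignment concrete, here is a minimal sketch of what alignment at one granularity level could look like: visual and text features are matched by an entropy-regularized transport plan computed with Sinkhorn iterations. The function names, shapes, and hyperparameters (`sinkhorn_plan`, `ot_alignment_loss`, `eps`, `n_iters`) are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def sinkhorn_plan(cost, eps=0.1, n_iters=50):
    """Entropy-regularized optimal transport via Sinkhorn iterations.
    cost: (B, Nv, Nt) pairwise cost; returns a transport plan of the
    same shape with (approximately) uniform row/column marginals.
    Hypothetical sketch, not RIHA's actual CHA module."""
    B, Nv, Nt = cost.shape
    mu = cost.new_full((B, Nv), 1.0 / Nv)        # marginal over visual units
    nu = cost.new_full((B, Nt), 1.0 / Nt)        # marginal over text units
    K = torch.exp(-cost / eps)                   # Gibbs kernel
    u, v = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(n_iters):                     # alternating scaling updates
        u = mu / (torch.bmm(K, v.unsqueeze(-1)).squeeze(-1) + 1e-8)
        v = nu / (torch.bmm(K.transpose(1, 2), u.unsqueeze(-1)).squeeze(-1) + 1e-8)
    return u.unsqueeze(-1) * K * v.unsqueeze(1)  # diag(u) @ K @ diag(v)

def ot_alignment_loss(visual, text, eps=0.1):
    """Cosine-cost OT alignment between visual features (B, Nv, D) and
    text features (B, Nt, D) at one granularity level (e.g. word level)."""
    cost = 1.0 - torch.bmm(F.normalize(visual, dim=-1),
                           F.normalize(text, dim=-1).transpose(1, 2))
    plan = sinkhorn_plan(cost, eps)
    return (plan * cost).sum(dim=(1, 2)).mean()  # expected transport cost

# Toy usage: align 49 image-patch features with 60 report-token features.
loss = ot_alignment_loss(torch.randn(2, 49, 512), torch.randn(2, 60, 512))
```

In this reading, repeating the same loss at the paragraph, sentence, and word levels of the two pyramids would give the multi-level alignment the paper describes.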

Abstract

Radiology report generation (RRG) has emerged as a promising approach to alleviate radiologists' workload and reduce human errors by automatically generating diagnostic reports from medical images. A key challenge in RRG is achieving fine-grained alignment between complex visual features and the hierarchical structure of long-form radiology reports. Although recent methods have improved image-text representation learning, they often treat reports as flat sequences, overlooking their structured sections and semantic hierarchies. This simplification hinders precise cross-modal alignment and weakens RRG accuracy. To address this challenge, we propose RIHA (Report-Image Hierarchical Alignment Transformer), a novel end-to-end framework that performs multi-level alignment between radiological images and their corresponding reports across paragraph, sentence, and word levels. This hierarchical alignment enables more precise cross-modal mapping, essential for capturing the nuanced semantics embedded in clinical narratives. Specifically, RIHA introduces a Visual Feature Pyramid (VFP) to extract multi-scale visual features and a Text Feature Pyramid (TFP) to represent multi-granularity textual structures. These components are integrated through a Cross-modal Hierarchical Alignment (CHA) module, leveraging optimal transport to effectively align visual and textual features across various levels. Furthermore, we incorporate Relative Positional Encoding (RPE) into the decoder to model spatial and semantic relationships among tokens, enhancing the token-level alignment between visual features and generated text. Extensive experiments on two benchmark chest X-ray datasets, IU-Xray and MIMIC-CXR, demonstrate that RIHA outperforms existing state-of-the-art models in both natural language generation and clinical efficacy metrics.
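
For the decoder side, the sketch below shows one common form of relative positional encoding: a learned scalar bias added to the attention logits for each clipped relative offset (in the spirit of Shaw et al., 2018 and T5-style biases). This is a hypothetical stand-in; RIHA's exact RPE formulation may differ, and `RelPosSelfAttention`, `max_rel_dist`, and the single-head layout are simplifying assumptions.

```python
import torch
import torch.nn as nn

class RelPosSelfAttention(nn.Module):
    """Single-head causal self-attention with a learned relative positional
    bias on the attention logits. A hypothetical stand-in for RIHA's decoder
    RPE, not the authors' implementation."""
    def __init__(self, d_model, max_rel_dist=32):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.max_rel = max_rel_dist
        # One learned scalar per clipped relative distance in [-max_rel, max_rel].
        self.rel_bias = nn.Embedding(2 * max_rel_dist + 1, 1)
        self.scale = d_model ** -0.5

    def forward(self, x):                                    # x: (B, T, d_model)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.bmm(q, k.transpose(1, 2)) * self.scale  # (B, T, T) logits
        pos = torch.arange(T, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(-self.max_rel, self.max_rel)
        attn = attn + self.rel_bias(rel + self.max_rel).squeeze(-1)
        # Causal mask: each report token attends only to earlier tokens.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        attn = attn.masked_fill(mask, float("-inf")).softmax(dim=-1)
        return torch.bmm(attn, v)                            # (B, T, d_model)

# Usage: out = RelPosSelfAttention(512)(torch.randn(2, 60, 512))  # (2, 60, 512)
```

A bias of this form lets the decoder distinguish nearby from distant tokens regardless of absolute position, which is one plausible way relative encoding could sharpen token-level alignment in long generated reports.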