Lunguage: A Benchmark for Structured and Sequential Chest X-ray Interpretation

arXiv cs.CL / 4/30/2026


Key Points

  • The paper introduces LUNGUAGE, a benchmark dataset for structured chest X-ray report generation that supports both single-report evaluation and longitudinal (patient-level) assessment across multiple studies.
  • LUNGUAGE comprises 1,473 expert-annotated chest X-ray reports; a subset of 186 carries expert-reviewed longitudinal annotations capturing disease progression and inter-study intervals.
  • It presents a two-stage structuring framework that converts generated reports into fine-grained, schema-aligned structured reports to enable longitudinal interpretation.
  • The authors propose LUNGUAGESCORE, an interpretable evaluation metric that compares structured outputs at the entity, relation, and attribute levels while enforcing temporal consistency across patient timelines.
  • The work positions itself as the first benchmark dataset, structuring approach, and evaluation metric focused on sequential radiology reporting, with empirical results showing that LUNGUAGESCORE effectively supports structured report evaluation.

Abstract

Radiology reports convey detailed clinical observations and capture diagnostic reasoning that evolves over time. However, existing evaluation methods are limited to single-report settings and rely on coarse metrics that fail to capture fine-grained clinical semantics and temporal dependencies. We introduce LUNGUAGE, a benchmark dataset for structured radiology report generation that supports both single-report evaluation and longitudinal patient-level assessment across multiple studies. It contains 1,473 expert-reviewed, annotated chest X-ray reports, of which 186 carry expert-reviewed longitudinal annotations capturing disease progression and inter-study intervals. Using this benchmark, we develop a two-stage structuring framework that transforms generated reports into fine-grained, schema-aligned structured reports, enabling longitudinal interpretation. We also propose LUNGUAGESCORE, an interpretable metric that compares structured outputs at the entity, relation, and attribute levels while modeling temporal consistency across patient timelines. These contributions establish the first benchmark dataset, structuring framework, and evaluation metric for sequential radiology reporting, with empirical results demonstrating that LUNGUAGESCORE effectively supports structured report evaluation. The code is available at: https://github.com/SuperSupermoon/Lunguage
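
To make the idea of fine-grained structured comparison concrete, here is a minimal sketch in Python. It is not the paper's implementation: the (entity, attribute, value) triples, the example schema, and the F1 aggregation are hypothetical stand-ins, and the actual LUNGUAGESCORE additionally scores relations and enforces temporal consistency across a patient's study timeline.

```python
# Illustrative sketch only: structured reports are modeled as sets of
# (entity, attribute, value) triples, and two reports are compared by
# F1 overlap. The real LUNGUAGESCORE is richer (relations, attributes,
# and cross-study temporal consistency).

def triple_f1(reference, candidate):
    """F1 overlap between two sets of (entity, attribute, value) triples."""
    ref, cand = set(reference), set(candidate)
    if not ref and not cand:
        return 1.0  # both reports empty: trivially identical
    tp = len(ref & cand)
    precision = tp / len(cand) if cand else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical schema-aligned findings from a ground-truth report...
reference = {
    ("opacity", "location", "right lower lobe"),
    ("opacity", "status", "present"),
    ("pleural effusion", "status", "absent"),
}
# ...and from a model-generated report (one match, one contradiction).
candidate = {
    ("opacity", "location", "right lower lobe"),
    ("pleural effusion", "status", "present"),
}

score = triple_f1(reference, candidate)  # tp=1, precision=0.5, recall=1/3 -> 0.4
```

Scoring at the triple level (rather than comparing raw report text) is what makes the evaluation interpretable: each missed or contradicted triple pinpoints a specific clinical finding the generated report got wrong.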