ECHO: Efficient Chest X-ray Report Generation with One-step Block Diffusion

arXiv cs.LG / 4/13/2026


Key Points

  • The paper introduces ECHO, an efficient diffusion-based vision-language model (dVLM) for generating chest X-ray reports intended to reduce radiologists’ workload.
  • It tackles the latency of autoregressive VLMs and the multi-step nature of diffusion models by enabling stable one-step-per-block inference.
  • ECHO uses a Direct Conditional Distillation (DCD) framework that mitigates mean-field bias by creating unfactorized supervision from on-policy diffusion trajectories to better capture joint token dependencies.
  • A Response-Asymmetric Diffusion (RAD) training strategy is proposed to improve training efficiency while maintaining effectiveness.
  • Experiments report large gains over state-of-the-art autoregressive methods — 64.33% on RaTE and 60.58% on SemScore — plus an 8× inference speedup without sacrificing clinical accuracy.
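To make the "one-step-per-block" idea concrete, here is a minimal toy sketch of block-wise decoding: the report is split into fixed-size blocks, and each block is filled in with a single denoiser call conditioned on all previously generated blocks, with no inner multi-step denoising loop. All names here (`denoise_block`, `MASK`, the placeholder tokens) are illustrative stand-ins, not from the paper.

```python
# Toy sketch of one-step-per-block decoding as summarized above.
# A real distilled dVLM would replace denoise_block with a learned
# model call; this stub only shows the control flow.

MASK = "<mask>"  # illustrative mask token

def denoise_block(context, block_size):
    """Stand-in for the distilled one-step denoiser: maps a fully
    masked block plus its left context to concrete tokens in ONE call.
    Here it just emits placeholder tokens derived from the position."""
    start = len(context)
    return [f"tok{start + i}" for i in range(block_size)]

def generate(num_blocks, block_size):
    tokens = []
    for _ in range(num_blocks):
        # One denoiser call per block -- no iterative refinement loop,
        # which is where the claimed latency reduction comes from.
        tokens.extend(denoise_block(tokens, block_size))
    return tokens

print(generate(num_blocks=3, block_size=4))
```

The point of the sketch is the loop structure: latency scales with the number of blocks, not with the number of denoising iterations per block as in standard diffusion decoding.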

Abstract

Chest X-ray report generation (CXR-RG) has the potential to substantially alleviate radiologists' workload. However, conventional autoregressive vision-language models (VLMs) suffer from high inference latency due to sequential token decoding. Diffusion-based models offer a promising alternative through parallel generation, but they still require multiple denoising iterations. Compressing multi-step denoising to a single step could further reduce latency, but often degrades textual coherence due to the mean-field bias introduced by token-factorized denoisers. To address this challenge, we propose ECHO, an efficient diffusion-based VLM (dVLM) for chest X-ray report generation. ECHO enables stable one-step-per-block inference via a novel Direct Conditional Distillation (DCD) framework, which mitigates the mean-field limitation by constructing unfactorized supervision from on-policy diffusion trajectories to encode joint token dependencies. In addition, we introduce a Response-Asymmetric Diffusion (RAD) training strategy that further improves training efficiency while maintaining model effectiveness. Extensive experiments demonstrate that ECHO surpasses state-of-the-art autoregressive methods, improving RaTE and SemScore by 64.33% and 60.58% respectively, while achieving an 8× inference speedup without compromising clinical accuracy.