AI Navigate

Multi-Modal Multi-Agent Reinforcement Learning for Radiology Report Generation: Radiologist-Like Workflow with Clinically Verifiable Rewards

arXiv cs.LG / 3/19/2026


Key Points

  • MARL-Rad is a multi-modal, multi-agent reinforcement learning framework for radiology report generation that coordinates region-specific agents with a global integrating agent.
  • The system is trained jointly and optimized via clinically verifiable rewards, avoiding single-model RL or post-hoc agentization of independent models.
  • Evaluations on the MIMIC-CXR and IU X-ray datasets show MARL-Rad achieves state-of-the-art clinical efficacy (CE) performance on metrics such as RadGraph, CheXbert, and GREEN.
  • Additional analyses indicate MARL-Rad enhances laterality consistency and produces more accurate, detail-informed radiology reports.

Abstract

We propose MARL-Rad, a novel multi-modal multi-agent reinforcement learning framework for radiology report generation that coordinates region-specific agents and a global integrating agent, optimized via clinically verifiable rewards. Unlike prior single-model reinforcement learning or post-hoc agentization of independently trained models, our method jointly trains multiple agents and optimizes the entire agent system through reinforcement learning. Experiments on the MIMIC-CXR and IU X-ray datasets show that MARL-Rad consistently improves clinical efficacy (CE) metrics such as RadGraph, CheXbert, and GREEN scores, achieving state-of-the-art CE performance. Further analyses confirm that MARL-Rad enhances laterality consistency and produces more accurate, detail-informed reports.
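To make the "clinically verifiable reward" idea concrete, here is a minimal, hypothetical sketch of how such a reward could combine region-specific agents with a global integrating agent. It uses a toy entity-overlap F1 as a stand-in for RadGraph/CheXbert-style scoring; the function names, the keyword vocabulary, and the `alpha` blending weight are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a clinically verifiable reward for a multi-agent
# report generator. The entity extractor below is a toy keyword matcher
# standing in for a real clinical-entity scorer such as RadGraph.

# Toy "clinical entity" vocabulary (illustrative only).
VOCAB = {"effusion", "pneumothorax", "cardiomegaly", "opacity",
         "left", "right", "bilateral", "consolidation"}

def extract_entities(report: str) -> set:
    """Return the clinical keywords mentioned in a report (toy extractor)."""
    tokens = {t.strip(".,").lower() for t in report.split()}
    return tokens & VOCAB

def entity_f1(generated: str, reference: str) -> float:
    """F1 overlap between generated and reference entity sets (verifiable)."""
    g, r = extract_entities(generated), extract_entities(reference)
    if not g and not r:
        return 1.0
    tp = len(g & r)
    prec = tp / len(g) if g else 0.0
    rec = tp / len(r) if r else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def system_reward(regional_reports, global_report, reference, alpha=0.5):
    """Blend per-region rewards with the global agent's reward so the whole
    agent system is jointly optimized toward one verifiable signal."""
    regional = sum(entity_f1(rep, reference) for rep in regional_reports)
    regional /= max(len(regional_reports), 1)
    return alpha * regional + (1 - alpha) * entity_f1(global_report, reference)
```

In an RL training loop, a scalar reward of this shape could be fed to a policy-gradient update for every agent at once, which is what distinguishes joint multi-agent training from optimizing each region model independently.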