DGRNet: Disagreement-Guided Refinement for Uncertainty-Aware Brain Tumor Segmentation

arXiv cs.CV / 3/24/2026


Key Points

  • The paper introduces DGRNet, a framework for brain tumor MRI segmentation that targets two gaps in current deep learning methods: unreliable uncertainty estimates and limited use of radiology report text.
  • DGRNet uses a shared encoder-decoder with four lightweight view-specific adapters to produce diverse predictions in a single forward pass, enabling multi-view disagreement-based uncertainty quantification.
  • It builds disagreement maps to locate high-uncertainty regions and then selectively refines the segmentation using text-conditioned guidance from clinical reports.
  • A diversity-preserving training approach (pairwise similarity penalties and gradient isolation) is proposed to prevent view collapse and maintain prediction diversity.
  • Experiments on the TextBraTS dataset report improved performance over prior state of the art, with +2.4% Dice and an 11% reduction in HD95, alongside uncertainty outputs described as meaningful for deployment.
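The multi-view disagreement idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each of the four adapters emits a per-voxel foreground probability map, and it uses variance across views as a stand-in disagreement measure (the paper's exact formulation may differ). The threshold value is likewise a placeholder.

```python
import numpy as np

def disagreement_map(view_probs, thresh=0.1):
    """Voxel-wise disagreement across view-specific predictions.

    view_probs: (V, H, W) array of per-view foreground probabilities,
    e.g. V=4 adapter outputs from one forward pass. Variance across
    views serves here as an illustrative disagreement measure.
    """
    disagreement = view_probs.var(axis=0)   # high where the views differ
    fused = view_probs.mean(axis=0)         # simple fused segmentation
    refine_mask = disagreement > thresh     # candidate regions for
                                            # text-conditioned refinement
    return fused, disagreement, refine_mask

# Toy example: four views on a 2x2 "image". The views agree on three
# pixels and disagree strongly on pixel (1, 0).
views = np.array([
    [[0.9, 0.1], [0.9, 0.9]],
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.8, 0.2], [0.8, 0.9]],
    [[0.9, 0.1], [0.2, 0.9]],
])
fused, dis, mask = disagreement_map(views)
# mask flags only the contested pixel (1, 0) for refinement
```

Because the views come from lightweight adapters on a shared backbone, all of `view_probs` is available from a single forward pass, which is what makes this cheaper than ensemble- or dropout-based uncertainty estimation.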

Abstract

Accurate brain tumor segmentation from MRI scans is critical for diagnosis and treatment planning. Despite the strong performance of recent deep learning approaches, two fundamental limitations remain: (1) the lack of reliable uncertainty quantification in single-model predictions, which is essential for clinical deployment because the level of uncertainty may affect treatment decisions, and (2) the under-utilization of rich information in radiology reports that can guide segmentation in ambiguous regions. In this paper, we propose the Disagreement-Guided Refinement Network (DGRNet), a novel framework that addresses both limitations through multi-view disagreement-based uncertainty estimation and text-conditioned refinement. DGRNet generates diverse predictions via four lightweight view-specific adapters attached to a shared encoder-decoder, enabling efficient uncertainty quantification within a single forward pass. It then builds disagreement maps to identify regions of high segmentation uncertainty, which are selectively refined according to clinical reports. Moreover, we introduce a diversity-preserving training strategy that combines pairwise similarity penalties with gradient isolation to prevent view collapse. Experimental results on the TextBraTS dataset show that DGRNet improves on state-of-the-art segmentation accuracy, gaining 2.4% in Dice and reducing HD95 by 11%, while providing meaningful uncertainty estimates.
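The diversity-preserving objective described in the abstract can be sketched as a pairwise similarity penalty over the view predictions. This is a hedged illustration under stated assumptions: the penalty form (mean pairwise cosine similarity) and the per-pair treatment are guesses at a plausible realization, not the paper's exact loss, and gradient isolation is only indicated by a comment because it requires an autograd framework.

```python
import numpy as np

def pairwise_diversity_penalty(view_preds):
    """Mean pairwise cosine similarity between flattened view predictions.

    Adding this term to the training loss discourages the adapters from
    collapsing to identical outputs (view collapse). In autograd code,
    gradient isolation would detach one side of each pair (stop-gradient)
    so that each view is pushed away from the others without the pair
    dragging each other toward a shared compromise.
    """
    flat = view_preds.reshape(view_preds.shape[0], -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    num_views = flat.shape[0]
    sims = []
    for i in range(num_views):
        for j in range(i + 1, num_views):
            # gradient isolation: flat[j] would be detached here
            sims.append(float(flat[i] @ flat[j]))
    return sum(sims) / len(sims)

# Collapsed views score 1.0 (maximal penalty); orthogonal views score 0.
collapsed = np.array([[1.0, 0.0], [1.0, 0.0]])
diverse = np.array([[1.0, 0.0], [0.0, 1.0]])
```

In practice such a penalty would be weighted against the segmentation loss, so the views stay diverse only where the data does not pin down a single answer, which is exactly where disagreement-based uncertainty is informative.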