Retinal Layer Segmentation in OCT Images With 2.5D Cross-slice Feature Fusion Module for Glaucoma Assessment

arXiv cs.CV / 3/26/2026


Key Points

  • The paper introduces a 2.5D retinal layer segmentation framework for OCT images aimed at improving glaucoma diagnosis and monitoring by addressing inconsistencies between adjacent B-scans.
  • It adds a novel cross-slice feature fusion (CFF) module to a U-Net-like model to capture inter-slice contextual information without the heavy compute cost of full 3D segmentation.
  • The method is designed to produce more consistent retinal boundary detection across slices, with improved robustness in noisy image regions.
  • Validation on both a clinical dataset and the public DUKE DME dataset shows improved accuracy versus baselines without the CFF module, including 8.56% lower mean absolute distance and 13.92% lower root mean square error.
  • The authors position the approach as a practical balance between contextual awareness and computational efficiency for anatomically reliable automated retinal layer delineation in potential clinical workflows.
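The "2.5D" idea in the points above can be made concrete by how such inputs are typically assembled: each target B-scan is stacked with its neighboring slices along the channel dimension, so a 2D network sees inter-slice context without 3D convolutions. The sketch below is illustrative only (the paper's exact input scheme is not given in this summary), and the function name and edge-replication policy are assumptions.

```python
def stack_neighbors(volume, i, k=1):
    """Gather the 2k+1 B-scans centered on slice i, replicating at volume edges.

    volume: list of 2D slices (each a list of pixel rows); i: target slice index.
    Channel-stacking adjacent slices is a common 2.5D input scheme; the paper's
    actual pipeline may differ (illustrative sketch, not the authors' code).
    """
    n = len(volume)
    # Clamp out-of-range neighbor indices so edge slices reuse their nearest neighbor
    idx = [min(max(i + d, 0), n - 1) for d in range(-k, k + 1)]
    return [volume[j] for j in idx]

# Toy 4-slice "volume" of 2x2 B-scans filled with the slice index
vol = [[[s, s], [s, s]] for s in range(4)]
stack = stack_neighbors(vol, 0, k=1)  # edge slice: missing neighbor is replicated
```

A 2D segmentation backbone then treats the three stacked slices as input channels, which is where the compute savings over full 3D segmentation come from.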

Abstract

For accurate glaucoma diagnosis and monitoring, reliable retinal layer segmentation in OCT images is essential. However, existing 2D segmentation methods often suffer from slice-to-slice inconsistencies due to the lack of contextual information across adjacent B-scans. 3D segmentation methods capture slice-to-slice context more effectively, but they demand substantial computational resources. To address these limitations, we propose a 2.5D segmentation framework that incorporates a novel cross-slice feature fusion (CFF) module into a U-Net-like architecture. The CFF module fuses inter-slice features to effectively capture contextual information, enabling consistent boundary detection across slices and improved robustness in noisy regions. The framework was validated on both a clinical dataset and the publicly available DUKE DME dataset. Compared with segmentation methods lacking the CFF module, the proposed method achieved an 8.56% reduction in mean absolute distance and a 13.92% reduction in root mean square error, demonstrating improved segmentation accuracy and robustness. Overall, the proposed 2.5D framework balances contextual awareness and computational efficiency, enabling anatomically reliable retinal layer delineation for automated glaucoma evaluation and potential clinical applications.
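The internals of the CFF module are not described in this summary. To convey the general idea of fusing inter-slice features, the toy sketch below combines per-slice feature maps with a normalized weighted sum; the weights stand in for what a real module would learn, and the function name and fusion rule are assumptions, not the authors' design.

```python
def fuse_slices(feats, weights):
    """Fuse per-slice feature maps via a normalized weighted sum.

    feats: list of 2D feature maps (lists of rows of floats), one per slice.
    weights: one scalar per slice (would be learned in an actual CFF module).
    A deliberately simple stand-in for cross-slice feature fusion; the paper's
    CFF module is not specified here, so this is purely illustrative.
    """
    total = sum(weights)
    h, w = len(feats[0]), len(feats[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fmap, wt in zip(feats, weights):
        for r in range(h):
            for c in range(w):
                fused[r][c] += (wt / total) * fmap[r][c]
    return fused

feats = [[[0.0, 0.0]], [[1.0, 1.0]], [[2.0, 2.0]]]  # three 1x2 neighbor feature maps
fused = fuse_slices(feats, [1.0, 2.0, 1.0])  # center slice weighted highest
```

Weighting the center slice most strongly while still mixing in its neighbors is one way such a fusion can smooth boundary predictions across adjacent B-scans, which is the consistency benefit the abstract claims.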