GPAFormer: Graph-guided Patch Aggregation Transformer for Efficient 3D Medical Image Segmentation
arXiv cs.CV / 4/9/2026
Key Points
- The paper introduces GPAFormer, a lightweight transformer-based architecture aimed at efficient and accurate 3D medical image segmentation across multiple modalities and organs.
- GPAFormer’s design centers on two modules: MASA (multi-scale attention-guided stacked aggregation) for handling structures at different sizes, and MPGA (mutual-aware patch graph aggregator) for graph-guided aggregation using patch feature similarity and spatial adjacency.
- Experiments on public CT and MRI benchmarks (BTCV, Synapse, ACDC, BraTS) report state-of-the-art segmentation performance while using only 1.81M parameters.
- Reported Dice similarity coefficients (DSC) include 75.70% on BTCV, 81.20% on Synapse, 89.32% on ACDC, and 82.74% on BraTS, indicating a strong balance between accuracy and compactness.
- The method is presented as practical for real-world deployment, reporting sub-second inference on a consumer GPU for a BTCV validation case and targeting resource-constrained clinical environments.
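To make the MPGA idea concrete, here is a minimal, hypothetical sketch of graph-guided patch aggregation: each patch aggregates features from spatially adjacent patches, weighted by feature similarity. The function name, the top-k neighbor rule, and the residual sum are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def patch_graph_aggregate(feats, coords, k=4):
    """Illustrative sketch (not the paper's code): combine patch feature
    similarity with spatial adjacency to aggregate neighbor features.

    feats:  (N, D) array of patch feature vectors
    coords: (N, 3) array of integer patch-grid positions
    k:      number of most-similar spatial neighbors to keep (assumed)
    """
    n = feats.shape[0]
    # Cosine similarity between patch features.
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T
    # Spatial adjacency: Chebyshev distance <= 1 on the 3D patch grid.
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).max(axis=2)
    adj = (dist <= 1) & (dist > 0)
    out = np.empty_like(feats)
    for i in range(n):
        nbrs = np.where(adj[i])[0]
        # Keep the top-k most feature-similar spatial neighbors.
        top = nbrs[np.argsort(sim[i, nbrs])[::-1][:k]]
        # Softmax-weight neighbors by similarity, then add a residual.
        w = np.exp(sim[i, top])
        w /= w.sum()
        out[i] = feats[i] + (w[:, None] * feats[top]).sum(axis=0)
    return out
```

Restricting candidate neighbors to the spatial grid before ranking by similarity keeps the graph sparse, which is consistent with the paper's emphasis on efficiency at 1.81M parameters.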