Graph-based Semantic Calibration Network for Unaligned UAV RGBT Image Semantic Segmentation and A Large-scale Benchmark

arXiv cs.CV / 30 Apr 2026


Key Points

  • The paper proposes GSCNet, a graph-based semantic calibration network that improves unaligned UAV RGB-T (RGB/Thermal) image semantic segmentation under cross-modal spatial misalignment and fine-grained semantic confusion.
  • It introduces a Feature Decoupling and Alignment Module (FDAM) that separates modality features into shared structural and private perceptual components and performs deformable alignment in the shared space to reduce appearance interference.
  • It also presents a Semantic Graph Calibration Module (SGCM) that encodes hierarchical category taxonomy and co-occurrence regularities as a structured category graph, using graph-attention reasoning to better calibrate visually similar and rare classes.
  • The authors release the URTF benchmark, reportedly the largest fine-grained dataset for unaligned UAV RGB-T segmentation, with 25,000+ image pairs across 61 categories exhibiting realistic cross-modal misalignment, and show GSCNet achieves strong improvements over existing methods.
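The FDAM idea described above, decoupling each modality into shared and private components and aligning only in the shared space, can be illustrated with a toy sketch. This is not the paper's implementation: the module names, channel splits, and the flow-style warp (a simplification of deformable alignment, using `grid_sample` instead of deformable convolution) are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FDAMSketch(nn.Module):
    """Toy sketch (not the paper's code): split each modality into
    shared structural and private perceptual parts, then warp the
    thermal shared features toward the RGB shared features with a
    predicted offset field, so alignment is estimated in the shared
    subspace where modality appearance differences are reduced."""

    def __init__(self, c):
        super().__init__()
        # 1x1 convs decouple each modality into shared + private streams
        self.shared_rgb = nn.Conv2d(c, c, 1)
        self.private_rgb = nn.Conv2d(c, c, 1)
        self.shared_thm = nn.Conv2d(c, c, 1)
        self.private_thm = nn.Conv2d(c, c, 1)
        # offset head predicts a 2-channel (dx, dy) field from the
        # concatenated shared features of both modalities
        self.offset = nn.Conv2d(2 * c, 2, 3, padding=1)

    def forward(self, rgb, thm):
        s_r, p_r = self.shared_rgb(rgb), self.private_rgb(rgb)
        s_t, p_t = self.shared_thm(thm), self.private_thm(thm)
        flow = self.offset(torch.cat([s_r, s_t], dim=1))  # (B, 2, H, W)
        b, _, h, w = flow.shape
        # base sampling grid in normalized [-1, 1] coordinates
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
            indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        grid = grid + flow.permute(0, 2, 3, 1)
        # warp thermal shared features onto the RGB frame
        s_t_aligned = F.grid_sample(s_t, grid, align_corners=True)
        # fuse aligned shared features with both private streams
        return s_r + s_t_aligned + p_r + p_t

fdam = FDAMSketch(8)
out = fdam(torch.randn(2, 8, 16, 16), torch.randn(2, 8, 16, 16))
print(out.shape)  # torch.Size([2, 8, 16, 16])
```

The key design point this sketch mirrors is that the offset field is predicted only from the shared components, so private appearance cues (thermal intensity, RGB texture) cannot corrupt the spatial correction.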

Abstract

Fine-grained RGBT image semantic segmentation is crucial for all-weather unmanned aerial vehicle (UAV) scene understanding. However, UAV RGBT semantic segmentation faces two coupled challenges: cross-modal spatial misalignment caused by sensor parallax and platform vibration, and severe semantic confusion among fine-grained ground objects under top-down aerial views. To address these issues, we propose a Graph-based Semantic Calibration Network (GSCNet) for unaligned UAV RGBT image semantic segmentation. Specifically, we design a Feature Decoupling and Alignment Module (FDAM) that decouples each modality into shared structural and private perceptual components and performs deformable alignment in the shared subspace, enabling robust spatial correction with reduced modality appearance interference. Moreover, we propose a Semantic Graph Calibration Module (SGCM) that explicitly encodes the hierarchical taxonomy and co-occurrence regularities among ground-object categories in UAV scenes into a structured category graph, and incorporates these priors into graph-attention reasoning to calibrate predictions of visually similar and rare categories. In addition, we construct the Unaligned RGB-Thermal Fine-grained (URTF) benchmark, which is, to the best of our knowledge, the largest and most fine-grained benchmark for unaligned UAV RGBT image semantic segmentation, containing over 25,000 image pairs across 61 categories with realistic cross-modal misalignment. Extensive experiments on URTF demonstrate that GSCNet significantly outperforms state-of-the-art methods, with notable gains on fine-grained categories. The dataset is available at https://github.com/mmic-lcl/Datasets-and-benchmark-code.
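The SGCM mechanism, propagating evidence between related categories through graph attention over a structured category graph, can be sketched in a few lines. This is a minimal illustration, not the paper's module: the class embeddings, the adjacency built from hypothetical taxonomy/co-occurrence edges, and the blending weight `alpha` are all assumptions.

```python
import numpy as np

def graph_attention_calibrate(logits, class_emb, adj, alpha=0.5):
    """Toy sketch of graph-attention calibration (not the paper's code).

    logits:    (N, C) per-pixel class scores
    class_emb: (C, D) learnable class embeddings
    adj:       (C, C) binary category graph; adj[i, j] = 1 when classes
               i and j are related (taxonomy siblings or frequent
               co-occurrence in UAV scenes)
    Scores flow only along graph edges, so a rare class can borrow
    evidence from visually similar or co-occurring categories.
    """
    # attention between class embeddings, masked by the category graph
    scores = class_emb @ class_emb.T           # (C, C)
    scores = np.where(adj > 0, scores, -1e9)   # keep graph edges only
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)    # row-wise softmax
    # propagate per-pixel scores along attended edges and blend
    propagated = logits @ attn.T               # (N, C)
    return (1 - alpha) * logits + alpha * propagated

rng = np.random.default_rng(0)
C, N, D = 5, 10, 4
logits = rng.normal(size=(N, C))
emb = rng.normal(size=(C, D))
# chain graph with self-loops: each class relates to its neighbors
adj = np.eye(C) + np.diag(np.ones(C - 1), 1) + np.diag(np.ones(C - 1), -1)
cal = graph_attention_calibrate(logits, emb, adj)
print(cal.shape)  # (10, 5)
```

Masking the attention with the category graph is what makes the priors "structured": unrelated classes exchange no evidence, so calibration cannot smear probability mass across the whole label set.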