Robust Multispectral Semantic Segmentation under Missing or Full Modalities via Structured Latent Projection

arXiv cs.CV / 4/20/2026


Key Points

  • The paper introduces CBC-SLP, a multimodal semantic segmentation model for remote sensing that remains robust when some sensor modalities are missing due to real-world conditions.
  • Unlike prior approaches that rely on a shared representation (which can hurt performance when all modalities are present), CBC-SLP preserves both modality-invariant and modality-specific information.
  • The authors propose a structured latent projection design that transfers shared and modality-specific latent components to the decoder adaptively based on a random modality-availability mask.
  • Experiments on three multimodal remote sensing datasets show CBC-SLP outperforms state-of-the-art methods in both full-modality and missing-modality settings.
  • The method also empirically recovers complementary information that may be lost when forcing all modalities into a single shared representation.

Abstract

Multimodal remote sensing data provide complementary information for semantic segmentation, but in real-world deployments some modalities may be unavailable due to sensor failures, acquisition issues, or challenging atmospheric conditions. Existing multimodal segmentation models typically address missing modalities by learning a shared representation across inputs. However, this approach can introduce a trade-off: it compromises modality-specific complementary information and reduces performance when all modalities are available. In this paper, we tackle this limitation with CBC-SLP, a multimodal semantic segmentation model designed to preserve both modality-invariant and modality-specific information. Inspired by theoretical results on modality alignment, which state that perfectly aligned multimodal representations can lead to sub-optimal performance on downstream prediction tasks, we propose a novel structured latent projection approach as an architectural inductive bias. Rather than enforcing this strategy through a loss term, we incorporate it directly into the architecture. In particular, to exploit complementary information effectively while maintaining robustness under random modality dropout, we structure the latent representations into shared and modality-specific components and adaptively transfer them to the decoder according to the random modality availability mask. Extensive experiments on three multimodal remote sensing datasets demonstrate that CBC-SLP consistently outperforms state-of-the-art multimodal models across full- and missing-modality scenarios. Moreover, we empirically demonstrate that the proposed strategy can recover complementary information that may not be preserved in a shared representation. The code is available at https://github.com/iremulku/Multispectral-Semantic-Segmentation-via-Structured-Latent-Projection-CBC-SLP-.
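The abstract describes splitting each modality's latent into shared and modality-specific components and routing them to the decoder according to an availability mask. The sketch below is a minimal NumPy illustration of that idea, not the paper's actual implementation: the fixed split point (`shared_dim`), mean-pooling of shared parts, and zero-filling of missing specific parts are all assumptions made here for clarity.

```python
import numpy as np

def structured_latent_projection(latents, mask, shared_dim):
    """Illustrative shared/specific split and mask-aware fusion.

    latents:    dict modality -> 1-D latent vector (all same length)
    mask:       dict modality -> bool (True if the modality is available)
    shared_dim: assumed layout where the first `shared_dim` entries are
                the shared component and the remainder is modality-specific
    Assumes at least one modality is available.
    """
    shared_parts, specific_parts = [], []
    for name, z in latents.items():
        if mask[name]:
            shared_parts.append(z[:shared_dim])
            specific_parts.append(z[shared_dim:])
        else:
            # Missing modality: it contributes nothing to the shared pool;
            # its specific slot is zero-filled so the decoder input keeps
            # a fixed shape regardless of which modalities dropped out.
            specific_parts.append(np.zeros_like(z[shared_dim:]))
    # Fuse shared components over the *available* modalities only
    # (mean-pooling is a placeholder for the model's learned fusion).
    shared = np.mean(shared_parts, axis=0)
    return np.concatenate([shared] + specific_parts)

# Toy example: two modalities with 6-dim latents, shared_dim = 2.
latents = {"optical": np.arange(6.0), "sar": np.ones(6)}
full = structured_latent_projection(latents, {"optical": True, "sar": True}, 2)
missing = structured_latent_projection(latents, {"optical": True, "sar": False}, 2)
```

Note that the decoder input length (`shared_dim` plus the sum of all specific dimensions) is constant across dropout patterns, which is what lets a single decoder handle both full- and missing-modality inputs.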