Better with Less: Tackling Heterogeneous Multi-Modal Image Joint Pretraining via Conditioned and Degraded Masked Autoencoder

arXiv cs.CV / 4/21/2026

📰 News · Models & Research

Key Points

  • The paper addresses the difficulty of joint pretraining for heterogeneous high-resolution optical and SAR (synthetic aperture radar) images, focusing on a “heterogeneity–resolution paradox” that causes negative transfer when models use rigid alignment.
  • It introduces CoDe-MAE, a “better synergy with less alignment” approach that combines multiple techniques (sketched in the code example after this list) to avoid both feature suppression and feature contamination.
  • Optical-anchored Knowledge Distillation (OKD) regularizes SAR features by mapping speckle noise toward a cleaner semantic manifold, improving robustness.
  • Conditioned Contrastive Learning (CCL) aligns only the shared cross-modal consensus, using a gradient-buffering mechanism that preserves meaningful physical differences between modalities.
  • Cross-Modal Degraded Reconstruction (CDR) strips non-homologous spectral pseudo-features, making the learning target better posed so the model captures structural invariants.
  • Pretrained on 1M samples, the method claims state-of-the-art results and strong data efficiency compared with foundation models scaled on far larger datasets.
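
The paper's exact formulations are not given in this digest, but the three mechanisms map onto familiar loss patterns. Below is a minimal PyTorch sketch of how they might look, assuming a cosine-similarity distillation term for OKD, an InfoNCE contrastive term with a stop-gradient-based buffer for CCL, and a grayscale proxy for the “degraded” reconstruction target in CDR. All function names, the `alpha` parameter, and the degradation choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of CoDe-MAE-style objectives (all names and forms assumed).
import torch
import torch.nn.functional as F

def okd_loss(sar_feat, opt_feat_teacher):
    """Optical-anchored Knowledge Distillation (assumed form):
    pull SAR features toward a frozen optical 'teacher' manifold."""
    # Teacher features are detached so the gradient regularizes only the SAR branch.
    return 1 - F.cosine_similarity(sar_feat, opt_feat_teacher.detach(), dim=-1).mean()

def gradient_buffer(x, alpha=0.1):
    """Assumed gradient-buffering trick: forward value is unchanged, but the
    gradient flowing back through x is scaled down by alpha."""
    return alpha * x + (1 - alpha) * x.detach()

def ccl_loss(opt_proj, sar_proj, temperature=0.07):
    """Conditioned Contrastive Learning (assumed InfoNCE form): align the shared
    consensus, but buffer gradients into the SAR branch so its divergent
    physical signatures are not overwritten by the alignment pressure."""
    sar_proj = gradient_buffer(sar_proj)
    opt_proj = F.normalize(opt_proj, dim=-1)
    sar_proj = F.normalize(sar_proj, dim=-1)
    logits = opt_proj @ sar_proj.t() / temperature      # (B, B) pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)             # matched pairs on the diagonal

def cdr_loss(pred_pixels, optical_img):
    """Cross-Modal Degraded Reconstruction (assumed target): reconstruct a
    spectrally degraded optical image, here a crude luminance average,
    stripping non-homologous color/spectral pseudo-features."""
    degraded = optical_img.mean(dim=1, keepdim=True)    # (B, 1, H, W) grayscale proxy
    return F.mse_loss(pred_pixels, degraded)
```

In a real pipeline these terms would be weighted and summed with the standard masked-autoencoder reconstruction loss; the paper presumably tunes that balance.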

Abstract

Learning robust representations across extremely heterogeneous modalities remains a fundamental challenge in multi-modal vision. As a critical and profound instantiation of this challenge, high-resolution (HR) joint optical and synthetic aperture radar (SAR) pretraining seeks modality synergy to mutually enhance single-source representations; yet its potential is severely hindered by the Heterogeneity-Resolution Paradox: finer spatial scales drastically amplify the physical divergence between complex radar geometries and non-homologous optical textures. Consequently, migrating medium-resolution-oriented rigid alignment paradigms to HR scenarios triggers either severe feature suppression to force equivalence, or feature contamination driven by extreme epistemic uncertainty. Both extremes inevitably culminate in profound representation degradation and negative transfer. To overcome this bottleneck, we propose CoDe-MAE, pioneering a “better synergy with less alignment” philosophy. First, Optical-anchored Knowledge Distillation (OKD) implicitly regularizes SAR's speckle noise by mapping it into a pure semantic manifold. Building on this, Conditioned Contrastive Learning (CCL) utilizes a gradient buffering mechanism to align shared consensus while safely preserving divergent physical signatures. Concurrently, Cross-Modal Degraded Reconstruction (CDR) deliberately strips non-homologous spectral pseudo-features, truncating the inherently ill-posed mapping to capture true structural invariants. Extensive analyses validate our theoretical claims. Pretrained on 1M samples, CoDe-MAE demonstrates remarkable data efficiency, successfully preventing representation degradation and establishing new state-of-the-art performance across diverse single- and bi-modal downstream tasks, substantially outperforming foundation models scaled on vastly larger datasets.
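
As a compact summary, one plausible way the abstract's objectives could combine is a weighted sum alongside the standard masked-autoencoder reconstruction term. The weights λ below are hypothetical placeholders, not values from the paper:

```latex
\mathcal{L}_{\mathrm{total}}
  = \mathcal{L}_{\mathrm{MAE}}
  + \lambda_{\mathrm{OKD}} \, \mathcal{L}_{\mathrm{OKD}}
  + \lambda_{\mathrm{CCL}} \, \mathcal{L}_{\mathrm{CCL}}
  + \lambda_{\mathrm{CDR}} \, \mathcal{L}_{\mathrm{CDR}}
```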