REALM: An RGB and Event Aligned Latent Manifold for Cross-Modal Perception

arXiv cs.AI / 5/4/2026


Key Points

  • REALM is a cross-modal learning framework that projects event-camera representations into the pretrained latent space of RGB foundation models to improve modality generalization.
  • By using low-rank adaptation (LoRA) rather than task-specific training, REALM bridges the gap between RGB and event streams while leveraging the geometric and semantic priors of frozen RGB backbones.
  • The approach is designed to map events into a ViT-based foundation latent space and supports downstream tasks such as depth estimation and semantic segmentation via transferable linear heads.
  • REALM’s key capability is zero-shot reuse of complex, image-trained decoders (e.g., MASt3R) directly on raw event data, avoiding retraining for event inputs.
  • The paper reports state-of-the-art performance on wide-baseline feature matching, outperforming specialized event-processing architectures, with code/models planned for release after acceptance.
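The core mechanism in the points above is low-rank adaptation of a frozen RGB backbone so that event features land in its pretrained latent space. A minimal numerical sketch of the LoRA update, with all dimensions and names illustrative rather than taken from the paper:

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA) on one frozen linear
# projection, the kind of update REALM uses to bridge event features
# into a frozen RGB backbone's latent space. Shapes and names here
# are hypothetical, not the paper's actual architecture.

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 32, 4      # illustrative dimensions
alpha = 8.0                        # LoRA scaling factor

W_frozen = rng.standard_normal((d_out, d_in))  # pretrained RGB weight, never updated
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Frozen path plus low-rank correction: W x + (alpha/rank) * B A x."""
    return W_frozen @ x + (alpha / rank) * (B @ (A @ x))

x_event = rng.standard_normal(d_in)  # e.g. a voxelized event-stream feature

# With B initialized to zero the adapted layer starts exactly at the
# frozen RGB behavior; only A and B are trained to close the modality gap.
y = lora_forward(x_event)
assert np.allclose(y, W_frozen @ x_event)
```

The zero initialization of `B` is the standard LoRA trick: the adapter is a no-op at the start of training, so the frozen backbone's geometric and semantic priors are preserved until the low-rank path learns a correction for events.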

Abstract

Event cameras provide several unique advantages over standard frame-based sensors, including high temporal resolution, low latency, and robustness to extreme lighting. However, existing learning-based approaches for event processing are typically confined to narrow, task-specific silos and lack the ability to generalize across modalities. We address this gap with REALM, a cross-modal framework that learns an RGB and Event Aligned Latent Manifold by projecting event representations into the pretrained latent space of RGB foundation models. Instead of task-specific training, we leverage low-rank adaptation (LoRA) to bridge the modality gap, effectively unlocking the geometric and semantic priors of frozen RGB backbones for asynchronous event streams. We demonstrate that REALM maps events faithfully into the ViT-based foundation latent space. Our method performs downstream tasks such as depth estimation and semantic segmentation by simply transferring linear heads trained on the RGB teacher. Most significantly, REALM enables the direct, zero-shot application of complex, frozen image-trained decoders, such as MASt3R, to raw event data. We demonstrate state-of-the-art performance in wide-baseline feature matching, significantly outperforming specialized architectures. Code and models will be released upon acceptance.
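The transfer claim in the abstract rests on alignment: once event latents sit close to their RGB counterparts on the shared manifold, a head trained only on RGB latents applies to events unchanged. A toy sketch of that property, assuming (hypothetically) well-aligned latents and an illustrative linear classification head:

```python
import numpy as np

# Toy illustration of zero-shot head transfer on an aligned manifold.
# The latents and head weights are synthetic; the point is only that a
# head fit on RGB features needs no event-specific retraining once the
# event encoder is aligned, as REALM's linear-head transfer relies on.

rng = np.random.default_rng(1)
d, n_cls = 32, 5

z_rgb = rng.standard_normal(d)                    # RGB teacher latent
z_event = z_rgb + 1e-6 * rng.standard_normal(d)   # aligned event latent (by assumption)

W_head = rng.standard_normal((n_cls, d))          # linear head trained on RGB only

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Alignment is near-perfect, so the RGB head's decision carries over.
assert cosine(z_rgb, z_event) > 0.99
assert np.argmax(W_head @ z_rgb) == np.argmax(W_head @ z_event)
```

In the paper's setting the alignment is learned (via the LoRA-adapted event branch) rather than assumed, and the same logic extends from linear heads to frozen image-trained decoders such as MASt3R.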