RADIO-ViPE: Online Tightly Coupled Multi-Modal Fusion for Open-Vocabulary Semantic SLAM in Dynamic Environments

arXiv cs.CV · April 30, 2026


Key Points

  • RADIO-ViPE is a new online semantic SLAM system that performs geometry-aware open-vocabulary grounding by linking natural-language queries to localized 3D regions and objects in dynamic environments (a toy similarity-lookup sketch follows this list).
  • Unlike prior methods that depend on calibrated, posed RGB-D inputs, it works directly from raw monocular RGB video without requiring camera intrinsics, depth sensors, or pose initialization.
  • The approach tightly couples multi-modal vision-language embeddings from agglomerative foundation models (e.g., RADIO) with geometric scene information during initialization, optimization, and factor-graph construction to improve cross-modal map consistency.
  • It uses adaptive robust kernels to handle both actively moving objects and agent-displaced scene changes (such as rearranged furniture during ego-centric sessions).
  • Experiments show state-of-the-art performance on the dynamic TUM-RGBD benchmark and competitive results versus offline open-vocabulary methods that assume calibrated sensors and mostly static scenes.
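
To make the grounding step above concrete, here is a minimal sketch of matching a language query to per-object map embeddings by cosine similarity. Everything in it (the `embed_text` stand-in, the toy object map, the `ground_query` helper) is a hypothetical illustration, not code from the paper; a real system would use the actual RADIO/CLIP-style encoders and the fused per-object embeddings stored in the 3D map.

```python
import numpy as np

def embed_text(query: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a real text encoder (e.g., a CLIP-style text tower aligned
    # with the visual features); here we just hash the query into a unit
    # vector so the demo runs end to end.
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def ground_query(query: str, object_embeddings: dict, top_k: int = 3):
    # Rank 3D map objects by cosine similarity between the query embedding
    # and each object's fused vision-language embedding.
    q = embed_text(query)
    scores = {
        name: float(emb @ q / np.linalg.norm(emb))
        for name, emb in object_embeddings.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy map: object id -> aggregated per-object embedding.
rng = np.random.default_rng(0)
toy_map = {f"object_{i}": rng.normal(size=64) for i in range(10)}

print(ground_query("the red chair near the window", toy_map))
```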

Abstract

We present RADIO-ViPE (Reduce All Domains Into One -- Video Pose Engine), an online semantic SLAM system that enables geometry-aware open-vocabulary grounding, associating arbitrary natural language queries with localized 3D regions and objects in dynamic environments. Unlike existing approaches that require calibrated, posed RGB-D input, RADIO-ViPE operates directly on raw monocular RGB video streams, requiring no prior camera intrinsics, depth sensors, or pose initialization. The system tightly couples multi-modal embeddings -- spanning vision and language -- derived from agglomerative foundation models (e.g., RADIO) with geometric scene information. This coupling takes place during initialization, optimization, and factor-graph construction, improving the cross-modal consistency of the map. The optimization is wrapped in adaptive robust kernels designed to handle both actively moving objects and agent-displaced scene elements (e.g., furniture rearranged during an ego-centric session). Experiments demonstrate that RADIO-ViPE achieves state-of-the-art results on the dynamic TUM-RGBD benchmark while maintaining competitive performance against offline open-vocabulary methods that rely on calibrated data and static-scene assumptions. RADIO-ViPE bridges a critical gap in real-world deployment, enabling robust open-vocabulary semantic grounding for autonomous robotics and unconstrained in-the-wild video streams. Project page: https://be2rlab.github.io/radio_vipe
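
The abstract's "tight coupling" of embeddings with geometry can be read as jointly optimizing geometric and semantic residuals in the same factor. Below is a minimal sketch of one way such a coupled factor could look; the function names, the residual stacking, and the weight `w_sem` are assumptions for illustration, not the paper's actual factor design.

```python
import numpy as np

def reprojection_residual(K, T_wc, p_w, uv_obs):
    # Standard pinhole reprojection error: project world point p_w through the
    # world-to-camera pose T_wc (4x4) and intrinsics K, compare to pixel uv_obs.
    p_c = (T_wc @ np.append(p_w, 1.0))[:3]
    uvw = K @ p_c
    return uvw[:2] / uvw[2] - uv_obs

def embedding_residual(f_landmark, f_obs):
    # Cosine-distance residual between a landmark's stored embedding and the
    # embedding observed for it in the current frame.
    cos = f_landmark @ f_obs / (np.linalg.norm(f_landmark) * np.linalg.norm(f_obs))
    return np.array([1.0 - cos])

def coupled_residual(K, T_wc, p_w, uv_obs, f_landmark, f_obs, w_sem=0.5):
    # Stack geometric and semantic terms into one factor, so the optimizer is
    # pushed toward a map that is consistent in both modalities.
    return np.concatenate([
        reprojection_residual(K, T_wc, p_w, uv_obs),
        w_sem * embedding_residual(f_landmark, f_obs),
    ])

# Toy usage: identity pose, a point one meter in front of the camera, and
# perfectly agreeing embeddings, so every residual term is zero.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
r = coupled_residual(K, np.eye(4), np.array([0.1, 0.0, 1.0]),
                     np.array([370.0, 240.0]), np.ones(8), np.ones(8))
print(r)  # -> [0. 0. 0.]
```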
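The "adaptive robust kernels" mentioned above are a standard ingredient of iteratively reweighted least squares. The sketch below assumes a Huber-style kernel whose scale is re-estimated each iteration from the median absolute deviation of the residuals; this is a generic construction, not the paper's specific kernel.

```python
import numpy as np

def adaptive_huber_weights(residuals: np.ndarray, k: float = 1.345) -> np.ndarray:
    # IRLS weights from a Huber kernel whose scale adapts each iteration to
    # the current residual distribution via the median absolute deviation.
    med = np.median(residuals)
    sigma = 1.4826 * np.median(np.abs(residuals - med)) + 1e-9  # robust scale
    t = np.abs(residuals) / (k * sigma)
    # Inliers (t <= 1) keep full weight; large residuals -- e.g., features on
    # a walking person or a chair the agent just moved -- get weight 1/t < 1.
    return np.where(t <= 1.0, 1.0, 1.0 / t)

# Toy example: mostly static-scene residuals plus a few dynamic-object outliers.
rng = np.random.default_rng(1)
res = np.concatenate([rng.normal(0.0, 0.5, 50), [8.0, 12.0, -9.0]])
w = adaptive_huber_weights(res)
print("mean weight on inliers:", round(float(w[:50].mean()), 3))
print("weights on outliers:   ", np.round(w[50:], 3))
```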