Beyond Textual Knowledge: Leveraging Multimodal Knowledge Bases for Enhancing Vision-and-Language Navigation

arXiv cs.CV / March 31, 2026


Key Points

  • The paper introduces BTK (Beyond Textual Knowledge), a vision-and-language navigation framework designed to better capture semantic cues and align them with visual observations in unseen environments.
  • BTK combines environment-specific textual knowledge with generative image knowledge bases by using Qwen3-4B to extract goal phrases, Flux-Schnell to build R2R-GP and REVERIE-GP, and BLIP-2 to create a panoramic-view-derived textual knowledge base.
  • The method integrates these multimodal knowledge bases through a Goal-Aware Augmentor and a Knowledge Augmentor to improve semantic grounding and cross-modal alignment.
  • Experiments on R2R (7,189 trajectories) and REVERIE (21,702 instructions) show BTK outperforms existing baselines on unseen test splits, with SR gains of +5% (R2R) and +2.07% (REVERIE), and SPL gains of +4% (R2R) and +3.69% (REVERIE).
  • The authors provide source code for BTK at the linked GitHub repository, supporting reproducibility and further research on multimodal knowledge augmentation for VLN.
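The knowledge-base construction described in the bullets above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the model calls are replaced by trivial stand-ins (a keyword heuristic in place of Qwen3-4B goal-phrase extraction, placeholder strings in place of Flux-Schnell images and BLIP-2 captions), and all function and variable names are hypothetical.

```python
# Hypothetical sketch of BTK-style knowledge-base construction.
# Real models (Qwen3-4B, Flux-Schnell, BLIP-2) are replaced by stand-ins.

def extract_goal_phrases(instruction: str) -> list[str]:
    """Stand-in for Qwen3-4B: grab the phrase after common goal markers."""
    phrases = []
    lowered = instruction.lower()
    for marker in ("to the ", "find the ", "near the "):
        idx = lowered.find(marker)
        if idx != -1:
            tail = lowered[idx + len(marker):]
            phrases.append(tail.split(".")[0].split(",")[0].strip())
    return phrases

def generate_goal_image(phrase: str) -> str:
    """Stand-in for Flux-Schnell: would return a generated goal image."""
    return f"<generated image of '{phrase}'>"

def caption_panorama(view_id: str) -> str:
    """Stand-in for BLIP-2: would caption a panoramic view."""
    return f"<caption for panorama {view_id}>"

def build_knowledge_bases(instructions, panorama_ids):
    image_kb = {}  # goal phrase -> generated image (cf. R2R-GP / REVERIE-GP)
    text_kb = {}   # panorama id -> caption (environment-specific cues)
    for instr in instructions:
        for phrase in extract_goal_phrases(instr):
            image_kb.setdefault(phrase, generate_goal_image(phrase))
    for vid in panorama_ids:
        text_kb[vid] = caption_panorama(vid)
    return image_kb, text_kb

image_kb, text_kb = build_knowledge_bases(
    ["Walk down the hall to the kitchen. Stop near the table."],
    ["pano_001"],
)
```

In the full method, the two resulting stores would feed the Goal-Aware Augmentor (image side) and the Knowledge Augmentor (text side) during navigation; this sketch only illustrates the offline construction step.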

Abstract

Vision-and-Language Navigation (VLN) requires an agent to navigate through complex unseen environments based on natural language instructions. However, existing methods often struggle to effectively capture key semantic cues and accurately align them with visual observations. To address this limitation, we propose Beyond Textual Knowledge (BTK), a VLN framework that synergistically integrates environment-specific textual knowledge with generative image knowledge bases. BTK employs Qwen3-4B to extract goal-related phrases and utilizes Flux-Schnell to construct two large-scale image knowledge bases: R2R-GP and REVERIE-GP. Additionally, we leverage BLIP-2 to construct a large-scale textual knowledge base derived from panoramic views, providing environment-specific semantic cues. These multimodal knowledge bases are effectively integrated via the Goal-Aware Augmentor and Knowledge Augmentor, significantly enhancing semantic grounding and cross-modal alignment. Extensive experiments on the R2R dataset with 7,189 trajectories and the REVERIE dataset with 21,702 instructions demonstrate that BTK significantly outperforms existing baselines. On the test unseen splits of R2R and REVERIE, SR increased by 5% and 2.07% respectively, and SPL increased by 4% and 3.69% respectively. The source code is available at https://github.com/yds3/IPM-BTK/.