SpaAct: Spatially-Activated Transition Learning with Curriculum Adaptation for Vision-Language Navigation

arXiv cs.CV / 5/1/2026


Key Points

  • The paper argues that vision-language models need both backward action reasoning (“why”) and forward transition prediction (“how”) to be effectively adapted for vision-and-language navigation in unseen 3D environments.
  • It proposes SpaAct, a training framework that adds two spatial activation tasks—Action Retrospection to reconstruct executed action sequences from visual transitions, and Future Frame Selection to predict future visual transitions given history and actions.
  • SpaAct provides lightweight supervision for both the reasoning and prediction objectives, helping the model build dynamic spatial awareness in a VLM-friendly way.
  • To stabilize and improve training, the authors introduce TriPA, a tri-factor progressive adaptive curriculum that moves learning from easier locomotion to more long-horizon reasoning tasks.
  • Experiments on standard VLN-CE benchmarks indicate consistent improvements and state-of-the-art performance, with plans to release code and models to enable follow-on research.
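The two spatial activation tasks can be pictured as ways of turning an ordinary navigation trajectory into self-supervised question-answer samples. The sketch below is an illustration only: the function names, the discrete action space, and the distractor-sampling scheme are assumptions, not details from the paper.

```python
# Hypothetical sketch of building SpaAct-style supervision samples from a
# recorded trajectory of frames and actions. All names are illustrative.
import random

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]  # assumed action space


def build_retrospection_sample(frames, actions):
    """Action Retrospection: given consecutive visual transitions
    (frame_t, frame_t+1), the model must reconstruct the executed
    action sequence (backward 'why' reasoning)."""
    return {
        "inputs": list(zip(frames[:-1], frames[1:])),  # visual transitions
        "target": actions,                             # executed actions
    }


def build_future_frame_sample(frames, actions, t, num_distractors=3, rng=random):
    """Future Frame Selection: given the history up to step t and the next
    action, the model must pick the true next frame among distractors
    (forward 'how' prediction)."""
    true_next = frames[t + 1]
    pool = [f for i, f in enumerate(frames) if i != t + 1]
    candidates = [true_next] + rng.sample(pool, k=min(num_distractors, len(pool)))
    rng.shuffle(candidates)
    return {
        "history": frames[: t + 1],
        "action": actions[t],
        "candidates": candidates,
        "answer": candidates.index(true_next),  # index of the correct frame
    }
```

Framed this way, both objectives are lightweight classification tasks layered on top of data the agent already collects, which matches the paper's claim of cheap supervision for both directions of spatial reasoning.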

Abstract

Vision-and-Language Navigation (VLN) aims to enable an embodied agent to follow natural-language instructions and navigate to a target location in unseen 3D environments. We argue that adapting VLMs to VLN requires endowing them with two complementary capabilities for acquiring dynamic spatial awareness, namely backward action reasoning (why) and forward transition prediction (how). Based on this insight, we propose SpaAct, a simple yet effective training framework that activates dynamic spatial awareness in VLMs. Specifically, SpaAct introduces two spatial activation tasks: Action Retrospection, which asks the model to infer the executed action sequence from visual transitions, and Future Frame Selection, which forces the model to predict the visual transitions conditioned on history and action. These two objectives provide lightweight supervision on both backward action reasoning and forward transition prediction, encouraging the model to build dynamic spatial awareness in a VLM-friendly way. To further stabilize adaptation, we design TriPA, a Tri-factor Progressive Adaptive curriculum learning method that organizes training samples from easy to hard, allowing the model to gradually acquire navigation skills from basic locomotion to long-horizon reasoning. Experiments on standard VLN-CE benchmarks show that SpaAct consistently improves VLM-based navigation and achieves state-of-the-art performance. We will release the code and models to support future research.
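An easy-to-hard curriculum in the spirit of TriPA could be sketched as follows. The abstract does not specify the three factors or the pacing function, so everything here is an assumption: the factors (path length, instruction length, number of turns), their normalizers, and the linear threshold schedule are illustrative placeholders.

```python
# Hypothetical sketch of a tri-factor progressive curriculum: samples are
# scored on three assumed difficulty factors, and harder samples are
# unlocked as training progresses. Not the paper's actual method.

def difficulty(sample):
    """Combine three assumed factors into one score in roughly [0, 1]."""
    path = sample["path_length"] / 30.0        # longer paths are harder
    instr = sample["instruction_len"] / 60.0   # longer instructions are harder
    turns = sample["num_turns"] / 10.0         # more turning is harder
    return (path + instr + turns) / 3.0


def curriculum_pool(samples, epoch, total_epochs):
    """Return the samples unlocked at this epoch. The threshold grows
    linearly from 0.3 (basic locomotion) to 1.0 (long-horizon reasoning)."""
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    threshold = 0.3 + 0.7 * progress
    return [s for s in sorted(samples, key=difficulty) if difficulty(s) <= threshold]
```

Early epochs then see only short, simple trajectories, and the full training set is only available near the end, which is the general shape of progression the abstract describes for TriPA.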