ImagineNav++: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination

arXiv cs.RO / 5/1/2026


Key Points

  • ImagineNav++ investigates whether vision-language models (VLMs) can perform mapless visual navigation for home-assistance robots using only onboard RGB/RGB-D streams, addressing limitations of text-only planning.
  • The framework generates “imagined” future observation images from candidate robot viewpoints, turning navigation planning into a best-view selection problem that the VLM solves via visual prompts (a minimal sketch of this loop follows the list).
  • A future-view imagination module distills human navigation preferences to generate semantically meaningful viewpoints with high exploration potential.
  • To keep spatial reasoning consistent over time, ImagineNav++ introduces a selective foveation memory mechanism that hierarchically integrates keyframe observations via a sparse-to-dense approach, yielding a compact long-term spatial memory.
  • Experiments on open-vocabulary object and instance navigation benchmarks report state-of-the-art performance in mapless settings, with results that can outperform many map-based methods.
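
To make the planning loop concrete, here is a minimal Python sketch of how imagination-driven best-view selection could be wired together. Everything in it is an illustrative assumption: the helper callables (viewpoint proposer, view-imagination model, VLM selector, point-goal controller) are hypothetical stand-ins injected as arguments, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Sequence

@dataclass
class Viewpoint:
    x: float    # candidate position in the robot's local frame (meters)
    y: float
    yaw: float  # candidate heading (radians)

def imagination_planning_step(
    goal: str,
    observation: Any,                                      # current RGB/RGB-D frame
    propose_viewpoints: Callable[[Any], List[Viewpoint]],  # candidate-viewpoint generator
    imagine_view: Callable[[Any, Viewpoint], Any],         # future-view synthesis
    vlm_select: Callable[[str, Sequence[Any]], int],       # visual-prompted VLM choice
    point_goal_navigate: Callable[[Viewpoint], None],      # low-level point-goal controller
) -> Viewpoint:
    """One planning step: imagine candidate views, let the VLM pick, then move.

    Goal-oriented navigation is thereby reduced to a point-goal sub-task,
    mirroring the decomposition described in the abstract.
    """
    candidates = propose_viewpoints(observation)       # semantically meaningful viewpoints
    imagined = [imagine_view(observation, vp) for vp in candidates]
    best = vlm_select(goal, imagined)                   # best-view image selection
    point_goal_navigate(candidates[best])               # execute the chosen sub-goal
    return candidates[best]
```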

Abstract

Visual navigation is a fundamental capability for autonomous home-assistance robots, enabling long-horizon tasks such as object search. While recent methods have leveraged Large Language Models (LLMs) to incorporate commonsense reasoning and improve exploration efficiency, their planning remains constrained by textual representations, which cannot adequately capture spatial occupancy or scene geometry, critical factors for navigation decisions. We explore whether Vision-Language Models (VLMs) can achieve mapless visual navigation using only onboard RGB/RGB-D streams, unlocking their potential for spatial perception and planning. We achieve this through an imagination-powered navigation framework, ImagineNav++, which imagines future observation images from candidate robot views and translates navigation planning into a simple best-view image selection problem for VLMs. First, a future-view imagination module distills human navigation preferences to generate semantically meaningful viewpoints with high exploration potential. These imagined views then serve as visual prompts for the VLM to identify the most informative viewpoint. To maintain spatial consistency, we develop a selective foveation memory mechanism, which hierarchically integrates keyframe observations via a sparse-to-dense framework, constructing a compact yet comprehensive memory for long-term spatial reasoning. This approach transforms goal-oriented navigation into a series of tractable point-goal navigation tasks. Extensive experiments on open-vocabulary object and instance navigation benchmarks show that ImagineNav++ achieves SOTA performance in mapless settings, even surpassing most map-based methods, highlighting the importance of scene imagination and memory in VLM-based spatial reasoning.
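
The selective foveation memory is described here only at a high level. As a rough illustration of the sparse-to-dense idea, the toy class below keeps a dense buffer of recent keyframes (the "fovea") plus a sparse, spatially subsampled set of older ones, so the memory stays compact over long trajectories. The class name, spacing threshold, and promotion rule are assumptions for illustration, not the paper's mechanism.

```python
import math
from collections import deque
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Keyframe:
    image: Any                          # RGB frame or a feature embedding of it
    pose: Tuple[float, float, float]    # (x, y, yaw) in a fixed frame

class FoveatedMemory:
    """Toy sparse-to-dense keyframe memory for long-horizon spatial context."""

    def __init__(self, dense_size: int = 8, sparse_spacing: float = 1.5):
        self.dense = deque(maxlen=dense_size)   # recent, densely sampled context
        self.sparse: List[Keyframe] = []        # long-term, spatially subsampled keyframes
        self.sparse_spacing = sparse_spacing    # minimum spacing between sparse keyframes (m)

    def add(self, frame: Keyframe) -> None:
        # A frame about to fall out of the dense buffer is promoted to sparse
        # memory only if it is spatially far from everything already stored.
        if len(self.dense) == self.dense.maxlen:
            oldest = self.dense[0]
            if all(self._dist(oldest.pose, kf.pose) > self.sparse_spacing
                   for kf in self.sparse):
                self.sparse.append(oldest)
        self.dense.append(frame)

    def context(self) -> List[Keyframe]:
        # What would be handed to the VLM: sparse global memory + dense local memory.
        return self.sparse + list(self.dense)

    @staticmethod
    def _dist(a: Tuple[float, float, float], b: Tuple[float, float, float]) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])
```

In this sketch, each planning step would call add() with the latest keyframe and pass context() to the VLM alongside the imagined candidate views.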