AI Navigate

WalkGPT: Grounded Vision-Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation

arXiv cs.CV / 3/12/2026


Key Points

  • WalkGPT introduces a pixel-grounded vision-language model for navigation guidance with depth-aware segmentation, addressing the grounding and depth-reasoning limitations of existing LVLMs.
  • The model generates conversational navigation responses along with segmentation masks and relative depth estimates to support accessibility-focused guidance without user-provided cues.
  • It features the Multi-Scale Query Projector (MSQP) and Calibrated Text Projector (CTP) and uses a Region Alignment Loss to align language embeddings with segmentation-aware representations.
  • The authors release PAVE, a large-scale benchmark of 41k pedestrian-view images with accessibility questions and depth-grounded answers for evaluating grounding, segmentation, and depth reasoning.
  • They report strong performance on grounded reasoning and segmentation, and provide source code and dataset via the project website.

Abstract

Ensuring accessible pedestrian navigation requires reasoning about both semantic and spatial aspects of complex urban scenes, a challenge that existing Large Vision-Language Models (LVLMs) struggle to meet. Although these models can describe visual content, their lack of explicit grounding leads to object hallucinations and unreliable depth reasoning, limiting their usefulness for accessibility guidance. We introduce WalkGPT, a pixel-grounded LVLM for the new task of Grounded Navigation Guide, unifying language reasoning and segmentation within a single architecture for depth-aware accessibility guidance. Given a pedestrian-view image and a navigation query, WalkGPT generates a conversational response with segmentation masks that delineate accessible and harmful features, along with relative depth estimation. The model incorporates a Multi-Scale Query Projector (MSQP) that shapes the final image tokens by aggregating them along text tokens across spatial hierarchies, and a Calibrated Text Projector (CTP), guided by a proposed Region Alignment Loss, that maps language embeddings into segmentation-aware representations. These components enable fine-grained grounding and depth inference without user-provided cues or anchor points, allowing the model to generate complete and realistic navigation guidance. We also introduce PAVE, a large-scale benchmark of 41k pedestrian-view images paired with accessibility-aware questions and depth-grounded answers. Experiments show that WalkGPT achieves strong grounded reasoning and segmentation performance. The source code and dataset are available on the project website (https://sites.google.com/view/walkgpt-26/home).
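The abstract describes a Region Alignment Loss that pulls language embeddings toward segmentation-aware representations. The paper's exact formulation is not given here, but the general pattern can be illustrated as a cosine-alignment loss between projected text embeddings and mask-average-pooled image features. This is a minimal sketch under that assumption; the function name, shapes, and loss form are illustrative, not WalkGPT's actual definition.

```python
import numpy as np

def region_alignment_loss(text_emb, image_feats, masks, eps=1e-8):
    """Hypothetical sketch of a region-alignment objective.

    Pools dense image features under each segmentation mask and measures
    how well the corresponding projected text embedding aligns with that
    pooled region feature (1 - cosine similarity, averaged over regions).

    text_emb:    (N, D)    one projected text embedding per referred region
    image_feats: (D, H, W) dense features from the vision backbone
    masks:       (N, H, W) binary masks for the N referred regions
    """
    D, H, W = image_feats.shape
    N = masks.shape[0]
    flat = image_feats.reshape(D, H * W)           # (D, HW)
    m = masks.reshape(N, H * W).astype(float)      # (N, HW)
    # Mask-average pooling: mean image feature inside each region
    area = np.clip(m.sum(axis=1, keepdims=True), 1.0, None)
    pooled = (m @ flat.T) / area                   # (N, D)
    # Cosine alignment between text embedding and pooled region feature
    num = (text_emb * pooled).sum(axis=1)
    den = np.linalg.norm(text_emb, axis=1) * np.linalg.norm(pooled, axis=1) + eps
    cos = num / den
    return float((1.0 - cos).mean())
```

The loss is zero when each text embedding points in the same direction as its region's pooled feature, which is the intended effect of steering language representations toward segmentation-aware ones.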