AI Navigate

Exploring the Use of VLMs for Navigation Assistance for People with Blindness and Low Vision

arXiv cs.AI / 3/18/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper investigates the potential of vision-language models (VLMs) to assist people with blindness and low vision (pBLV) in navigation, evaluating both closed-source and open-source models such as GPT-4V, GPT-4o, Gemini-1.5-Pro, Claude-3.5-Sonnet, Llava-v1.6-mistral, and Llava-onevision-qwen.
  • GPT-4o consistently outperforms other models across tasks, especially in spatial reasoning and scene understanding, while open-source models show limitations in nuanced reasoning and adaptability in complex environments.
  • Common challenges identified include difficulties counting objects in clutter, biases in spatial reasoning, and a tendency to emphasize object details over spatial feedback, reducing navigation usability for pBLV.
  • The study finds that VLMs nonetheless hold promise for wayfinding assistance when better aligned with human feedback and equipped with improved spatial reasoning, yielding actionable guidance for integrating them into assistive technologies.
  • The results clarify the strengths and limitations of current VLMs and outline directions for improving usability in real-world pBLV navigation applications.
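The evaluation described above sends pBLV-specific prompts to chat-style VLM APIs and checks whether the responses prioritize spatial feedback over object descriptions. As a rough illustration only (the prompt wording, function name, and image URL below are assumptions, not the paper's actual materials), such a request payload in the widely used Chat Completions message format might be assembled like this:

```python
# Sketch: building a pBLV-oriented navigation request for a chat-style VLM
# API (Chat Completions message format). The prompt text, helper name, and
# image URL are illustrative assumptions, not taken from the paper.

def build_navigation_request(image_url: str, model: str = "gpt-4o") -> dict:
    """Assemble a request that asks the VLM for navigation-usable spatial
    guidance (counts, relative positions, safe direction) rather than
    object descriptions, mirroring the paper's three foundational tasks."""
    prompt = (
        "I am a blind pedestrian. Using the attached street-level image, "
        "tell me: (1) how many obstacles are directly in my path, "
        "(2) where each obstacle is relative to me (left, right, ahead, "
        "and approximate distance), and (3) a safe direction to walk. "
        "Prioritize spatial directions over object descriptions."
    )
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example: the same payload can target any of the evaluated models by
# swapping the model identifier.
request = build_navigation_request("https://example.com/sidewalk.jpg")
print(request["model"])
```

In practice the returned dict would be passed to the provider's client (e.g., a chat-completions call), and the text reply scored on counting accuracy, spatial correctness, and usefulness of the suggested direction.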

Abstract

This paper investigates the potential of vision-language models (VLMs) to assist people with blindness and low vision (pBLV) in navigation tasks. We evaluate state-of-the-art closed-source models, including GPT-4V, GPT-4o, Gemini-1.5-Pro, and Claude-3.5-Sonnet, alongside open-source models, such as Llava-v1.6-mistral and Llava-onevision-qwen, to analyze their capabilities in foundational visual skills: counting ambient obstacles, relative spatial reasoning, and common-sense wayfinding-pertinent scene understanding. We further assess their performance in navigation scenarios, using pBLV-specific prompts designed to simulate real-world assistance tasks. Our findings reveal notable performance disparities between these models: GPT-4o consistently outperforms others across all tasks, particularly in spatial reasoning and scene understanding. In contrast, open-source models struggle with nuanced reasoning and adaptability in complex environments. Common challenges include difficulties in accurately counting objects in cluttered settings, biases in spatial reasoning, and a tendency to prioritize object details over spatial feedback, limiting their usability for pBLV in navigation tasks. Despite these limitations, VLMs show promise for wayfinding assistance when better aligned with human feedback and equipped with improved spatial reasoning. This research provides actionable insights into the strengths and limitations of current VLMs, guiding developers on effectively integrating VLMs into assistive technologies while addressing key limitations for enhanced usability.