LLM-Guided Agentic Floor Plan Parsing for Accessible Indoor Navigation of Blind and Low-Vision People
arXiv cs.AI / 4/28/2026
Key Points
- The paper introduces an agentic LLM-guided framework that turns a single indoor floor plan image into a structured, retrievable knowledge base to support safe navigation for blind and low-vision (BLV) users without costly per-building infrastructure.
- The approach uses a multi-agent parsing phase that builds a spatial knowledge graph through a self-correcting pipeline: parses that fail validation are retried with corrective feedback (a minimal sketch of such a loop follows this list).
- A separate path-planning phase generates accessible navigation instructions, while a Safety Evaluator agent checks each proposed route for hazards (see the second sketch below the list).
- Experiments on the UMBC Math and Psychology building (MP-1, MP-3) and the CVC-FP benchmark show higher success rates than the strongest single-call LLM baseline (Claude 3.7 Sonnet), especially for short and medium routes.
- Overall, the results suggest the workflow improves reliability and scalability for accessible indoor navigation by combining structured parsing, planning, and safety evaluation.
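The Key Points describe the parsing phase only at a high level. As a concrete illustration, here is a minimal, self-contained sketch of the kind of self-correcting retry loop the summary mentions. The graph schema (`SpatialGraph`), the validation checks, the `parse_fn` callable, and the retry budget are all assumptions made for this example; the paper's actual agents, prompts, and schema are not reproduced here.

```python
# Sketch only: a retry loop that feeds validation errors back to the parser.
# All names and checks are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SpatialGraph:
    rooms: set[str] = field(default_factory=set)
    # Each edge is (room_a, room_b, connector), e.g. ("MP-101", "hall-1", "door").
    edges: list[tuple[str, str, str]] = field(default_factory=list)


def validate(graph: SpatialGraph) -> list[str]:
    """Structural checks standing in for the paper's validation step."""
    errors = []
    for a, b, _connector in graph.edges:
        if a not in graph.rooms or b not in graph.rooms:
            errors.append(f"edge ({a}, {b}) references an unknown room")
    connected = {room for edge in graph.edges for room in edge[:2]}
    for room in graph.rooms - connected:
        errors.append(f"room {room} is unreachable (no edges)")
    return errors


def parse_with_retries(
    parse_fn: Callable[[str], SpatialGraph],  # e.g. wraps an LLM parsing call
    max_retries: int = 3,                     # assumed retry budget
) -> SpatialGraph:
    """Re-invoke the parser with corrective feedback until the graph
    passes validation or the retry budget is exhausted."""
    feedback = ""
    for _ in range(max_retries):
        graph = parse_fn(feedback)
        errors = validate(graph)
        if not errors:
            return graph
        # Corrective feedback: fold the validator's findings into the next attempt.
        feedback = "Fix these issues: " + "; ".join(errors)
    raise RuntimeError(f"no valid graph after {max_retries} attempts")
```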
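For the planning phase, a similarly hedged sketch: a shortest-path search over the parsed graph, followed by a hazard scan standing in for the Safety Evaluator agent. The hazard taxonomy and the example floor layout are invented for illustration and are not the paper's actual criteria.

```python
# Sketch only: plan a route over (room_a, room_b, connector) edges, then
# flag hazardous connectors. The hazard set is an assumed taxonomy.
from collections import deque

HAZARDOUS = {"stairs", "escalator"}  # assumed hazards for a BLV route


def shortest_path(edges, start: str, goal: str):
    """Breadth-first search returning hops as (room, connector) pairs."""
    adj: dict[str, list[tuple[str, str]]] = {}
    for a, b, connector in edges:
        adj.setdefault(a, []).append((b, connector))
        adj.setdefault(b, []).append((a, connector))
    queue, seen = deque([[(start, None)]]), {start}
    while queue:
        path = queue.popleft()
        node = path[-1][0]
        if node == goal:
            return path
        for nxt, connector in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [(nxt, connector)])
    return None  # no route found


def evaluate_route(path) -> list[str]:
    """Safety Evaluator stand-in: list hazardous connectors on the route."""
    return [c for _, c in path if c in HAZARDOUS]


# Usage on an invented layout: the hop-shortest route uses stairs, which the
# evaluator flags, signaling that a safer (e.g. elevator) route is needed.
edges = [
    ("MP-101", "hall-1", "door"),
    ("hall-1", "hall-2", "stairs"),
    ("hall-1", "elevator-1", "door"),
    ("elevator-1", "hall-2", "elevator"),
    ("hall-2", "MP-301", "door"),
]
route = shortest_path(edges, "MP-101", "MP-301")
print(evaluate_route(route))  # ["stairs"]
```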