CATNAV: Cached Vision-Language Traversability for Efficient Zero-Shot Robot Navigation

arXiv cs.RO / 3/25/2026


Key Points

  • CATNAV is a cost-aware, embodiment-aware zero-shot robot navigation framework that uses multimodal LLMs to generate traversability costmaps without task-specific training.
  • It introduces visuosemantic caching to reuse prior risk assessments for semantically similar scenes, cutting online vision-language model (VLM) queries by 85.7% (a minimal sketch of such a cache appears after this list).
  • CATNAV also includes a VLM-based trajectory selection module that visually reasons over candidate trajectories to pick the safest option while respecting behavioral constraints.
  • In experiments with a quadruped robot in both indoor and outdoor unstructured environments, CATNAV outperforms state-of-the-art vision-language-action baselines, achieving an average goal-reaching rate that is 10 percentage points higher.
  • Across five tasks, CATNAV reduces behavioral constraint violations by 33%, indicating improved safety and reliability in real-world-like navigation settings.

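To make the caching idea more concrete, here is a minimal sketch of how a visuosemantic cache of this kind could be wired around an online VLM costmap query: embed each incoming frame, compare it to previously assessed scenes by cosine similarity, and reuse the stored costmap when the scene is semantically close enough, falling back to a fresh VLM query otherwise. The embedding comparison, the similarity threshold, and the `query_vlm_for_costmap` callable are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


class VisuosemanticCache:
    """Illustrative cache: reuse a stored costmap when a new frame's
    embedding is close enough to a previously assessed scene."""

    def __init__(self, similarity_threshold=0.9):
        self.threshold = similarity_threshold  # hypothetical value, not from the paper
        self.entries = []  # list of (embedding, costmap) pairs

    def lookup(self, embedding):
        """Return the cached costmap of the most similar scene, or None on a miss."""
        best_sim, best_costmap = -1.0, None
        for cached_emb, costmap in self.entries:
            sim = float(np.dot(embedding, cached_emb) /
                        (np.linalg.norm(embedding) * np.linalg.norm(cached_emb) + 1e-8))
            if sim > best_sim:
                best_sim, best_costmap = sim, costmap
        return best_costmap if best_sim >= self.threshold else None

    def insert(self, embedding, costmap):
        self.entries.append((embedding, costmap))


def traversability_costmap(frame, frame_embedding, cache, query_vlm_for_costmap):
    """Reuse a prior risk assessment when the scene is familiar; otherwise
    make the expensive online VLM call.

    `query_vlm_for_costmap(frame)` is a placeholder for a multimodal-LLM query
    that would also carry an embodiment description (e.g. quadruped) in its prompt.
    """
    cached = cache.lookup(frame_embedding)
    if cached is not None:
        return cached  # cache hit: no online VLM query needed
    costmap = query_vlm_for_costmap(frame)  # cache miss: query the VLM
    cache.insert(frame_embedding, costmap)
    return costmap
```

In this sketch, the cache hit rate (and hence the fraction of VLM queries avoided) depends entirely on the embedding model and the threshold chosen; the 85.7% reduction reported by the authors comes from their own novelty-detection design, not from this simplified lookup.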
Abstract

Navigating unstructured environments requires assessing traversal risk relative to a robot's physical capabilities, a challenge that varies across embodiments. We present CATNAV, a cost-aware traversability navigation framework that leverages multimodal LLMs for zero-shot, embodiment-aware costmap generation without task-specific training. We introduce a visuosemantic caching mechanism that detects scene novelty and reuses prior risk assessments for semantically similar frames, reducing online VLM queries by 85.7%. Furthermore, we introduce a VLM-based trajectory selection module that evaluates proposals through visual reasoning to choose the safest path given behavioral constraints. We evaluate CATNAV on a quadruped robot across indoor and outdoor unstructured environments, comparing against state-of-the-art vision-language-action baselines. Across five navigation tasks, CATNAV achieves a 10-percentage-point higher average goal-reaching rate and 33% fewer behavioral constraint violations.
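
The trajectory-selection step described above can be pictured roughly as follows: candidate paths are overlaid on the current camera image and a multimodal model is asked to reason visually about which one is safest under the stated behavioral constraints. The sketch below is an assumption-laden illustration; the prompt wording, the `query_vlm` callable, and the drawing and answer-parsing details are placeholders, not the authors' implementation.

```python
import numpy as np


def select_trajectory(image, candidate_trajectories, constraints, query_vlm):
    """Illustrative VLM-based trajectory selection.

    Each candidate trajectory (an (N, 2) array of pixel coordinates projected
    from the planner) is drawn onto a copy of the image, and the VLM is asked
    to pick the safest one given the behavioral constraints.

    Assumes `image` is an HxWx3 RGB array and `query_vlm(image, prompt) -> str`
    is a placeholder for a multimodal-LLM call.
    """
    annotated = image.copy()
    for idx, traj in enumerate(candidate_trajectories):
        for (u, v) in traj.astype(int):
            if 0 <= v < annotated.shape[0] and 0 <= u < annotated.shape[1]:
                annotated[v, u] = (0, 255, 0)  # mark the path; numeric labels omitted for brevity

    prompt = (
        "The image shows candidate robot trajectories numbered 0 to "
        f"{len(candidate_trajectories) - 1}. Behavioral constraints: {constraints}. "
        "Which trajectory is safest to follow? Answer with a single number."
    )
    answer = query_vlm(annotated, prompt)

    # Naive parsing of the model's answer; a real system would validate it.
    digits = "".join(ch for ch in answer if ch.isdigit())
    choice = int(digits) if digits else 0
    return candidate_trajectories[min(choice, len(candidate_trajectories) - 1)]
```

One design point worth noting: because the selection happens over already-generated candidate trajectories, the VLM acts as a visual safety arbiter rather than a low-level controller, which is consistent with how the paper separates costmap generation from trajectory choice.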