Can LLMs Prove Robotic Path Planning Optimality? A Benchmark for Research-Level Algorithm Verification

arXiv cs.RO / 3/23/2026


Key Points

  • The paper introduces the first benchmark for evaluating LLMs on approximation-ratio proofs in robotic path planning, spanning 34 research-grade tasks.
  • It finds that current state-of-the-art LLMs struggle to produce fully valid proofs without external domain knowledge.
  • Providing task-specific in-context lemmas substantially improves reasoning quality and is more effective than generic chain-of-thought prompting or supplying the ground-truth approximation ratio.
  • The authors provide a fine-grained error analysis to characterize common logical failures and show how to mitigate them with targeted context augmentation.
  • The work highlights opportunities for integrating LLMs with domain knowledge to advance theory-guided robotics research.

Abstract

Robotic path planning problems are often NP-hard, and practical solutions typically rely on approximation algorithms with provable performance guarantees for general cases. While designing such algorithms is challenging, formally proving their approximation optimality is even more demanding, requiring domain-specific geometric insights and multi-step mathematical reasoning over complex operational constraints. Recent Large Language Models (LLMs) have demonstrated strong performance on mathematical reasoning benchmarks, yet their ability to assist with research-level optimality proofs in robotic path planning remains under-explored. In this work, we introduce the first benchmark for evaluating LLMs on approximation-ratio proofs of robotic path planning algorithms. The benchmark consists of 34 research-grade proof tasks spanning diverse planning problem types and complexity levels, each requiring structured reasoning over algorithm descriptions, problem constraints, and theoretical guarantees. Our evaluation of state-of-the-art proprietary and open-source LLMs reveals that even the strongest models struggle to produce fully valid proofs without external domain knowledge. However, providing LLMs with task-specific in-context lemmas substantially improves reasoning quality, proving more effective than generic chain-of-thought prompting or supplying the ground-truth approximation ratio as posterior knowledge. We further provide a fine-grained error analysis to characterize common logical failures and hallucinations, and demonstrate how each error type can be mitigated through targeted context augmentation.
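To make the central notion concrete: an approximation ratio compares a heuristic's cost to the optimum, and an approximation-ratio proof bounds that ratio for all inputs. The sketch below (illustrative only, not from the paper) measures the empirical ratio of a greedy nearest-neighbor tour against the brute-force optimal tour on a tiny 2-D instance of the kind of NP-hard routing problem the abstract describes; the point coordinates are arbitrary.

```python
# Illustrative sketch: empirical approximation ratio of a nearest-neighbor
# tour vs. the brute-force optimal tour on a tiny TSP-style instance.
# A formal proof would bound this ratio for *all* inputs, which is the
# research-level task the benchmark poses to LLMs.
from itertools import permutations
from math import dist

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points, start=0):
    """Greedy heuristic: repeatedly visit the closest unvisited point."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def optimal_tour(points):
    """Brute-force optimum; feasible only for very small instances."""
    n = len(points)
    return min((list((0,) + p) for p in permutations(range(1, n))),
               key=lambda o: tour_length(points, o))

points = [(0, 0), (2, 0), (2, 1), (0, 1), (1, 3)]
approx = tour_length(points, nearest_neighbor_tour(points))
opt = tour_length(points, optimal_tour(points))
ratio = approx / opt  # always >= 1.0; the heuristic can never beat the optimum
```

Running this on one instance only yields an empirical lower bound on the worst-case ratio; the proofs evaluated in the benchmark instead establish guarantees over the full input space, which is why geometric lemmas and multi-step reasoning are needed.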