"Don't Do That!": Guiding Embodied Systems through Large Language Model-based Constraint Generation

arXiv cs.RO / 4/10/2026


Key Points

  • The paper proposes STPR, an LLM-based constraint generation framework for embodied robotic navigation, targeting settings where natural-language constraints are hard to formalize for planners.
  • STPR specifically converts “what not to do” style instructions into executable Python functions, using the LLM’s coding ability to reduce complex reasoning steps and improve interpretability.
  • The authors report that LLM-generated functions can accurately capture complex mathematical constraints and are compatible with traditional search algorithms applied to point-cloud representations.
  • Experiments in a simulated Gazebo environment indicate STPR achieves full compliance with multiple constraints while maintaining short runtimes.
  • The approach also works with smaller code LLMs, suggesting lower inference costs and broader deployability.

Abstract

Recent advancements in large language models (LLMs) have spurred interest in robotic navigation that incorporates complex spatial, mathematical, and conditional constraints from natural language into the planning problem. Such constraints can be informal yet highly complex, making them challenging to translate into a formal description that can be passed on to a planning algorithm. In this paper, we propose STPR, a constraint generation framework that uses LLMs to translate constraints (expressed as instructions on "what not to do") into executable Python functions. STPR leverages the LLM's strong coding capabilities to shift the problem description from language into structured and interpretable code, thus circumventing complex reasoning and avoiding potential hallucinations. We show that these LLM-generated functions accurately describe even complex mathematical constraints, and apply them to point cloud representations with traditional search algorithms. Experiments in a simulated Gazebo environment show that STPR ensures full compliance across several constraints and scenarios, while having short runtimes. We also verify that STPR can be used with smaller code LLMs, making it applicable to a wide range of compact models with low inference cost.
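To make the abstract's core idea concrete, here is a rough sketch of what an executable "what not to do" constraint could look like when applied to a point-cloud representation before a traditional search. This is an invented illustration, not the paper's actual prompt or generated code: the scene geometry, function names, and the 1 m threshold are all hypothetical.

```python
import numpy as np

# Hypothetical LLM output for the instruction
# "don't go within 1 meter of the table".
# The table position and clearance are illustrative, not from the paper.
TABLE_CENTER = np.array([2.0, 3.0])
MIN_CLEARANCE = 1.0  # meters

def violates_constraint(point: np.ndarray) -> bool:
    """Return True if a 2D point breaks the 'stay away from the table' rule."""
    return float(np.linalg.norm(point[:2] - TABLE_CENTER)) < MIN_CLEARANCE

def filter_free_points(point_cloud: np.ndarray) -> list:
    """Keep only the points a traditional search algorithm may expand."""
    return [p for p in point_cloud if not violates_constraint(p)]

# Tiny stand-in for a point cloud of candidate waypoints.
cloud = np.array([[2.1, 3.2],   # too close to the table
                  [0.0, 0.0],   # fine
                  [5.0, 5.0]])  # fine
free = filter_free_points(cloud)
```

A planner such as A* would then search only over `free`, so compliance is enforced by construction rather than by asking the LLM to reason about every path, which is the interpretability and reliability benefit the authors emphasize.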