SYMBOLIZER: Symbolic Model-free Task Planning with VLMs

arXiv cs.RO / 4/21/2026


Key Points

  • The paper proposes SYMBOLIZER, a TAMP framework that reduces reliance on handcrafted discrete symbolic models by using VLMs to infer symbolic states directly from images.
  • Instead of requiring task-specific symbolic action models or enumerating all possible objects in advance, the method only uses lifted predicates (relations among objects) and grounds them via VLM outputs to build the state representation.
  • Planning is carried out with domain-independent heuristic search that uses goal-count and width-based heuristics, avoiding learned or manually specified action models.
  • The authors report that symbolic search over the VLM-grounded state space outperforms direct VLM-based planning and matches performance of approaches using VLM-derived heuristics.
  • Extensive experiments on the ProDG and ViPlan benchmarks show state-of-the-art results, suggesting better generalization across unseen problem instances and domains with large combinatorial state spaces.
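The grounding step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `holds` callback stands in for the actual VLM query (whose prompting scheme and model are not given in this summary), and the predicate/object names are hypothetical.

```python
from itertools import permutations

def ground_state(image, objects, lifted_predicates, holds):
    """Ground lifted predicates over all object tuples to build a symbolic state.

    lifted_predicates: dict mapping predicate name -> arity.
    holds(image, pred, args) -> bool is a placeholder for the VLM query.
    Returns the symbolic state as the set of ground atoms judged true.
    """
    state = set()
    for pred, arity in lifted_predicates.items():
        # Enumerate every ordered tuple of distinct objects of the right arity
        for args in permutations(objects, arity):
            if holds(image, pred, args):
                state.add((pred, args))
    return state
```

Note that the number of VLM queries grows with the number of object tuples per predicate, which is why the search itself (rather than the grounding) must handle the large combinatorial state space.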

Abstract

Traditional Task and Motion Planning (TAMP) systems depend on physics models for motion planning and discrete symbolic models for task planning. Although physics models are often available, symbolic models (consisting of symbolic state interpretation and action models) must be meticulously handcrafted or learned from labeled data. This process is both resource-intensive and constrains the solution to the specific domain, limiting scalability and adaptability. On the other hand, Visual Language Models (VLMs) show desirable zero-shot visual understanding (due to their extensive training on heterogeneous data), but still achieve limited planning capabilities. Therefore, integrating VLMs with classical planning for long-horizon reasoning in TAMP problems offers high potential. Recent works in this direction still lack generality and depend on handcrafted, task-specific solutions, e.g., describing all possible objects in advance, or using symbolic action models. We propose a framework that generalizes well to unseen problem instances. The method requires only lifted predicates describing relations among objects and uses VLMs to ground them from images to obtain the symbolic state. Planning is performed with domain-independent heuristic search using goal-count and width-based heuristics, without the need for action models. Symbolic search over the VLM-grounded state space outperforms direct VLM-based planning and performs on par with approaches that use a VLM-derived heuristic. This shows that domain-independent search can effectively solve problems across domains with large combinatorial state spaces. We extensively evaluate our method and achieve state-of-the-art results on the ProDG and ViPlan benchmarks.
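The goal-count heuristic mentioned in the abstract simply counts unsatisfied goal atoms, and can guide a greedy best-first search. The sketch below assumes states are sets of ground atoms and that a `successors` generator is available; how SYMBOLIZER generates successor states without action models is a detail of the paper not reproduced here, so the interface is an assumption.

```python
import heapq

def goal_count(state, goal):
    """Goal-count heuristic: number of goal atoms not yet satisfied."""
    return len(goal - state)

def greedy_best_first(init, goal, successors):
    """Greedy best-first search guided by the goal-count heuristic.

    init, goal: frozensets of ground atoms.
    successors(state) yields (action, next_state) pairs (hypothetical
    interface standing in for the paper's successor generation).
    Returns a list of actions reaching the goal, or None.
    """
    frontier = [(goal_count(init, goal), 0, init, [])]
    seen = {init}
    tie = 1  # tiebreaker so heapq never compares states directly
    while frontier:
        h, _, state, plan = heapq.heappop(frontier)
        if h == 0:
            return plan  # all goal atoms satisfied
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (goal_count(nxt, goal), tie, nxt, plan + [action])
                )
                tie += 1
    return None
```

Because the heuristic is computed purely from the state and goal sets, the search is domain-independent: no action model or domain-specific knowledge enters the loop, matching the abstract's claim.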