Language Model Planners do not Scale, but do Formalizers?

arXiv cs.CL / 3/26/2026


Key Points

  • Prior work shows that LLMs struggle to scale on complex planning tasks; whether the same limitation applies to LLM “formalizers” that produce solver-oriented programs was previously unknown.
  • The authors show that LLM formalizers substantially outperform LLM planners, including reaching perfect accuracy in the BlocksWorld domain with extremely large state spaces (up to 10^165).
  • They observe that smaller formalizers degrade as problem complexity increases, but a divide-and-conquer formalizing approach can significantly improve robustness.
  • To stress-test formalizers, the paper introduces “unraveling problems” where a single line of problem description maps to exponentially many formal-language lines (e.g., in PDDL), and proposes “LLM-as-higher-order-formalizer” to manage the resulting combinatorial explosion.
  • The new paradigm has the LLM generate a program generator, decoupling token generation from the expanded formalization and search complexity.
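The higher-order-formalizer idea can be made concrete with a toy sketch. The point is that the LLM outputs a short generator program like the one below, rather than the full formalization, so its token count stays constant even as the formal problem grows. The function name and the BlocksWorld goal encoding here are illustrative assumptions, not the paper's actual prompts or outputs:

```python
# Hypothetical sketch: instead of asking the LLM to emit the full PDDL goal
# (which grows with n), the LLM emits a short *generator* like this one.
def generate_blocksworld_goal(n: int) -> str:
    """Generate a PDDL goal that stacks blocks b1..bn into one tower."""
    # One compact line of description ("stack all n blocks in order")
    # unravels into n-1 (on ...) conjuncts in the formal language.
    conjuncts = [f"(on b{i} b{i + 1})" for i in range(1, n)]
    return "(:goal (and " + " ".join(conjuncts) + "))"

print(generate_blocksworld_goal(4))
# prints: (:goal (and (on b1 b2) (on b2 b3) (on b3 b4)))
```

For large n, the generator's source stays a few lines long while its output (and the solver's search space) grows with the problem, which is the decoupling the bullet describes.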

Abstract

Recent work shows overwhelming evidence that LLMs, even those trained to scale their reasoning traces, perform unsatisfactorily on planning problems that are too complex. Whether the same conclusion holds for LLM formalizers that generate solver-oriented programs remains unknown. We systematically show that LLM formalizers greatly out-scale LLM planners, some retaining perfect accuracy in the classic BlocksWorld domain with a huge state space of size up to 10^{165}. While the performance of smaller LLM formalizers degrades with problem complexity, we show that a divide-and-conquer formalizing technique can greatly improve their robustness. Finally, we introduce unraveling problems, where one line of problem description realistically corresponds to exponentially many lines of formal language such as the Planning Domain Definition Language (PDDL), greatly challenging LLM formalizers. We tackle this challenge by introducing a new paradigm, namely LLM-as-higher-order-formalizer, where an LLM generates a program generator. This decouples token output from the combinatorial explosion of the underlying formalization and search space.
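The divide-and-conquer technique mentioned in the abstract can be sketched as follows. The helper names and the chunking strategy are assumptions for illustration; `formalize_chunk` stands in for an LLM call, stubbed here with a deterministic toy formalizer so the sketch is self-contained:

```python
from typing import Callable

def formalize_divide_and_conquer(
    description_lines: list[str],
    formalize_chunk: Callable[[list[str]], str],
    chunk_size: int = 2,
) -> str:
    """Split a long problem description into small chunks, formalize each
    chunk independently, and merge the results. Keeping each call short
    is what makes smaller formalizers more robust on complex problems."""
    pieces = []
    for start in range(0, len(description_lines), chunk_size):
        chunk = description_lines[start : start + chunk_size]
        pieces.append(formalize_chunk(chunk))
    return "\n".join(pieces)

# Toy stand-in for the LLM: turn "X on Y" statements into PDDL facts.
def toy_formalizer(lines: list[str]) -> str:
    return "\n".join(
        "(on {} {})".format(*line.split(" on ")) for line in lines
    )

facts = formalize_divide_and_conquer(
    ["b1 on b2", "b2 on b3", "b3 on table"], toy_formalizer
)
print(facts)
# prints:
# (on b1 b2)
# (on b2 b3)
# (on b3 table)
```

In the paper's setting the chunks would be passed to an actual LLM; this sketch only shows the control flow of splitting and merging.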