AI Navigate

When both Grounding and not Grounding are Bad -- A Partially Grounded Encoding of Planning into SAT (Extended Version)

arXiv cs.AI / 3/23/2026


Key Points

  • The authors introduce three SAT encodings that keep actions lifted while partially grounding predicates, occupying a middle ground between fully lifted and fully grounded planning.
  • By avoiding full grounding, the encodings sidestep the exponential blowup in representation size that fully grounded approaches can incur.
  • Unlike previous SAT encodings that scale quadratically with plan length, the proposed approach scales linearly, improving efficiency for longer plans.
  • Empirical results show the best encoding outperforms the state of the art in length-optimal planning on hard-to-ground domains.
  • The work offers a practical middle-ground approach for SAT-based planning that could influence future planners and applications requiring scalable, length-optimal planning.
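To see why full grounding blows up, note that a lifted action schema with k parameters has one ground instance per k-tuple of objects, i.e. |objects|^k instances. The sketch below (a generic illustration, not code from the paper; the domain size of 50 objects is an arbitrary assumption) shows how quickly this grows with arity:

```python
# Hypothetical illustration of the grounding blowup: each lifted action
# schema of arity k expands into |objects| ** k ground actions, so the
# grounded representation grows exponentially in the arity.

def num_ground_instances(num_objects: int, arity: int) -> int:
    """Number of ground instances of one lifted schema: |objects| ** k."""
    return num_objects ** arity

# An assumed domain with 50 objects:
for arity in (1, 2, 3, 4):
    print(arity, num_ground_instances(50, arity))
# arity 4 already yields 6,250,000 ground instances of a single schema
```

Hard-to-ground domains are exactly those where this product is too large to enumerate, which is why partially grounded encodings can pay off there.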

Abstract

Classical planning problems are typically defined using lifted first-order representations, which offer compactness and generality. While most planners ground these representations to simplify reasoning, this can cause an exponential blowup in size. Recent approaches instead operate directly on the lifted level to avoid full grounding. We explore a middle ground between fully lifted and fully grounded planning by introducing three SAT encodings that keep actions lifted while partially grounding predicates. Unlike previous SAT encodings, which scale quadratically with plan length, our approach scales linearly, enabling better performance on longer plans. Empirically, our best encoding outperforms the state of the art in length-optimal planning on hard-to-ground domains.
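The quadratic-versus-linear gap the abstract describes has a familiar analogue in SAT encoding practice. The paper's actual encodings are not reproduced here; as a stand-in, the sketch below contrasts the pairwise at-most-one encoding, which needs a clause for every pair of variables (quadratic), with Sinz's sequential (ladder) encoding, which uses auxiliary variables to get by with a linear number of clauses:

```python
# Illustrative clause-count comparison (standard SAT encodings, used here
# only as an analogy for the quadratic-vs-linear scaling in the abstract).
# Clauses are tuples of DIMACS-style literals; variable i is xi, -i is ¬xi.
from itertools import combinations

def pairwise_amo_clauses(n):
    """At-most-one over x1..xn via all pairs (¬xi ∨ ¬xj): n*(n-1)/2 clauses."""
    return [(-i, -j) for i, j in combinations(range(1, n + 1), 2)]

def sequential_amo_clauses(n):
    """Sinz's sequential encoding with aux vars s1..s_{n-1}: ~3n clauses."""
    s = lambda i: n + i  # auxiliary variable ids follow x1..xn
    clauses = []
    for i in range(1, n):
        clauses.append((-i, s(i)))             # xi -> si
        if i > 1:
            clauses.append((-s(i - 1), s(i)))  # s_{i-1} -> s_i
            clauses.append((-i, -s(i - 1)))    # xi -> ¬s_{i-1}
    clauses.append((-n, -s(n - 1)))            # xn -> ¬s_{n-1}
    return clauses

print(len(pairwise_amo_clauses(100)))    # 4950 clauses: quadratic in n
print(len(sequential_amo_clauses(100)))  # 296 clauses: linear in n
```

An encoding whose size grows linearly rather than quadratically with plan length keeps the formula manageable as the solver probes longer horizons, which is what enables the reported gains on longer plans.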