AI Navigate

Novelty Adaptation Through Hybrid Large Language Model (LLM)-Symbolic Planning and LLM-guided Reinforcement Learning

arXiv cs.AI / 3/13/2026


Key Points

  • The paper addresses a core failure mode of autonomous agents in open-world environments: symbolic planning breaks down when the planning domain lacks the operators needed to interact with novel objects.
  • It proposes a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model to handle novel objects.
  • The LLM provides common sense reasoning to identify missing operators, helps generate plans with a symbolic planner, and writes reward functions to guide RL for newly identified operators.
  • The method reportedly outperforms state-of-the-art approaches in operator discovery and operator learning in continuous robotic domains.
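The loop implied by these points (planner fails → LLM proposes a missing operator → LLM writes a reward function → RL learns the operator's policy) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: the function names, the operator dictionary format, and the stubbed LLM responses are all assumptions, and the real system uses a PDDL-style planner and a continuous-control RL agent rather than these toy stand-ins.

```python
# Hypothetical sketch of the hybrid novelty-handling loop described above.
# All names and data structures are illustrative; the LLM calls are stubbed
# with canned answers instead of real model queries.

def llm_propose_operator(goal, novel_object):
    """Stub for the LLM step that names a missing planning operator."""
    return {
        "name": f"grasp_{novel_object}",
        "preconditions": [f"reachable({novel_object})"],
        "effects": [goal],
    }

def llm_write_reward(operator):
    """Stub for the LLM step that writes a reward function for RL training."""
    effect = operator["effects"][0]
    # Reward reaching the operator's effect; small step penalty otherwise.
    return lambda state: 1.0 if effect in state else -0.01

def symbolic_plan(domain_ops, goal):
    """Toy planner: succeeds only if some operator's effect matches the goal."""
    for op in domain_ops:
        if goal in op["effects"]:
            return [op["name"]]
    return None  # planning failure signals a novelty

def handle_novelty(domain_ops, goal, novel_object):
    plan = symbolic_plan(domain_ops, goal)
    if plan is None:                                       # 1. planner fails
        new_op = llm_propose_operator(goal, novel_object)  # 2. LLM fills the gap
        reward_fn = llm_write_reward(new_op)               # 3. LLM writes reward
        # 4. an RL agent would now learn a control policy for new_op
        #    by maximizing reward_fn; omitted in this sketch.
        domain_ops.append(new_op)
        plan = symbolic_plan(domain_ops, goal)             # 5. replan
    return plan

ops = []  # empty domain: the robot has no operator for the novel "mug"
print(handle_novelty(ops, "holding(mug)", "mug"))  # -> ['grasp_mug']
```

The key design point the paper's pipeline relies on is that a planner failure is itself a useful signal: it localizes exactly which capability is missing, so the LLM only has to fill one well-defined gap rather than plan from scratch.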

Abstract

In dynamic open-world environments, autonomous agents often encounter novelties that hinder their ability to find plans to achieve their goals. Specifically, traditional symbolic planners fail to generate plans when the robot's planning domain lacks the operators that enable it to interact appropriately with novel objects in the environment. We propose a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model (LLM) to learn how to handle novel objects. In particular, we leverage the common sense reasoning capability of the LLM to identify missing operators, generate plans with the symbolic AI planner, and write reward functions to guide the reinforcement learning agent in learning control policies for newly identified operators. Our method outperforms the state-of-the-art methods in operator discovery as well as operator learning in continuous robotic domains.