AGEL-Comp: A Neuro-Symbolic Framework for Compositional Generalization in Interactive Agents

arXiv cs.AI / April 30, 2026


Key Points

  • The paper introduces AGEL-Comp, a neuro-symbolic agent architecture aimed at improving the compositional generalization of LLM agents in interactive environments, where such agents often exhibit systemic failures.
  • AGEL-Comp uses a dynamic Causal Program Graph (CPG) as a world model, representing procedural and causal knowledge as a directed hypergraph grounded in the agent’s actions.
  • It adds an Inductive Logic Programming (ILP) engine that synthesizes new Horn clauses from experiential feedback, allowing symbolic knowledge to evolve through interaction.
  • A hybrid reasoning core combines an LLM that proposes candidate sub-goals with verification by a Neural Theorem Prover (NTP) to ensure logical consistency.
  • Using the Retro Quest simulation protocol, the authors report that AGEL-Comp outperforms pure LLM-based models in compositional generalization scenarios.
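The paper itself does not ship code, but the Causal Program Graph described above can be sketched as a directed hypergraph in which each hyperedge maps a set of precondition nodes to a set of effect nodes via an action. Everything below (`CausalProgramGraph`, `HyperEdge`, the rule names) is illustrative naming for this sketch, not the authors' implementation; the point is how two independently learned rules compose into a multi-step plan.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class HyperEdge:
    """One causal rule: an action maps a set of preconditions to a set of effects."""
    action: str
    preconditions: frozenset
    effects: frozenset

@dataclass
class CausalProgramGraph:
    """Minimal directed-hypergraph world model (a sketch, not the paper's code)."""
    edges: list = field(default_factory=list)

    def add_rule(self, action, preconditions, effects):
        self.edges.append(HyperEdge(action, frozenset(preconditions), frozenset(effects)))

    def applicable(self, state):
        """Rules whose preconditions are all satisfied by the current state."""
        return [e for e in self.edges if e.preconditions <= state]

    def apply(self, state, edge):
        """Deduce the successor state after firing a causal rule."""
        return state | edge.effects

# Two rules, learned separately, that compose into an unseen two-step plan.
cpg = CausalProgramGraph()
cpg.add_rule("pick_key", {"at_door", "key_visible"}, {"has_key"})
cpg.add_rule("open_door", {"at_door", "has_key"}, {"door_open"})

# Fire applicable rules until no rule adds anything new (a fixpoint).
state = frozenset({"at_door", "key_visible"})
while True:
    fired = [e for e in cpg.applicable(state) if not (e.effects <= state)]
    if not fired:
        break
    state = cpg.apply(state, fired[0])
```

After the loop, `state` contains both `has_key` and `door_open`: the second rule only became applicable once the first rule's effects entered the state, which is the compositional behavior the CPG is meant to make explicit.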

Abstract

Large Language Model (LLM)-based agents exhibit systemic failures in compositional generalization, limiting their robustness in interactive environments. This work introduces AGEL-Comp, a neuro-symbolic AI agent architecture designed to address this challenge by grounding the agent's actions. AGEL-Comp integrates three core innovations: (1) a dynamic Causal Program Graph (CPG) as a world model, representing procedural and causal knowledge as a directed hypergraph; (2) an Inductive Logic Programming (ILP) engine that synthesizes new Horn clauses from experiential feedback, grounding symbolic knowledge through interaction; and (3) a hybrid reasoning core in which an LLM proposes a set of candidate sub-goals that are verified for logical consistency by a Neural Theorem Prover (NTP). Together, these components operationalize a deduction-abduction learning cycle: the agent deduces plans and abductively expands its symbolic world model, while a neural adaptation phase keeps its reasoning engine aligned with new knowledge. We propose an evaluation protocol within the Retro Quest simulation environment that probes compositional generalization scenarios. Our findings indicate that AGEL-Comp outperforms pure LLM-based models in these scenarios. Our framework presents a principled path toward agents that build an explicit, interpretable, and compositionally structured understanding of their world.
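The hybrid reasoning core in the abstract, where an LLM proposes candidate sub-goals and a verifier filters them for logical consistency, can be sketched as a propose-and-filter loop. In this sketch, `propose_subgoals` stands in for the LLM and `entails` stands in for the NTP, replaced here by plain forward chaining over Horn clauses; both functions and all the atoms are assumptions for illustration, not the paper's components.

```python
def propose_subgoals(goal, state):
    """Stand-in for the LLM proposer: returns candidate sub-goal sets.
    A real system would sample these from a language model; here they
    are hard-coded, including one unsupported candidate."""
    candidates = {
        "door_open": [{"has_key"}, {"door_open"}, {"magic_portal"}],
    }
    return candidates.get(goal, [])

def entails(kb, state, subgoal):
    """Stand-in for the NTP verifier: forward-chain over Horn clauses
    (body -> head) and check whether every atom of the sub-goal
    becomes derivable from the current state."""
    facts = set(state)
    changed = True
    while changed:
        changed = False
        for body, head in kb:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return subgoal <= facts

# Horn-clause knowledge base: (frozenset body, single head atom).
kb = [
    (frozenset({"at_door", "key_visible"}), "has_key"),
    (frozenset({"at_door", "has_key"}), "door_open"),
]

state = {"at_door", "key_visible"}
verified = [g for g in propose_subgoals("door_open", state) if entails(kb, state, g)]
```

Only the sub-goals entailed by the symbolic knowledge base survive the filter; the unsupported `magic_portal` candidate is rejected, which is the role the abstract assigns to the NTP: pruning logically inconsistent LLM proposals before they reach planning.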