GEAKG: Generative Executable Algorithm Knowledge Graphs

arXiv cs.AI / March 31, 2026


Key Points

  • The paper introduces Generative Executable Algorithm Knowledge Graphs (GEAKG), a new knowledge-graph framework meant to represent procedural algorithm knowledge as executable, learnable structures.
  • In GEAKG, nodes contain runnable operators, edges encode learned composition patterns, and graph traversal generates problem solutions.
  • The framework is “generative” because an LLM synthesizes the graph topology and operators, “executable” because every node is executable code, and “transferable” because learned composition patterns generalize to new domains in zero-shot settings.
  • A domain-agnostic architecture is proposed using a pluggable ontology (RoleSchema) and an ACO-based learning engine, allowing the same core system to be instantiated across problem types.
  • Two case studies support the hypothesis: neural architecture search transfer across 70 cross-dataset pairs and zero-shot transfer from TSP to scheduling/assignment combinatorial optimization domains.
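
The node/edge/traversal structure described in the bullets above can be made concrete with a small sketch. Everything here is a hypothetical illustration of the idea, not the paper's actual API: the class names, the `role` field standing in for a RoleSchema role, and the greedy highest-weight traversal policy are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class OperatorNode:
    name: str
    role: str                         # role drawn from a pluggable ontology (RoleSchema)
    run: Callable[[object], object]   # every node is runnable code

@dataclass
class GEAKG:
    nodes: Dict[str, OperatorNode] = field(default_factory=dict)
    # edges carry learned composition weights (e.g. ACO pheromone levels)
    edges: Dict[Tuple[str, str], float] = field(default_factory=dict)

    def add_node(self, node: OperatorNode) -> None:
        self.nodes[node.name] = node

    def add_edge(self, src: str, dst: str, weight: float = 1.0) -> None:
        self.edges[(src, dst)] = weight

    def traverse(self, start: str, state):
        """Greedy traversal: run each node's operator, then follow the
        highest-weight unvisited outgoing edge; the solution is the
        composition of operators along the walk."""
        current, visited = start, set()
        while current is not None and current not in visited:
            visited.add(current)
            state = self.nodes[current].run(state)
            outgoing = {d: w for (s, d), w in self.edges.items()
                        if s == current and d not in visited}
            current = max(outgoing, key=outgoing.get) if outgoing else None
        return state

# demo: a two-operator graph whose traversal composes double, then inc
g = GEAKG()
g.add_node(OperatorNode("double", "transform", lambda x: x * 2))
g.add_node(OperatorNode("inc", "transform", lambda x: x + 1))
g.add_edge("double", "inc", weight=0.9)
result = g.traverse("double", 5)  # (5 * 2) + 1 = 11
```

The point of the sketch is the separation the paper's key points describe: operators live in nodes as executable code, while composition knowledge lives in edge weights, so learning can adjust which traversals (and hence which algorithm compositions) the graph generates without touching the operators themselves.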

Abstract

In the context of algorithmic problem solving, procedural knowledge -- the know-how of algorithm design and operator composition -- remains implicit in code, is lost between runs, and must be re-engineered for each new domain. Knowledge graphs (KGs) have proven effective for organizing declarative knowledge, yet current KG paradigms offer limited support for representing procedural knowledge as executable, learnable graph structures. We introduce Generative Executable Algorithm Knowledge Graphs (GEAKG), a class of KGs whose nodes store executable operators, whose edges encode learned composition patterns, and whose traversal generates solutions. A GEAKG is “generative” (topology and operators are synthesized by a Large Language Model), “executable” (every node is runnable code), and “transferable” (learned patterns generalize zero-shot across domains). The framework is domain-agnostic at the engine level: the same three-layer architecture and Ant Colony Optimization (ACO)-based learning engine can be instantiated across domains, parameterized by a pluggable ontology (RoleSchema). Two case studies -- sharing no domain-specific framework code -- provide concrete evidence for this framework hypothesis: (1) Neural Architecture Search across 70 cross-dataset transfer pairs on two tabular benchmarks, and (2) Combinatorial Optimization, where knowledge learned on the Traveling Salesman Problem transfers zero-shot to scheduling and assignment domains. Taken together, the results support the claim that algorithmic expertise can be explicitly represented, learned, and transferred as executable knowledge graphs.
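
The abstract's ACO-based learning engine can be sketched in the standard ant-colony pattern: sample a walk over the graph with probability proportional to edge pheromone, score the resulting solution, then evaporate all edges and deposit pheromone along good walks. This is a minimal, generic ACO update, not the paper's engine; the evaporation rate, deposit rule, and sampling policy are all assumed constants for illustration.

```python
import random
from typing import Dict, List, Tuple

Edges = Dict[Tuple[str, str], float]  # (src, dst) -> pheromone level

def sample_path(edges: Edges, start: str, length: int,
                rng: random.Random) -> List[str]:
    """Sample a walk, choosing each successor with probability
    proportional to its edge's pheromone (roulette-wheel selection)."""
    path, current = [start], start
    for _ in range(length):
        options = [(d, w) for (s, d), w in edges.items() if s == current]
        if not options:
            break
        r, acc = rng.random() * sum(w for _, w in options), 0.0
        for d, w in options:
            acc += w
            if r <= acc:
                current = d
                break
        path.append(current)
    return path

def update_pheromones(edges: Edges, path: List[str], quality: float,
                      evaporation: float = 0.1) -> None:
    """Evaporate every edge, then deposit pheromone proportional to the
    solution's quality along the sampled walk."""
    for key in edges:
        edges[key] *= (1.0 - evaporation)
    for s, d in zip(path, path[1:]):
        if (s, d) in edges:
            edges[(s, d)] += quality

# demo: sample a walk, then reinforce a walk judged high quality
edges: Edges = {("a", "b"): 1.0, ("a", "c"): 1.0}
path = sample_path(edges, "a", length=2, rng=random.Random(0))
update_pheromones(edges, ["a", "b"], quality=0.5)
```

Repeated over many sampled traversals, an update of this shape concentrates weight on edges that appear in high-quality solutions, which is how "learned composition patterns" can accumulate in the graph itself rather than in any single run.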