Knowledge Graph Representations for LLM-Based Policy Compliance Reasoning

arXiv cs.AI / 5/1/2026


Key Points

  • The paper proposes an agentic framework that converts AI policy documents into knowledge graphs (KGs) to retrieve policy-relevant details for answering questions.
  • It builds KGs from three AI risk-related policy documents using two different ontology schemas, then tests performance across five LLMs.
  • The evaluation covers 42 policy QA tasks spanning six reasoning types, ranging from entity lookup to cross-policy inference.
  • Results show that KG augmentation improves scores for all five LLMs, assessed via both heuristic metrics and an LLM-as-judge approach.
  • The study also finds that an open schema derived via LLM discovery can match or outperform a formal ontology schema, suggesting a more flexible KG design process.
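The pipeline described in the key points (extract triples from policy text into a KG, retrieve the question-relevant subgraph, and feed it to an LLM) can be sketched as follows. This is a minimal illustration under assumed names; `Triple`, `retrieve_subgraph`, and `build_prompt`, as well as the hand-written example triples, are hypothetical and not the paper's actual API or data.

```python
# Minimal sketch of KG-augmented policy QA, assuming a simple triple-store KG.
# All names and triples here are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

# Toy KG "built" from a policy document (hand-written triples for illustration).
kg = [
    Triple("EU AI Act", "classifies", "high-risk AI systems"),
    Triple("high-risk AI systems", "require", "conformity assessment"),
    Triple("NIST AI RMF", "defines", "AI risk management functions"),
]

def retrieve_subgraph(kg, entities):
    """Return every triple mentioning any entity found in the question."""
    return [t for t in kg if t.subject in entities or t.obj in entities]

def build_prompt(question, triples):
    """Serialize the retrieved triples as grounding context for the LLM."""
    facts = "\n".join(f"({t.subject}, {t.relation}, {t.obj})" for t in triples)
    return f"Facts:\n{facts}\n\nQuestion: {question}"

# Entity-lookup style query: fetch the subgraph, then build the LLM prompt.
subgraph = retrieve_subgraph(kg, {"high-risk AI systems"})
prompt = build_prompt("What do high-risk AI systems require?", subgraph)
```

In the "open schema" variant the paper describes, the relation vocabulary would be discovered by the LLM during extraction rather than fixed in advance by a formal ontology.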

Abstract

The risks posed by AI features are increasing as they are rapidly integrated into software applications. In response, regulations and standards for safe and secure AI have been proposed. In this paper, we present an agentic framework that constructs knowledge graphs (KGs) from AI policy documents and retrieves policy-relevant information to answer questions. We build KGs from three AI risk-related policies under two ontology schemas, and then evaluate five LLMs on 42 policy QA tasks spanning six reasoning types, from entity lookup to cross-policy inference, using both heuristic scoring and an LLM-as-judge. KG augmentation improves scores for all five models, and an open, LLM-discovered schema matches or exceeds the formal ontology.
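The abstract mentions heuristic scoring alongside LLM-as-judge evaluation. The paper's exact heuristic metrics are not specified in this summary, but a common choice for QA tasks is token-level F1 between the model's answer and the reference; the sketch below shows that metric as one plausible example, not as the paper's method.

```python
# Token-level F1, a standard heuristic QA metric (illustrative only;
# the paper's actual heuristic metrics are not given in this summary).
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Multiset intersection counts shared tokens, respecting repetitions.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

An LLM-as-judge pass would complement such surface-overlap metrics by scoring semantic correctness, which matters for reasoning types like cross-policy inference where a correct answer need not share many tokens with the reference.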