Competency Questions as Executable Plans: a Controlled RAG Architecture for Cultural Heritage Storytelling

arXiv cs.AI / 4/6/2026

Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes a neuro-symbolic, knowledge-graph-based controlled RAG architecture for cultural heritage storytelling to reduce LLM hallucinations and improve factual veracity.
  • It repurposes competency questions (CQs) as run-time executable narrative plans in a transparent plan–retrieve–generate workflow that is evidence-closed and auditable.
  • The approach uses a new resource, the Live Aid KG, a multimodal dataset that aligns 1985 concert data with the Music Meta Ontology and links external multimedia assets to support richer, verifiable narratives.
  • The authors compare three RAG strategies over the graph—symbolic KG-RAG, text-enriched Hybrid-RAG, and structure-aware Graph-RAG—and report a measurable trade-off among factual precision, contextual richness, and narrative coherence.
  • The results provide design guidance for building personalised and controllable storytelling systems for domains where correctness and traceability are critical.
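The plan–retrieve–generate idea above can be sketched in miniature: a competency question becomes a run-time triple-pattern query, every retrieved triple is kept as evidence, and generation only verbalises that evidence. The tiny triple store, entity names, and CQ template below are illustrative assumptions, not the paper's actual Live Aid KG or pipeline.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
KG = [
    ("Queen", "performedAt", "Live Aid 1985"),
    ("Queen", "performedSong", "Bohemian Rhapsody"),
    ("Live Aid 1985", "heldAt", "Wembley Stadium"),
]

def execute_cq(pattern):
    """Execute a CQ as a triple pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in KG
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def plan_retrieve_generate(artist):
    # Plan: the CQ "Which songs did <artist> perform, and where?"
    # expands into atomic retrieval steps.
    plan = [(artist, "performedSong", None), (artist, "performedAt", None)]
    # Retrieve: collect the triples matched by each step. Because the
    # output is built only from this evidence, it stays evidence-closed
    # and auditable.
    evidence = [t for step in plan for t in execute_cq(step)]
    # Generate: a trivial template stands in for the LLM call here.
    sentences = [f"{s} {p} {o}." for s, p, o in evidence]
    return evidence, " ".join(sentences)

evidence, story = plan_retrieve_generate("Queen")
```

In a real system the triple patterns would be SPARQL queries over the KG and the final step an LLM prompted only with the retrieved evidence, but the control flow is the same.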

Abstract

The preservation of intangible cultural heritage is a critical challenge as collective memory fades over time. While Large Language Models (LLMs) offer a promising avenue for generating engaging narratives, their propensity for factual inaccuracies or "hallucinations" makes them unreliable for heritage applications where veracity is a central requirement. To address this, we propose a novel neuro-symbolic architecture grounded in Knowledge Graphs (KGs) that establishes a transparent "plan-retrieve-generate" workflow for story generation. A key novelty of our approach is the repurposing of competency questions (CQs) - traditionally design-time validation artifacts - into run-time executable narrative plans. This approach bridges the gap between high-level user personas and atomic knowledge retrieval, ensuring that generation is evidence-closed and fully auditable. We validate this architecture using a new resource: the Live Aid KG, a multimodal dataset aligning 1985 concert data with the Music Meta Ontology and linking to external multimedia assets. We present a systematic comparative evaluation of three distinct Retrieval-Augmented Generation (RAG) strategies over this graph: a purely symbolic KG-RAG, a text-enriched Hybrid-RAG, and a structure-aware Graph-RAG. Our experiments reveal a quantifiable trade-off between the factual precision of symbolic retrieval, the contextual richness of hybrid methods, and the narrative coherence of graph-based traversal. Our findings offer actionable insights for designing personalised and controllable storytelling systems.
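The three retrieval strategies compared in the paper can be contrasted in a minimal sketch: symbolic KG-RAG returns only exact triples, Hybrid-RAG enriches those triples with linked free text, and Graph-RAG traverses the graph to return a connected subgraph. The toy data and traversal logic are illustrative assumptions, not the authors' implementation.

```python
# Toy graph and linked text corpus (illustrative, not the Live Aid KG).
TRIPLES = [
    ("Queen", "performedAt", "Live Aid 1985"),
    ("Live Aid 1985", "heldAt", "Wembley Stadium"),
]
DOCS = {"Queen": "Queen's Wembley set is often called rock's greatest live performance."}

def kg_rag(entity):
    """Symbolic: only exact triples mentioning the entity (high factual precision)."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def hybrid_rag(entity):
    """Hybrid: symbolic facts plus linked free text (richer context)."""
    return {"triples": kg_rag(entity), "text": DOCS.get(entity, "")}

def graph_rag(entity, hops=2):
    """Structure-aware: expand `hops` steps outward, yielding a connected
    subgraph that can support a coherent narrative arc."""
    frontier, subgraph = {entity}, []
    for _ in range(hops):
        new = [t for t in TRIPLES
               if (t[0] in frontier or t[2] in frontier) and t not in subgraph]
        subgraph += new
        frontier |= {x for t in new for x in (t[0], t[2])}
    return subgraph
```

The trade-off the paper quantifies is visible even here: `kg_rag` returns the fewest but most precise facts, `hybrid_rag` adds unstructured context, and `graph_rag` reaches facts (the Wembley venue) that no single-entity lookup would surface.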