Knowledge Graphs Generation from Cultural Heritage Texts: Combining LLMs and Ontological Engineering for Scholarly Debates

arXiv cs.AI / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes ATR4CH, a five-step methodology (foundational analysis, annotation schema development, pipeline architecture, integration refinement, and evaluation) for converting Cultural Heritage texts into RDF knowledge graphs using large language models (LLMs).
  • It validates the approach with a case study focused on authenticity assessment debates, demonstrating that the method can capture not only entities and metadata but also hypotheses, evidence, and discourse-level representations.
  • Experiments using a sequential pipeline of three LLMs (Claude Sonnet 3.7, Llama 3.3 70B, and GPT-4o-mini) achieve strong performance for metadata and evidence extraction, with more moderate scores for entity recognition and hypothesis/discourse-related tasks.
  • The authors find that smaller models can perform competitively, suggesting ATR4CH can be deployed in a more cost-effective way for institutions with varying resources.
  • A key limitation is that results are demonstrated on Wikipedia-only inputs, and the generated KGs still require human oversight during post-processing for scholarly reliability.
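
The sequential pipeline described above can be sketched as a chain of extraction stages whose outputs feed the next stage, ending in RDF serialization. This is an illustrative sketch only: the function names, placeholder return values, and predicate vocabulary are assumptions standing in for the paper's actual annotation schema and real LLM calls.

```python
# Hypothetical sketch of a sequential text-to-RDF extraction pipeline in the
# spirit of ATR4CH. Each stage stubs out what an LLM (e.g. Claude Sonnet 3.7,
# Llama 3.3 70B, or GPT-4o-mini) would return; names and values are invented.

def extract_metadata(text: str) -> dict:
    # Stage 1: an LLM would return item-level metadata from the article.
    return {"title": "Disputed Artifact", "period": "unknown"}

def extract_entities(text: str, metadata: dict) -> list:
    # Stage 2: a second LLM identifies entities, conditioned on the metadata.
    return ["Scholar A", "Institution B"]

def extract_claims(text: str, entities: list) -> list:
    # Stage 3: a third LLM extracts hypothesis/evidence statements as
    # (subject, predicate, object) tuples.
    return [("Scholar A", "disputes", "Disputed Artifact")]

def to_ntriples(triples, base="http://example.org/"):
    # Serialize (s, p, o) tuples as simple N-Triples lines.
    def uri(x):
        return f"<{base}{x.replace(' ', '_')}>"
    return "\n".join(f"{uri(s)} {uri(p)} {uri(o)} ." for s, p, o in triples)

text = "Scholar A disputes the authenticity of the artifact."
meta = extract_metadata(text)
ents = extract_entities(text, meta)
claims = extract_claims(text, ents)
print(to_ntriples(claims))
```

In a real deployment each stub would be an LLM call validated against the ontology, with human review of the resulting triples before publication.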

Abstract

Cultural Heritage texts contain rich knowledge that is difficult to query systematically due to the challenges of converting unstructured discourse into structured Knowledge Graphs (KGs). This paper introduces ATR4CH (Adaptive Text-to-RDF for Cultural Heritage), a systematic five-step methodology for Large Language Model-based Knowledge Extraction from Cultural Heritage documents. We validate the methodology through a case study on authenticity assessment debates.

Methodology - ATR4CH combines annotation models, ontological frameworks, and LLM-based extraction through iterative development: foundational analysis, annotation schema development, pipeline architecture, integration refinement, and comprehensive evaluation. We demonstrate the approach using Wikipedia articles about disputed items (documents, artifacts...), implementing a sequential pipeline with three LLMs (Claude Sonnet 3.7, Llama 3.3 70B, GPT-4o-mini).

Findings - The methodology successfully extracts complex Cultural Heritage knowledge: 0.96-0.99 F1 for metadata extraction, 0.7-0.8 F1 for entity recognition, 0.65-0.75 F1 for hypothesis extraction, 0.95-0.97 F1 for evidence extraction, and 0.62 G-EVAL for discourse representation. Smaller models performed competitively, enabling cost-effective deployment.

Originality - This is the first systematic methodology for coordinating LLM-based extraction with Cultural Heritage ontologies. ATR4CH provides a replicable framework adaptable across CH domains and institutional resources.

Research Limitations - The produced KG is limited to Wikipedia articles. While the results are encouraging, human oversight is necessary during post-processing.

Practical Implications - ATR4CH enables Cultural Heritage institutions to systematically convert textual knowledge into queryable KGs, supporting automated metadata enrichment and knowledge discovery.
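
The reported F1 scores compare predicted extractions against gold annotations. A minimal sketch of that computation, treating predictions and gold labels as sets (the sample items are illustrative, not from the paper's dataset):

```python
# Set-based precision/recall/F1, as typically used to score extraction tasks.

def f1_score(predicted: set, gold: set) -> float:
    # True positives are items found in both the predictions and the gold set.
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative entity sets: one prediction misses the exact gold phrasing.
gold = {"Shroud of Turin", "Walter McCrone", "radiocarbon dating"}
pred = {"Shroud of Turin", "Walter McCrone", "carbon dating"}
print(round(f1_score(pred, gold), 2))  # 2 matches out of 3 each -> 0.67
```

Exact set matching is a strict criterion; near-misses like "carbon dating" vs. "radiocarbon dating" count as errors, which is one reason entity and hypothesis scores trail the metadata scores.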