Knowledge Graphs Generation from Cultural Heritage Texts: Combining LLMs and Ontological Engineering for Scholarly Debates
arXiv cs.AI / 4/10/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes ATR4CH, a five-step methodology (annotation schema, pipeline design, ontology integration, refinement, and evaluation) for converting Cultural Heritage texts into RDF knowledge graphs using large language models (LLMs).
- It validates the approach with a case study focused on authenticity assessment debates, demonstrating that the method can capture not only entities and metadata but also hypotheses, evidence, and discourse-level representations.
- Experiments using a sequential pipeline of three LLMs (Claude 3.7 Sonnet, Llama 3.3 70B, and GPT-4o-mini) achieve strong performance on metadata and evidence extraction, with more moderate scores on entity recognition and hypothesis/discourse-level tasks.
- The authors find that smaller models can perform competitively, suggesting ATR4CH can be deployed in a more cost-effective way for institutions with varying resources.
- A key limitation is that results are demonstrated on Wikipedia-only inputs, and the generated KGs still require human oversight during post-processing for scholarly reliability.
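The sequential multi-LLM pipeline described above can be sketched minimally: each stage consumes the source text and appends triples to a shared graph, which is then serialized as RDF. The stage functions, identifiers, and vocabulary prefixes below are hypothetical stand-ins (the paper's actual prompts and schema are not given here); real stages would call the respective LLM APIs rather than the rule-based stubs used for illustration.

```python
from typing import Callable, List, Tuple

# A triple is (subject, predicate, object); prefixes like crm:/prov: are
# illustrative only, standing in for the paper's ontology terms.
Triple = Tuple[str, str, str]

def stage_metadata(doc_id: str, text: str) -> List[Triple]:
    # Stand-in for the metadata-extraction LLM stage (e.g. Claude 3.7 Sonnet).
    triples: List[Triple] = [(doc_id, "rdf:type", "crm:E31_Document")]
    if "Getty" in text:  # toy rule in place of a model call
        triples.append((doc_id, "dc:publisher", "Getty"))
    return triples

def stage_evidence(doc_id: str, text: str) -> List[Triple]:
    # Stand-in for the evidence-extraction stage: one node per matching sentence.
    triples: List[Triple] = []
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    for i, sent in enumerate(sentences):
        if "radiocarbon" in sent.lower():
            ev = f"{doc_id}/evidence/{i}"
            triples += [
                (ev, "rdf:type", "hico:Evidence"),
                (ev, "prov:wasDerivedFrom", doc_id),
            ]
    return triples

def run_pipeline(doc_id: str, text: str) -> List[Triple]:
    # Sequential composition: each stage sees the text and extends the graph.
    stages: List[Callable[[str, str], List[Triple]]] = [stage_metadata, stage_evidence]
    graph: List[Triple] = []
    for stage in stages:
        graph.extend(stage(doc_id, text))
    return graph

graph = run_pipeline(
    "doc1",
    "Radiocarbon dating was performed on the canvas. Getty archives record the sale.",
)
for s, p, o in graph:
    print(f"{s} {p} {o} .")
```

Keeping stages as independent functions over a shared triple list mirrors the paper's point that stages can be assigned to models of different sizes and costs, since each stage only needs the text and the accumulating graph.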