Construction of Knowledge Graph based on Language Model

arXiv cs.CL / 4/22/2026


Key Points

  • The paper reviews recent methods for building knowledge graphs (KGs) using pre-trained language models (PLMs), aiming to reduce reliance on manual annotation and improve automation.
  • It explains how PLMs can leverage language understanding and generation to extract KG components such as entities and relations from unstructured text.
  • The authors propose a new Hyper-Relational Knowledge Graph construction framework called LLHKG that uses a lightweight LLM.
  • The paper reports that LLHKG’s KG construction performance is comparable to GPT-3.5, suggesting lighter models may achieve similar effectiveness for KG tasks.

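To make the "hyper-relational" part of LLHKG concrete: a hyper-relational fact extends a plain (head, relation, tail) triple with qualifier key-value pairs that contextualize it. The paper's internal data format is not shown here; the sketch below is an illustrative Python representation under that standard definition.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HyperRelationalFact:
    """A main triple plus qualifier key-value pairs.

    Example: (Einstein, educated_at, ETH Zurich)
    qualified by {academic_degree: BSc, end_year: 1900}.
    """
    head: str
    relation: str
    tail: str
    qualifiers: tuple = ()  # ((key, value), ...) — hashable, order-preserving


# Illustrative fact (not taken from the paper's experiments):
fact = HyperRelationalFact(
    head="Albert Einstein",
    relation="educated_at",
    tail="ETH Zurich",
    qualifiers=(("academic_degree", "BSc"), ("end_year", "1900")),
)
```

Storing qualifiers as a tuple of pairs keeps the dataclass hashable, so facts can be deduplicated in a `set` during graph construction.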
Abstract

Knowledge Graphs (KGs) can effectively integrate valuable information from massive data, and have therefore been rapidly developed and widely adopted in many fields. Traditional KG construction methods rely on manual annotation, which consumes substantial time and manpower, while KG construction schemes based on deep learning tend to have weak generalization capabilities. With the rapid development of Pre-trained Language Models (PLMs), PLMs have shown great potential in the field of KG construction. This paper provides a comprehensive review of recent research advances in constructing KGs with PLMs. We explain how PLMs can use their language understanding and generation capabilities to automatically extract key information for KGs, such as entities and relations, from textual data. In addition, we propose LLHKG, a new Hyper-Relational Knowledge Graph construction framework based on a lightweight Large Language Model (LLM), and compare it with previous methods. Under our framework, the KG construction capability of a lightweight LLM is comparable to that of GPT-3.5.
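The extraction step the abstract describes — using an LLM's generation capability to pull entities and relations out of unstructured text — is typically implemented as prompt-then-parse. The paper's actual prompts and parsing logic are not given in this summary; the sketch below is a minimal assumed pipeline with a simulated model reply in place of a real LLM call.

```python
import json
import re


def build_extraction_prompt(text: str) -> str:
    # Hypothetical prompt template; the paper's real prompt wording is unknown.
    return (
        "Extract knowledge-graph facts from the text below as a JSON list of "
        'objects {"head": ..., "relation": ..., "tail": ..., "qualifiers": {...}}.\n'
        f"Text: {text}\nFacts:"
    )


def parse_facts(model_output: str) -> list:
    # LLMs often wrap JSON in prose, so locate the first JSON array
    # rather than parsing the whole reply.
    match = re.search(r"\[.*\]", model_output, re.DOTALL)
    return json.loads(match.group(0)) if match else []


# Simulated model reply standing in for a real LLM call:
reply = (
    'Here are the facts: [{"head": "Marie Curie", '
    '"relation": "award_received", "tail": "Nobel Prize in Physics", '
    '"qualifiers": {"year": "1903"}}]'
)
facts = parse_facts(reply)
```

Returning an empty list when no JSON array is found is a deliberate choice: malformed generations are common with lightweight LLMs, and skipping a sentence is usually preferable to crashing a long extraction run.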