LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs

arXiv cs.AI / April 13, 2026


Key Points

  • The paper evaluates four LLMs for supervised relation extraction against a graph-based parser, focusing on cases where the underlying sentence-level linguistic graphs are highly complex.
  • Across six relation extraction datasets with varying graph sizes and complexities, the graph-based parser’s relative advantage grows as the number of relations in the input increases.
  • Results indicate that LLMs still underperform smaller, non-LLM architectures for complex graphs, especially in relation-rich settings.
  • The authors conclude that lightweight graph-based parsing is a better practical choice than LLM-based approaches when dealing with highly complex linguistic graphs for knowledge-graph construction.

Abstract

Relation extraction is a fundamental component of knowledge-graph construction, among other applications. Large language models (LLMs) have been adopted as a promising tool for relation extraction, in both supervised and in-context learning settings. However, in this work we show that their performance still lags behind that of much smaller architectures when the linguistic graph underlying a text is highly complex. To demonstrate this, we evaluate four LLMs against a graph-based parser on six relation extraction datasets with sentence graphs of varying sizes and complexities. Our results show that the graph-based parser increasingly outperforms the LLMs as the number of relations in the input documents increases, making the much lighter graph-based parser a superior choice in the presence of complex linguistic graphs.