LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs
arXiv cs.AI / 4/13/2026
Key Points
- The paper evaluates four LLMs for supervised relation extraction against a graph-based parser, focusing on cases where the underlying sentence-level linguistic graphs are highly complex.
- Across six relation extraction datasets with varying graph sizes and complexities, the graph-based parser’s relative advantage grows as the number of relations in the input increases.
- Results indicate that LLMs still underperform smaller, non-LLM architectures on complex graphs, with the gap widening in relation-rich settings.
- The authors conclude that lightweight graph-based parsing is a better practical choice than LLM-based approaches when dealing with highly complex linguistic graphs for knowledge-graph construction.
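To make the notion of "graph complexity" concrete: a sentence's relation-extraction output can be viewed as a small labeled graph whose edge count is the number of relations, which is the axis along which the paper reports the parser's advantage growing. The sketch below is purely illustrative (the function name, entities, and triples are invented, not from the paper):

```python
# Hypothetical sketch: viewing one sentence's relation-extraction
# output as a labeled graph and measuring complexity by edge count.
# All names and triples are illustrative, not taken from the paper.

from collections import defaultdict

def build_relation_graph(triples):
    """Group (head, relation, tail) triples into an adjacency map."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return graph

# Toy gold annotation for a single sentence; a "relation-rich"
# sentence is simply one that yields many such triples.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "spouse", "Pierre Curie"),
]

graph = build_relation_graph(triples)
complexity = sum(len(edges) for edges in graph.values())
print(complexity)  # edge count = number of relations in the sentence
```

Under this view, the paper's finding is that as `complexity` grows, graph-based parsers pull further ahead of prompted LLMs.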