HCRE: LLM-based Hierarchical Classification for Cross-Document Relation Extraction with a Prediction-then-Verification Strategy
arXiv cs.CL / 4/10/2026
Key Points
- The paper studies whether Large Language Models (LLMs) improve cross-document relation extraction compared with the common “small language model + classifier” approach, finding that LLMs do not consistently outperform SLMs.
- It argues that the main bottleneck is the large set of predefined relation types, which LLMs struggle to discriminate among at inference time.
- To address this, the authors propose HCRE, an LLM-based hierarchical classification framework that uses a hierarchical relation tree to narrow candidate relations level-by-level.
- Because hierarchical classification is prone to error propagation (a wrong choice at an upper level rules out the correct relation below it), the method adds a prediction-then-verification strategy with multi-view verification at each level of the hierarchy.
- Experiments report that HCRE achieves better performance than existing baselines, supporting the effectiveness of hierarchical classification plus verification.
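The level-by-level narrowing plus per-level verification described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the relation tree, the `predict` function (a stand-in for an LLM call), and the `verify` function (a stand-in for multi-view verification) are all hypothetical.

```python
# Hypothetical sketch of hierarchical classification with a
# prediction-then-verification step at each level. All names below
# (RELATION_TREE, predict, verify) are illustrative assumptions.

RELATION_TREE = {
    "root": ["personal", "organizational"],
    "personal": ["born_in", "spouse_of"],
    "organizational": ["founded_by", "headquartered_in"],
}

def predict(context: str, candidates: list[str]) -> str:
    """Stand-in for an LLM prediction: pick the first candidate
    relation that appears in the context, else the first candidate."""
    for c in candidates:
        if c in context:
            return c
    return candidates[0]

def verify(context: str, prediction: str, candidates: list[str]) -> str:
    """Stand-in for multi-view verification: accept the prediction if
    it is supported; otherwise re-predict over the remaining candidates
    to limit error propagation down the tree."""
    if prediction in context:
        return prediction
    remaining = [c for c in candidates if c != prediction]
    return predict(context, remaining) if remaining else prediction

def hierarchical_extract(context: str) -> str:
    """Descend the relation tree, predicting and verifying one
    candidate set per level, until a leaf relation is reached."""
    node = "root"
    while node in RELATION_TREE:
        candidates = RELATION_TREE[node]
        pred = predict(context, candidates)
        node = verify(context, pred, candidates)
    return node

print(hierarchical_extract("A was born_in B; a personal relation"))
# → born_in
```

The key design point the paper motivates is the `verify` call at every level: without it, a single wrong branch at the top of the tree would make the correct leaf relation unreachable.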