HCRE: LLM-based Hierarchical Classification for Cross-Document Relation Extraction with a Prediction-then-Verification Strategy

arXiv cs.CL / 4/10/2026


Key Points

  • The paper studies whether Large Language Models (LLMs) improve cross-document relation extraction compared with the common “small language model + classifier” approach, finding that LLMs do not consistently outperform SLMs.
  • It argues that the main limitation lies in handling the large number of predefined relation types, which makes classification difficult for LLMs at inference time.
  • To address this, the authors propose HCRE, an LLM-based hierarchical classification framework that uses a hierarchical relation tree to narrow candidate relations level-by-level.
  • Because hierarchical classification can suffer from error propagation, the method adds a prediction-then-verification strategy with multi-view verification at each hierarchy level.
  • Experiments report that HCRE achieves better performance than existing baselines, supporting the effectiveness of hierarchical classification plus verification.

Abstract

Cross-document relation extraction (RE) aims to identify relations between head and tail entities located in different documents. Existing approaches typically adopt the "Small Language Model (SLM) + Classifier" paradigm. However, the limited language understanding ability of SLMs hinders further improvement of their performance. In this paper, we conduct a preliminary study to explore the performance of Large Language Models (LLMs) in cross-document RE. Despite their extensive parameters, our findings indicate that LLMs do not consistently surpass existing SLMs. Further analysis suggests that the underperformance is largely attributable to the challenges posed by the numerous predefined relations. To overcome this issue, we propose an LLM-based Hierarchical Classification model for cross-document RE (HCRE), which consists of two core components: 1) an LLM for relation prediction and 2) a hierarchical relation tree derived from the predefined relation set. This tree enables the LLM to perform hierarchical classification, where the target relation is inferred level by level. Since the number of child nodes is much smaller than the size of the entire predefined relation set, the hierarchical relation tree significantly reduces the number of relation options that the LLM needs to consider during inference. However, hierarchical classification introduces the risk of error propagation across levels. To mitigate this, we propose a prediction-then-verification inference strategy that improves prediction reliability through multi-view verification at each level. Extensive experiments show that HCRE outperforms existing baselines, validating its effectiveness.
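The level-by-level inference described above can be sketched in a few lines. The toy relation tree, the relation names, and the `llm_predict` / `llm_verify` stubs below are illustrative assumptions, not the authors' implementation; a real system would replace the stubs with LLM calls and use the paper's hierarchy derived from the predefined relation set.

```python
# Hypothetical sketch of HCRE-style hierarchical inference with a
# prediction-then-verification step at each level of the relation tree.

# Toy hierarchical relation tree: internal nodes are coarse relation
# groups, leaves are the predefined fine-grained relations.
RELATION_TREE = {
    "ROOT": ["personal", "organizational"],
    "personal": ["born_in", "spouse_of"],
    "organizational": ["founded_by", "member_of"],
}

def llm_predict(context, candidates):
    """Stand-in for an LLM call: pick one relation from a SMALL candidate set
    (the key benefit of the tree: far fewer options per call)."""
    return candidates[0]  # deterministic choice for this demo

def llm_verify(context, prediction, view):
    """Stand-in for one verification view (e.g. a paraphrased question or
    a reversed entity-pair check). A real verifier would query the LLM."""
    return True

def hierarchical_classify(context, tree, views=("forward", "backward")):
    """Walk the relation tree level by level, verifying each prediction
    from multiple views before descending, to limit error propagation."""
    node = "ROOT"
    while node in tree:  # stop when we reach a leaf relation
        candidates = tree[node]
        prediction = llm_predict(context, candidates)
        # Multi-view verification: accept only if all views agree;
        # otherwise fall back to an alternative candidate at this level.
        if not all(llm_verify(context, prediction, v) for v in views):
            remaining = [c for c in candidates if c != prediction]
            if remaining:
                prediction = remaining[0]
        node = prediction
    return node

print(hierarchical_classify("doc1 ... doc2 ...", RELATION_TREE))
```

Because each call only sees one node's children, the LLM chooses among a handful of options per level instead of the full relation set, while the verification step guards each descent.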