SciNLP: A Domain-Specific Benchmark for Full-Text Scientific Entity and Relation Extraction in NLP

arXiv cs.CL / 4/6/2026


Key Points

  • SciNLP is introduced as a domain-specific benchmark for full-text scientific entity and relation extraction in NLP, targeting a gap where most existing datasets cover only specific paper sections.
  • The benchmark includes 60 manually annotated NLP full-text publications with 6,409 entities and 1,648 relations, and claims to be the first such full-text annotation dataset in the NLP domain.
  • Experiments compare SciNLP against similar datasets and evaluate state-of-the-art supervised models, showing that extraction performance varies by academic text length and model capabilities.
  • Cross-dataset evaluation indicates SciNLP can yield significant performance improvements for certain baseline models.
  • Using models trained on SciNLP, the authors automatically construct a fine-grained NLP knowledge graph with an average node degree of 3.3, aimed at improving downstream applications, and they release the dataset publicly.
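The "average node degree" statistic cited above can be computed directly from a knowledge graph's edge list: each relation contributes one to the degree of its head entity and one to the degree of its tail entity. A minimal sketch on a toy graph (the entities and relation types here are illustrative, not taken from the SciNLP release):

```python
from collections import defaultdict

# Toy KG edge list: (head entity, relation type, tail entity).
# These triples are hypothetical examples, not SciNLP data.
edges = [
    ("BERT", "based-on", "Transformer"),
    ("BERT", "used-for", "NER"),
    ("NER", "subtask-of", "Information Extraction"),
]

# Count one degree per incident edge for head and tail nodes.
degree = defaultdict(int)
for head, _rel, tail in edges:
    degree[head] += 1
    degree[tail] += 1

avg_degree = sum(degree.values()) / len(degree)
print(avg_degree)  # 6 degree-endpoints over 4 nodes -> 1.5
```

Equivalently, for a graph with |E| edges and |V| nodes the average degree is 2|E| / |V|, so a higher value indicates denser semantic connectivity among extracted entities.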

Abstract

Structured information extraction from scientific literature is crucial for capturing core concepts and emerging trends in specialized fields. While existing datasets aid model development, most focus on specific publication sections due to domain complexity and the high cost of annotating scientific texts. To address this limitation, we introduce SciNLP - a specialized benchmark for full-text entity and relation extraction in the Natural Language Processing (NLP) domain. The dataset comprises 60 manually annotated full-text NLP publications, covering 6,409 entities and 1,648 relations. Compared to existing research, SciNLP is the first dataset providing full-text annotations of entities and their relationships in the NLP domain. To validate the effectiveness of SciNLP, we conducted comparative experiments with similar datasets and evaluated the performance of state-of-the-art supervised models on this dataset. Results reveal varying extraction capabilities of existing models across academic texts of different lengths. Cross-comparisons with existing datasets show that SciNLP achieves significant performance improvements on certain baseline models. Using models trained on SciNLP, we implemented automatic construction of a fine-grained knowledge graph for the NLP domain. Our KG has an average node degree of 3.3 per entity, indicating rich semantic topological information that enhances downstream applications. The dataset is publicly available at: https://github.com/AKADDC/SciNLP.
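To make the task concrete: full-text entity and relation extraction typically produces character-span entity mentions plus typed relations between them. A minimal sketch of what one annotated record might look like (the field names and entity/relation types here are hypothetical and do not represent SciNLP's actual schema):

```python
# Hypothetical annotation record for span-based entity/relation
# extraction; field names are illustrative, not SciNLP's schema.
doc = {
    "text": "We fine-tune BERT for named entity recognition.",
    "entities": [
        # Half-open character spans [start, end) into "text".
        {"id": "T1", "type": "Method", "start": 13, "end": 17},  # "BERT"
        {"id": "T2", "type": "Task", "start": 22, "end": 46},    # "named entity recognition"
    ],
    "relations": [
        # A typed, directed edge between two annotated entities.
        {"head": "T1", "type": "used-for", "tail": "T2"},
    ],
}

# Spans can be recovered by slicing the text.
for ent in doc["entities"]:
    print(ent["id"], doc["text"][ent["start"]:ent["end"]])
```

Full-text annotation means such records cover every section of a paper rather than only abstracts or isolated sections, which is what makes document length a variable in the reported evaluations.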
