SciImpact: A Multi-Dimensional, Multi-Field Benchmark for Scientific Impact Prediction

arXiv cs.CL / April 21, 2026


Key Points

  • SciImpact is introduced as a large-scale, multi-dimensional benchmark for predicting scientific impact across 19 fields, addressing the limitation of citation-only evaluations.
  • The benchmark combines heterogeneous data sources and targeted web crawling to measure multiple influence signals, including citations, awards, media attention, patent references, and artifact adoption.
  • SciImpact contains 215,928 contrastive paper pairs designed to capture meaningful impact differences in both short-term (e.g., Best Paper Award) and long-term (e.g., Nobel Prize) scenarios.
  • Evaluations of 11 widely used LLMs show that off-the-shelf models vary greatly by dimension and field, while multi-task supervised fine-tuning enables smaller open LLMs (around 4B parameters) to outperform larger models (around 30B) and beat certain strong closed-source models (e.g., o4-mini).
  • The authors position SciImpact as a challenging benchmark and a useful resource for developing models that can reason about scientific impact beyond citations.
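Because the benchmark frames impact prediction as contrastive paper pairs, the natural evaluation metric is pairwise accuracy: how often a model picks the truly higher-impact paper in each pair. The sketch below illustrates that scoring loop; the pair format, field names, and the toy stand-in "model" are assumptions for illustration, not the paper's actual data schema or protocol.

```python
# Minimal sketch of pairwise-accuracy scoring for a contrastive-pair benchmark.
# The (paper_a, paper_b, label) format is a hypothetical representation.

def pairwise_accuracy(pairs, predict):
    """Fraction of pairs where the model picks the higher-impact paper.

    pairs:   iterable of (paper_a, paper_b, label), label in {"a", "b"}
             marking which paper had greater measured impact.
    predict: callable mapping (paper_a, paper_b) -> "a" or "b".
    """
    correct = sum(1 for a, b, label in pairs if predict(a, b) == label)
    return correct / len(pairs) if pairs else 0.0

# Toy example: a stand-in heuristic that prefers the longer abstract.
toy_pairs = [
    ({"abstract": "short"}, {"abstract": "a much longer abstract"}, "b"),
    ({"abstract": "detailed methods and results"}, {"abstract": "note"}, "a"),
]
longer_abstract = lambda a, b: (
    "a" if len(a["abstract"]) >= len(b["abstract"]) else "b"
)
print(pairwise_accuracy(toy_pairs, longer_abstract))  # → 1.0
```

In a real run, `predict` would wrap an LLM prompt that shows both papers' metadata and asks which will have greater impact along a given dimension (citations, awards, media attention, etc.).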

Abstract

The rapid growth of scientific literature calls for automated methods to assess and predict research impact. Prior work has largely focused on citation-based metrics, leaving limited evaluation of models' capability to reason about other impact dimensions. To this end, we introduce SciImpact, a large-scale, multi-dimensional benchmark for scientific impact prediction spanning 19 fields. SciImpact captures various forms of scientific influence, ranging from citation counts to award recognition, media attention, patent references, and artifact adoption, by integrating heterogeneous data sources and targeted web crawling. It comprises 215,928 contrastive paper pairs reflecting meaningful impact differences in both short-term (e.g., Best Paper Award) and long-term settings (e.g., Nobel Prize). We evaluate 11 widely used large language models (LLMs) on SciImpact. Results show that off-the-shelf models exhibit substantial variability across dimensions and fields, while multi-task supervised fine-tuning consistently enables smaller LLMs (e.g., 4B) to markedly outperform much larger models (e.g., 30B) and surpass powerful closed-source LLMs (e.g., o4-mini). These results establish SciImpact as a challenging benchmark and demonstrate its value for multi-dimensional, multi-field scientific impact prediction. Our project homepage is at https://flypig23.github.io/sciimpact-homepage/