CRISP: Characterizing Relative Impact of Scholarly Publications
arXiv cs.CL / 3/31/2026
Key Points
- The paper introduces CRISP, a method that uses LLMs to jointly rank all cited works inside a citing paper to enable relative impact comparisons rather than evaluating citations in isolation.
- To reduce LLM positional bias, CRISP repeats the ranking three times with randomized orderings and aggregates results using majority voting.
- CRISP improves over a prior state-of-the-art impact classifier, achieving +9.5% accuracy and +8.3% F1 on a human-annotated citation dataset.
- The approach is designed to be more efficient, requiring fewer LLM calls than per-citation evaluation, and remains competitive when run with an open-source model, supporting scalable and cost-effective analysis.
- The authors release the produced rankings, impact labels, and a codebase to encourage follow-on research on citation impact characterization.
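The bias-mitigation step described in the key points (rank the cited works three times over randomly shuffled orderings, then merge by majority vote) could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `llm_rank_fn` is a hypothetical stand-in for the actual LLM ranking call, and the aggregation rule (most common position per item, with mean position as a tie-breaker) is an assumed reading of "majority voting".

```python
import random
from collections import Counter

def rank_citations(citations, llm_rank_fn, runs=3, seed=0):
    """Run the ranker `runs` times over shuffled inputs, then merge
    the resulting rankings by majority vote on positions.

    `llm_rank_fn(items)` is a hypothetical callable returning the
    items reordered from most to least impactful."""
    rng = random.Random(seed)
    rankings = []
    for _ in range(runs):
        shuffled = citations[:]
        rng.shuffle(shuffled)  # randomized ordering to counter positional bias
        rankings.append(llm_rank_fn(shuffled))

    # Collect each item's position across all runs.
    positions = {c: [] for c in citations}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            positions[item].append(pos)

    # Majority vote: most common position wins; mean position breaks ties.
    def vote(item):
        mode_pos, _ = Counter(positions[item]).most_common(1)[0]
        return (mode_pos, sum(positions[item]) / len(positions[item]))

    return sorted(citations, key=vote)
```

With a deterministic ranker, the shuffles cancel out and all three runs agree, so the majority vote simply recovers that ranker's order.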