PeeriScope: A Multi-Faceted Framework for Evaluating Peer Review Quality

arXiv cs.CL, April 28, 2026

Key Points

  • The paper introduces PeeriScope, a modular framework for assessing the quality of scholarly peer reviews across multiple dimensions, addressing the growing scale and variability of the review process.
  • It combines structured features, rubric-guided evaluations using large language models, and supervised prediction for systematic and interpretable assessment.
  • PeeriScope is designed to be open and integrable, offering both a public interface and a documented API for deployment and further research development.
  • The included demonstration shows applications such as reviewer self-assessment, editorial triage, and large-scale auditing.
  • PeeriScope is available via a live demo and through API services on GitHub, enabling external teams to adopt or extend the system (a hypothetical client call is sketched below).
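
The announcement states that a documented API exists (https://github.com/Reviewerly-Inc/Peeriscope) but gives no endpoint details, so the minimal Python sketch below only illustrates what a client call might look like: the route, the request field name, and the response shape are all assumptions for illustration, not the project's actual interface.

```python
import requests

# Hypothetical endpoint: the route, payload schema, and response schema
# below are illustrative guesses, not documented PeeriScope API details.
API_URL = "https://app.reviewer.ly/api/peeriscope/evaluate"  # assumed route

review_text = (
    "The paper is well motivated, but the evaluation lacks baselines "
    "and the ablation section does not isolate the proposed component."
)

response = requests.post(
    API_URL,
    json={"review": review_text},  # assumed request schema
    timeout=30,
)
response.raise_for_status()

# Assumed response shape: one score per quality dimension.
for dimension, score in response.json().items():
    print(f"{dimension}: {score}")
```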

Abstract

The increasing scale and variability of peer review in scholarly venues have created an urgent need for systematic, interpretable, and extensible tools to assess review quality. We present PeeriScope, a modular platform that integrates structured features, rubric-guided large language model assessments, and supervised prediction to evaluate peer review quality along multiple dimensions. Designed for openness and integration, PeeriScope provides both a public interface and a documented API, supporting practical deployment and research extensibility. The demonstration illustrates its use for reviewer self-assessment, editorial triage, and large-scale auditing, and supports continued development of quality-evaluation methods for scientific peer review. PeeriScope is available both as a live demo at https://app.reviewer.ly/app/peeriscope and via API services at https://github.com/Reviewerly-Inc/Peeriscope.
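
The abstract names three components (structured features, rubric-guided LLM assessments, and supervised prediction) without detailing how they fit together. The Python sketch below is a conceptual illustration of how such a pipeline could be wired up, not the paper's implementation: the feature names, the keyword heuristic standing in for a rubric-guided LLM call, and the fixed linear weights are all assumptions made for this example.

```python
from dataclasses import dataclass


@dataclass
class ReviewSignals:
    """Signals a PeeriScope-style pipeline might combine (names are ours)."""
    n_words: int               # structured feature: review length
    n_questions: int           # structured feature: engagement proxy
    rubric_specificity: float  # rubric-guided LLM score in [0, 1]


def structured_features(review: str) -> tuple[int, int]:
    """Cheap, deterministic features computed directly from the text."""
    return len(review.split()), review.count("?")


def rubric_llm_score(review: str) -> float:
    """Placeholder for a rubric-guided LLM judgment.

    A real system would prompt an LLM with a rubric item such as
    'Does the review point to specific sections or results?' and map
    the answer to a score; here we stub it with a keyword heuristic.
    """
    cues = ("section", "table", "figure", "equation", "baseline")
    return min(1.0, sum(c in review.lower() for c in cues) / 3)


def predict_quality(sig: ReviewSignals) -> float:
    """Supervised prediction stage, stubbed as a fixed linear model.

    In practice the weights would be learned from labeled reviews;
    these values are purely illustrative.
    """
    w_len, w_q, w_rubric, bias = 0.002, 0.05, 0.6, 0.1
    raw = (bias + w_len * sig.n_words + w_q * sig.n_questions
           + w_rubric * sig.rubric_specificity)
    return max(0.0, min(1.0, raw))


review = ("The ablation in Section 4.2 does not control for model size; "
          "how would Table 3 change with a matched baseline?")
n_words, n_questions = structured_features(review)
signals = ReviewSignals(n_words, n_questions, rubric_llm_score(review))
print(f"predicted quality: {predict_quality(signals):.2f}")
```

Under these assumptions, the three stages map cleanly onto the use cases listed above: the cheap structured features enable large-scale auditing, while the rubric-guided and supervised scores support reviewer self-assessment and editorial triage.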
