AI-Assisted Peer Review at Scale: The AAAI-26 AI Review Pilot

arXiv cs.AI / April 16, 2026


Key Points

  • The paper reports a first large-scale field deployment of AI-assisted peer review at AAAI-26, where every main-track submission received a clearly identified AI-generated review.
  • The system produced reviews for 22,977 full-review papers in under a day using a multi-stage pipeline that combines frontier models, tool use, and safeguards.
  • A large-scale author and program-committee survey found participants considered the AI reviews useful and, in key areas like technical accuracy and research suggestions, even preferred them to human reviews.
  • The work introduces a new benchmark on which the proposed system substantially outperforms a simple LLM-generated-review baseline at detecting a variety of scientific weaknesses.

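The paper describes the pipeline only at a high level (frontier models, tool use, and safeguards applied in stages), so the following is a hypothetical sketch of how such a multi-stage flow might be structured. All names here (`draft_review`, `run_checks`, `apply_safeguards`) and the keyword heuristic standing in for real model and tool calls are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of a multi-stage AI review pipeline.
# Each stage is a stub: in a real system, stage 1 would call a
# frontier model, stage 2 would invoke tools (citation lookup,
# code/math checking), and stage 3 would apply release safeguards.
from dataclasses import dataclass, field

@dataclass
class Review:
    paper_id: str
    summary: str = ""
    weaknesses: list = field(default_factory=list)
    flagged: bool = False  # held back for human inspection if True

def draft_review(paper_id: str, text: str) -> Review:
    # Stage 1: a model drafts a structured review; stubbed with a
    # trivial "summary" taken from the paper text.
    return Review(paper_id=paper_id, summary=text[:80])

def run_checks(review: Review, text: str) -> Review:
    # Stage 2: tool use grounds concrete weaknesses; stubbed with a
    # keyword heuristic in place of real analysis tools.
    if "no baseline" in text.lower():
        review.weaknesses.append("Missing baseline comparison")
    return review

def apply_safeguards(review: Review) -> Review:
    # Stage 3: safeguards filter or flag reviews before release,
    # e.g. flagging empty/unsupported reviews for a human pass.
    review.flagged = len(review.weaknesses) == 0
    return review

def review_paper(paper_id: str, text: str) -> Review:
    return apply_safeguards(run_checks(draft_review(paper_id, text), text))

r = review_paper("sub-001", "We propose X but report no baseline results.")
```

Staging the work this way is also what makes the reported throughput plausible: each stage is stateless per paper, so tens of thousands of submissions can be processed in parallel.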
Abstract

Scientific peer review faces mounting strain as submission volumes surge, making it increasingly difficult to sustain review quality, consistency, and timeliness. Recent advances in AI have led the community to consider its use in peer review, yet a key unresolved question is whether AI can generate technically sound reviews at real-world conference scale. Here we report the first large-scale field deployment of AI-assisted peer review: every main-track submission at AAAI-26 received one clearly identified AI review from a state-of-the-art system. The system combined frontier models, tool use, and safeguards in a multi-stage process to generate reviews for all 22,977 full-review papers in less than a day. A large-scale survey of AAAI-26 authors and program committee members showed that participants not only found AI reviews useful, but actually preferred them to human reviews on key dimensions such as technical accuracy and research suggestions. We also introduce a novel benchmark and find that our system substantially outperforms a simple LLM-generated review baseline at detecting a variety of scientific weaknesses. Together, these results show that state-of-the-art AI methods can already make meaningful contributions to scientific peer review at conference scale, opening a path toward the next generation of synergistic human-AI teaming for evaluating research.