PRL-Bench: A Comprehensive Benchmark Evaluating LLMs' Capabilities in Frontier Physics Research

arXiv cs.AI / 4/20/2026


Key Points

  • The PRL-Bench benchmark is proposed to evaluate LLMs’ ability to perform end-to-end physics research, focusing on exploration, long-horizon workflows, and procedural complexity rather than just domain knowledge comprehension.
  • It is built from 100 expert-curated Physical Review Letters papers (from issues since August 2025) and covers five major, theory- and computation-intensive physics subfields: astrophysics, condensed matter physics, high-energy physics, quantum information, and statistical physics.
  • Each benchmark task is designed to mimic authentic research conditions, including formulation steps that encourage exploration and objectively verifiable end-to-end workflows without relying on experiments.
  • Results across frontier models show overall performance is limited, with the best score under 50, indicating a significant gap between current LLM capabilities and the demands of real scientific research.
  • The authors position PRL-Bench as a reliable testbed for guiding and assessing the next generation of AI systems aimed at more autonomous scientific discovery.
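The properties listed above (per-subfield tasks, objective verifiability, an overall score) can be sketched as a minimal data model. This is a hypothetical illustration, not the paper's actual schema: the class names, fields, and tolerance-based verifier are assumptions.

```python
from dataclasses import dataclass

# Illustrative subfield labels, matching the five areas PRL-Bench covers.
SUBFIELDS = {
    "astrophysics",
    "condensed matter physics",
    "high-energy physics",
    "quantum information",
    "statistical physics",
}

@dataclass
class PhysicsTask:
    """A hypothetical benchmark task with an objectively verifiable answer."""
    paper_id: str
    subfield: str
    prompt: str
    reference_answer: float
    rel_tol: float = 1e-3  # assumed relative tolerance for numeric checking

    def verify(self, candidate: float) -> bool:
        # Objective verifiability: compare the model's numeric output
        # against the expert-derived reference value.
        return abs(candidate - self.reference_answer) <= self.rel_tol * abs(self.reference_answer)

def overall_score(results: list[bool]) -> float:
    """Fraction of verified tasks, scaled to 0-100."""
    return 100.0 * sum(results) / len(results)
```

Under this sketch, a "best overall score below 50" would mean fewer than half of the tasks pass their verifiers; the paper's actual scoring rubric may weight tasks differently.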

Abstract

The paradigm of agentic science requires AI systems to conduct robust reasoning and engage in long-horizon, autonomous exploration. However, current scientific benchmarks remain confined to domain knowledge comprehension and complex reasoning, failing to evaluate the exploratory nature and procedural complexity of real-world research. In this work, we present research-oriented evaluations in theoretical and computational physics, a natural testbed with comprehensive domain knowledge, complex reasoning, and verifiable end-to-end workflows that do not rely on experiments. Here we introduce PRL-Bench (Physics Research by LLMs), a benchmark designed to systematically map the capability boundaries of LLMs in executing end-to-end physics research. Constructed from 100 curated papers from the latest issues of Physical Review Letters since August 2025 and validated by domain experts, PRL-Bench covers five major theory- and computation-intensive subfields of modern physics: astrophysics, condensed matter physics, high-energy physics, quantum information, and statistical physics. Each task in the benchmark is designed to replicate the core properties of authentic scientific research, including exploration-oriented formulation, long-horizon workflows, and objective verifiability, thereby reconstructing the essential reasoning processes and research workflows of real physics research. Evaluation across frontier models shows that performance remains limited, with the best overall score below 50, revealing a pronounced gap between current LLM capabilities and the demands of real scientific research. PRL-Bench serves as a reliable testbed for assessing next-generation AI scientists and advancing AI systems toward autonomous scientific discovery.