AI Navigate

CR-Bench: Evaluating the Real-World Utility of AI Code Review Agents

arXiv cs.AI / 3/13/2026


Key Points

  • The study introduces CR-Bench, a benchmarking dataset, and CR-Evaluator, a fine-grained evaluation pipeline for code review agents.
  • It addresses the lack of standardized benchmarks and granular evaluation protocols for reasoning-intensive code review tasks and the high cost of false positives.
  • The evaluation compares single-shot and Reflexion-based agents across two frontier models, revealing a low signal-to-noise ratio when the goal is to identify all hidden issues.
  • The results show that relying on resolution-rate metrics can mask true progress and hamper developer productivity.
  • Together, CR-Bench and CR-Evaluator lay the groundwork for studying AI-based code review in real-world software engineering workflows as LLM-based systems transition from benchmarks to practice.
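The resolution-versus-noise trade-off described above can be illustrated with a small sketch. The function, issue IDs, and numbers below are hypothetical, invented for this example; they are not from CR-Bench or CR-Evaluator.

```python
# Hypothetical sketch of the resolution-vs-noise trade-off: an agent tuned
# to surface every hidden issue can score a perfect resolution rate while
# burying reviewers in spurious findings.

def review_metrics(findings, ground_truth_issues):
    """Return (resolution_rate, signal_to_noise) for one review.

    findings: set of issue IDs the agent flagged (real or spurious).
    ground_truth_issues: set of seeded issue IDs the benchmark expects.
    """
    true_hits = findings & ground_truth_issues
    resolution_rate = len(true_hits) / len(ground_truth_issues)
    # Signal-to-noise: fraction of flagged findings that are real issues.
    signal_to_noise = len(true_hits) / len(findings) if findings else 0.0
    return resolution_rate, signal_to_noise

truth = {"bug-1", "bug-2"}
aggressive = {"bug-1", "bug-2", "fp-1", "fp-2", "fp-3", "fp-4"}  # flags everything
conservative = {"bug-1"}                                          # flags sparingly

print(review_metrics(aggressive, truth))    # resolution 1.0, but ~0.33 signal
print(review_metrics(conservative, truth))  # resolution 0.5, but 1.0 signal
```

Judged by resolution rate alone, the aggressive agent looks strictly better; the signal-to-noise column shows the hidden cost to the developer triaging its output.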

Abstract

Recent advances in frontier large language models have enabled code review agents that operate in open-ended, reasoning-intensive settings. However, the lack of standardized benchmarks and granular evaluation protocols makes it difficult to assess the behavior of code review agents beyond coarse success metrics, particularly for tasks where false positives are costly. To address this gap, we introduce CR-Bench, a benchmarking dataset, and CR-Evaluator, a fine-grained evaluation pipeline for code review agents. Using these tools, we conduct a preliminary study evaluating both a single-shot agent and a Reflexion-based agent across two frontier models. We find that code review agents can exhibit a low signal-to-noise ratio when designed to identify all hidden issues, obscuring true progress and harming developer productivity when performance is measured solely by resolution rates. Our analysis identifies the hidden trade-off between issue resolution and spurious findings, revealing a frontier that constrains effective agent design. Together, CR-Bench and CR-Evaluator provide a timely foundation for studying and developing code review agents as LLM-based systems transition from controlled benchmarks to real-world software engineering workflows.