Paper Reconstruction Evaluation: Evaluating Presentation and Hallucination in AI-written Papers

arXiv cs.CL / 4/3/2026


Key Points

  • The paper introduces Paper Reconstruction Evaluation (PaperRecon), a framework that tests AI-written papers by generating a new full draft from an automatically created overview and comparing it to the original source paper (a minimal pipeline sketch follows this list).
  • It evaluates two separate risk/quality dimensions: Presentation quality (via a rubric) and Hallucination risk (via agentic evaluation grounded in the original paper).
  • The authors release PaperWrite-Bench, comprising 51 post-2025 top-venue papers across diverse domains to support systematic evaluation of coding-agent paper writing.
  • Experiments reveal a trade-off between the two evaluated systems: ClaudeCode tends to score higher on presentation but averages more than 10 hallucinations per paper, while Codex produces fewer hallucinations at the cost of presentation quality.
  • The work is positioned as an early step toward standardizing reliability and risk assessment for AI-driven research-paper generation.
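
Below is a minimal Python sketch of the reconstruction loop described above. The names and signatures (make_overview, writing_agent, rubric_score, count_hallucinations, ReconResult) are illustrative placeholders assumed for this sketch, not the paper's actual implementation or interfaces.

```python
# Illustrative sketch of a PaperRecon-style loop; all helpers are passed in
# as callables so the sketch stays self-contained. Names are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ReconResult:
    paper_id: str
    presentation: float   # rubric-based presentation score
    hallucinations: int   # count of claims unsupported by the original paper


def paper_recon(
    paper_id: str,
    original_paper: str,
    make_overview: Callable[[str], str],                # paper -> overview.md text
    writing_agent: Callable[[str], str],                # overview -> full AI-written draft
    rubric_score: Callable[[str], float],               # draft -> presentation score
    count_hallucinations: Callable[[str, str], int],    # (draft, original) -> count
) -> ReconResult:
    overview = make_overview(original_paper)   # step 1: condense the source paper
    draft = writing_agent(overview)            # step 2: agent reconstructs a full paper
    return ReconResult(                        # step 3: score two orthogonal dimensions
        paper_id,
        presentation=rubric_score(draft),
        hallucinations=count_hallucinations(draft, original_paper),
    )
```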

Abstract

This paper introduces the first systematic evaluation framework for quantifying the quality and risks of papers written by modern coding agents. While AI-driven paper writing has become a growing concern, rigorous evaluation of the quality and potential risks of AI-written papers remains limited, and a unified understanding of their reliability is still lacking. We introduce Paper Reconstruction Evaluation (PaperRecon), an evaluation framework in which an overview (overview.md) is created from an existing paper, an agent then generates a full paper from the overview and minimal additional resources, and the result is compared against the original paper. PaperRecon disentangles the evaluation of AI-written papers into two orthogonal dimensions, Presentation and Hallucination: Presentation is evaluated using a rubric, while Hallucination is assessed via agentic evaluation grounded in the original paper source. For evaluation, we introduce PaperWrite-Bench, a benchmark of 51 papers published after 2025 at top-tier venues across diverse domains. Our experiments reveal a clear trade-off: while both ClaudeCode and Codex improve with model advances, ClaudeCode achieves higher presentation quality at the cost of more than 10 hallucinations per paper on average, whereas Codex produces fewer hallucinations but lower presentation quality. This work takes a first step toward establishing evaluation frameworks for AI-driven paper writing and improving the understanding of its risks within the research community.
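
For the benchmark-level comparison the abstract describes, per-paper results could be aggregated along the two dimensions as in the following sketch. ReconResult refers to the hypothetical record from the earlier sketch; the function and variable names here are assumptions, not the authors' code or reported numbers.

```python
# Hypothetical aggregation over per-paper results on PaperWrite-Bench.
# `results` is one list of ReconResult records per evaluated system
# (e.g., one list for ClaudeCode and one for Codex).
from statistics import mean


def summarize(results):
    """Report mean presentation score and mean hallucination count per paper."""
    return {
        "avg_presentation": mean(r.presentation for r in results),
        "hallucinations_per_paper": mean(r.hallucinations for r in results),
    }


# Usage (illustrative): compare two systems on the same 51-paper benchmark.
# print(summarize(claudecode_results))
# print(summarize(codex_results))
```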