Can Coding Agents Reproduce Findings in Computational Materials Science?

arXiv cs.CL / 5/4/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces AutoMat, a benchmark designed to test whether LLM-based coding agents can reproduce scientific claims in computational materials science, going beyond coding performance alone.
  • AutoMat evaluates three linked capabilities: reconstructing underspecified procedures from limited information, using specialized toolchains, and judging whether the produced evidence actually supports a claim.
  • Using claims curated from real materials science papers and testing multiple coding-agent setups across foundation models, the study finds that overall reproduction success is low.
  • The best-performing configuration reaches only a 54.1% success rate, with failures most common when workflows must be reconstructed from paper text and when agents deviate from or incompletely follow required methods (a hypothetical scoring sketch follows this list).
  • The authors position AutoMat as both a reproducibility benchmark for AI-for-science and a diagnostic tool to identify current weaknesses of agentic systems in scientific workflows.
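
To make the "reproduce a claim" framing concrete, here is a minimal sketch of a per-claim task record and a success-rate metric. This is a hypothetical reading of the benchmark, not the authors' actual schema: the field names (`claim`, `paper_context`, `toolchain`, `supported`) and the `success_rate` helper are assumptions introduced for this post.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record for one AutoMat-style task; the paper's real schema is not
# shown in this post, so these field names are illustrative only.
@dataclass
class ClaimTask:
    claim: str                        # scientific claim to reproduce or refute
    paper_context: str                # excerpt with the (often underspecified) procedure
    toolchain: List[str]              # domain tools the agent is expected to drive
    supported: Optional[bool] = None  # agent's verdict; None if the run never finished

def success_rate(tasks: List[ClaimTask], ground_truth: List[bool]) -> float:
    """Fraction of tasks where the agent finished and matched the expert verdict."""
    hits = sum(
        1 for task, truth in zip(tasks, ground_truth)
        if task.supported is not None and task.supported == truth
    )
    return hits / len(tasks) if tasks else 0.0
```

Under a metric of this shape, the best configuration reported in the paper would land at roughly 0.541.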

Abstract

Large language models are increasingly deployed as autonomous coding agents and have achieved remarkably strong performance on software engineering benchmarks. However, it is unclear whether such success transfers to computational scientific workflows, where tasks require not only strong coding ability, but also the ability to navigate complex, domain-specific procedures and to interpret results in the context of scientific claims. To address this question, we present AutoMat, a benchmark for evaluating LLM-based agents' ability to reproduce claims from computational materials science. AutoMat poses three interrelated challenges: recovering underspecified computational procedures, navigating specialized toolchains, and determining whether the resulting evidence supports a claim. By working closely with subject matter experts, we curate a set of claims from real materials science papers to test whether coding agents can recover and execute the end-to-end workflow needed to support (or undermine) such claims. We then evaluate multiple representative coding agent settings across several foundation models. Our results show that current LLM-based agents obtain low overall success rates on AutoMat, with the best-performing setting achieving a success rate of only 54.1%. Error analysis further reveals that agents perform worst when workflows must be reconstructed from paper text alone and that they fail primarily due to incomplete procedures, methodological deviations, and execution fragility. Taken together, these findings position AutoMat as both a benchmark for computational scientific reproducibility and a tool for diagnosing the current limitations of agentic systems in AI-for-science settings.
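
The error analysis in the abstract names three failure patterns. As a hedged illustration of how such failures could be tallied for diagnosis, the sketch below uses an enum of outcome labels and a small summary function; both are assumptions of mine for exposition, not the paper's code or taxonomy names.

```python
from collections import Counter
from enum import Enum, auto

# Hypothetical failure taxonomy mirroring the error analysis in the abstract;
# labels and logic are illustrative assumptions, not the authors' code.
class Outcome(Enum):
    SUCCESS = auto()
    INCOMPLETE_PROCEDURE = auto()  # required workflow steps were skipped
    METHOD_DEVIATION = auto()      # a different method or setting was substituted
    EXECUTION_FRAGILITY = auto()   # code or toolchain crashed before producing evidence

def summarize(runs: list) -> dict:
    """Share of runs in each category, to diagnose where agents break down."""
    counts = Counter(runs)
    total = len(runs) or 1
    return {outcome.name: counts[outcome] / total for outcome in Outcome}

# Toy example: one run per category.
print(summarize([Outcome.SUCCESS, Outcome.INCOMPLETE_PROCEDURE,
                 Outcome.METHOD_DEVIATION, Outcome.EXECUTION_FRAGILITY]))
```

A breakdown along these lines is what would let the authors attribute low scores to incomplete procedures, methodological deviations, and execution fragility, rather than to coding ability alone.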