Benchmarking Testing in Automated Theorem Proving

arXiv cs.CL / 4/28/2026


Key Points

  • The article introduces a test-based framework, T, to evaluate semantic correctness in automated theorem proving, treating a generated theorem as correct only if all dependent successor theorems compile successfully (see the Lean sketch after this list).
  • It addresses limitations of prior evaluation methods that used lexical overlap proxies or costly manual inspection, arguing that semantic evaluation should resemble integration testing in software.
  • The authors build a benchmark from five real-world Lean 4 repositories, generating 2,206 problems with an average of 41 successor theorems per problem, extracted automatically.
  • Experimental results show that leading LLM-based systems can have high compilation success under existing metrics but much lower performance under the proposed semantic testing metric.
  • The best reported model, Claude-Sonnet-4.5, reaches only 38.9% Testing Accuracy on the full benchmark, highlighting a significant gap in current theorem generation capabilities.
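
To make the criterion concrete, here is a minimal, hypothetical Lean 4 sketch (the theorem names are illustrative, not drawn from the benchmark). If the candidate theorem's statement were semantically wrong, say `a + b = a`, the successor theorem would no longer compile, and the problem would be scored incorrect even though the candidate itself type-checks.

```lean
-- A candidate theorem as a prover might generate it
-- (hypothetical names, not from the paper's benchmark).
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A "successor" theorem that depends on the candidate above.
-- Under the framework's criterion, the candidate counts as
-- semantically correct only if every such dependent theorem
-- still compiles.
theorem add_comm_succ (a b : Nat) : (a + b) + 1 = (b + a) + 1 := by
  rw [add_comm' a b]
```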

Abstract

Recent advances in large language models (LLMs) have shown promise in formal theorem proving, yet evaluating semantic correctness remains challenging. Existing evaluations rely on indirect proxies such as lexical overlap with human-annotated proofs, or expensive manual inspection. Inspired by the shift from lexical comparison to test-based evaluation in code generation, we propose T, a framework that evaluates the semantic correctness of formal theorems: a generated theorem is considered correct only if all dependent successor theorems compile successfully, analogous to integration testing. We construct a benchmark from 5 real-world Lean 4 repositories, comprising 2,206 problems paired with 41 successor theorems on average, extracted automatically without human effort. Experiments demonstrate that while state-of-the-art models achieve high compilation success, they perform significantly worse under our semantic metric. The best model, Claude-Sonnet-4.5, achieves only 38.9% Testing Accuracy on the full set, given both the natural-language proof and successor theorems as context, revealing a critical gap in current theorem generation capabilities.
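
The evaluation loop the abstract describes resembles an integration-test runner: substitute the generated theorem into the repository, then check that every dependent theorem still compiles. Below is a rough Python sketch of how Testing Accuracy could be computed under that reading; the names (`Problem`, `compiles`, `testing_accuracy`) and the use of `lake env lean` as the compilation check are assumptions for illustration, not the paper's actual harness.

```python
import subprocess
from dataclasses import dataclass


@dataclass
class Problem:
    generated_file: str         # Lean file containing the candidate theorem
    successor_files: list[str]  # Lean files holding dependent successor theorems


def compiles(lean_file: str) -> bool:
    """Compile one Lean file inside the project's toolchain.

    Assumes the repository is a Lake project and `lake` is on PATH.
    """
    result = subprocess.run(
        ["lake", "env", "lean", lean_file],
        capture_output=True,
    )
    return result.returncode == 0


def testing_accuracy(problems: list[Problem]) -> float:
    """Fraction of problems whose successor theorems *all* compile.

    A single failing successor marks the whole problem incorrect,
    mirroring the integration-testing criterion.
    """
    passed = sum(
        1
        for p in problems
        if compiles(p.generated_file)
        and all(compiles(f) for f in p.successor_files)
    )
    return passed / len(problems)
```

Note the all-or-nothing aggregation: with an average of 41 successors per problem, a candidate theorem that is subtly too weak or mis-stated is very likely to break at least one dependent proof, which is why this metric is far stricter than compilation success alone.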