Benchmarking Testing in Automated Theorem Proving
arXiv cs.CL / April 28, 2026
📰 News · Developer Stack & Infrastructure · Models & Research
Key Points
- The article introduces a test-based framework (T) to evaluate semantic correctness in automated theorem proving, treating a theorem as correct only if all dependent successor theorems compile successfully (a toy Lean sketch of this criterion follows the list).
- It addresses limitations of prior evaluation methods that used lexical overlap proxies or costly manual inspection, arguing that semantic evaluation should resemble integration testing in software.
- The authors build a benchmark from five real-world Lean 4 repositories, automatically extracting 2,206 problems with an average of 41 successor theorems per problem.
- Experimental results show that leading LLM-based systems can achieve high compilation success under existing metrics yet score much lower under the proposed semantic testing metric.
- The best reported model, Claude-Sonnet-4.5, reaches only 38.9% Testing Accuracy on the full benchmark, highlighting a significant gap in current theorem generation capabilities.
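
The criterion behind this Testing Accuracy metric can be illustrated with a small, self-contained Lean 4 example. The names `add_comm'` and `swap_sum` below are hypothetical stand-ins invented for illustration, not theorems from the benchmark; the sketch only shows how a successor theorem acts as an integration test on a regenerated predecessor.

```lean
-- Hypothetical predecessor: suppose a model must regenerate this theorem.
-- A candidate that compiles in isolation may still state something
-- semantically different from the original.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Successor theorem: this compiles only if `add_comm'` provides the exact
-- commutativity equation the `rw` step relies on. A semantically wrong
-- regeneration breaks this proof, so the candidate is scored as incorrect.
theorem swap_sum (a b c : Nat) : a + b + c = b + a + c := by
  rw [add_comm' a b]
```

In the article's framework, the candidate theorem replaces the original in its repository and counts as correct only if every dependent successor (here, just `swap_sum`) still compiles, mirroring how an integration test exercises a changed component through its downstream consumers.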