QED: An Open-Source Multi-Agent System for Generating Mathematical Proofs on Open Problems
arXiv cs.AI / 4/28/2026
Key Points
- The paper tests whether frontier LLMs can generate genuinely novel, nontrivial proofs for open research problems, finding that benchmark success does not translate reliably to research-grade proving.
- It identifies seven specific failure modes in LLM-based proof generation, including context contamination, citation hallucinations, vague reasoning at key steps, unstable proof plans, and a single-model bottleneck.
- The authors argue the gap is primarily a system-design issue rather than one of raw model capability, and they map each failure mode to a specific architectural choice in their approach.
- They introduce QED, an open-source multi-agent proof system whose design choices target those failure modes, and report correct proofs on 3 out of 5 expert-provided open problems in applied analysis and PDEs.
- The system is publicly released, and domain experts verified the successful proofs as original and nontrivial.
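The key points describe an architecture in which separate agents handle planning, proving, and checking, with independent review mitigating the single-model bottleneck. The paper's actual agent roles and interfaces are not detailed in this summary, so the `Planner`/`Prover`/`Critic` names and stub logic below are illustrative assumptions, not QED's implementation — a minimal sketch of how such a loop might be wired:

```python
# Hypothetical sketch of a multi-agent proof loop in the spirit of QED.
# Agent names and behavior are assumptions for illustration only;
# real agents would wrap LLM calls rather than return canned strings.
from dataclasses import dataclass, field


@dataclass
class ProofAttempt:
    plan: list
    steps: list = field(default_factory=list)
    verified: bool = False


class Planner:
    """Decomposes a problem into a fixed proof plan (stub)."""
    def plan(self, problem: str) -> list:
        return [f"key lemma for: {problem}", "combine lemmas", "conclude"]


class Prover:
    """Fills in each plan item with a proof step (stub)."""
    def prove(self, plan_item: str) -> str:
        return f"proof of ({plan_item})"


class Critic:
    """Independently checks each step, so no single model both
    writes and accepts a proof (the single-model bottleneck)."""
    def check(self, step: str) -> bool:
        return step.startswith("proof of")


def run_pipeline(problem: str, max_rounds: int = 3) -> ProofAttempt:
    planner, prover, critic = Planner(), Prover(), Critic()
    attempt = ProofAttempt(plan=planner.plan(problem))
    for _ in range(max_rounds):
        attempt.steps = [prover.prove(item) for item in attempt.plan]
        if all(critic.check(s) for s in attempt.steps):
            attempt.verified = True  # stop once every step passes review
            break
    return attempt
```

Separating the plan from the step-filling loop addresses the "unstable proof plans" failure mode noted above: the plan is fixed up front, and only the steps are revised across rounds.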