QED-Nano: Teaching a Tiny Model to Prove Hard Theorems

arXiv cs.AI / 4/7/2026


Key Points

  • The paper introduces QED-Nano, a 4B open math-logic model post-trained to generate Olympiad-level proofs, addressing the cost and opacity of proprietary theorem-proving pipelines.
  • Its training approach uses three stages: supervised fine-tuning from DeepSeek-Math-V2 for proof-writing style, reinforcement learning with rubric-based rewards, and an expanded RL stage with a reasoning cache that iteratively summarizes and refines long proofs.
  • QED-Nano reportedly outperforms larger open proof models (e.g., Nomos-1 and GPT-OSS-120B) and approaches the performance of proprietary systems like Gemini 3 Pro while using far lower inference cost.
  • To enable reproducibility and further research, the authors release the full training pipeline, including the QED-Nano/QED-Nano-SFT models, FineProofs datasets, and the associated training and evaluation code.
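The rubric-based rewards in the second stage can be pictured as a weighted checklist over proof qualities. The sketch below is a minimal illustration under assumptions: the criterion names, weights, and the `rubric_reward` function are hypothetical, not taken from the paper.

```python
# Minimal sketch of a rubric-based reward signal for RL on proofs.
# All names (Criterion, rubric_reward, the criteria themselves) are
# illustrative assumptions, not the paper's actual rubric.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float

def rubric_reward(checks: dict[str, bool], criteria: list[Criterion]) -> float:
    """Weighted fraction of satisfied rubric criteria, in [0, 1]."""
    total = sum(c.weight for c in criteria)
    earned = sum(c.weight for c in criteria if checks.get(c.name, False))
    return earned / total if total > 0 else 0.0

# Hypothetical rubric for a single generated proof.
criteria = [
    Criterion("states_key_lemma", 0.3),
    Criterion("handles_all_cases", 0.4),
    Criterion("conclusion_follows", 0.3),
]
checks = {"states_key_lemma": True,
          "handles_all_cases": False,
          "conclusion_follows": True}
print(rubric_reward(checks, criteria))  # 0.6
```

A graded reward like this gives the RL stage denser feedback than a binary correct/incorrect verdict, which is presumably why a rubric is used for open-ended proof text.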

Abstract

Proprietary AI systems have recently demonstrated impressive capabilities on complex proof-based problems, with gold-level performance reported at the 2025 International Mathematical Olympiad (IMO). However, the training pipelines behind these systems remain largely undisclosed, and their reliance on large "internal" models and scaffolds makes them expensive to run, difficult to reproduce, and hard to study or improve upon. This raises a central question: can small, open models also be trained to achieve competitive reasoning performance on difficult Olympiad-level math? In this paper, we answer this question by building QED-Nano, a 4B model post-trained for Olympiad-level proofs. Our training recipe has three stages: (1) supervised fine-tuning to imbue good proof-writing styles by distilling from DeepSeek-Math-V2, (2) reinforcement learning (RL) with rubric-based rewards, and (3) expanding RL with a reasoning cache, which decomposes long proofs into iterative summarize-and-refine cycles and enables stronger test-time reasoning. QED-Nano surpasses the proof-generation performance of much larger open models, including Nomos-1 and GPT-OSS-120B, and approaches the performance of proprietary models like Gemini 3 Pro, at a fraction of the inference cost. To support further research on open mathematical reasoning, we release the full QED-Nano pipeline, including the QED-Nano and QED-Nano-SFT models, the FineProofs-SFT and FineProofs-RL datasets, and the training and evaluation code.
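The third stage's reasoning cache, which decomposes long proofs into summarize-and-refine cycles, can be sketched as a loop that conditions each generation round on a compact summary rather than the full trace. This is a hedged illustration: `generate`, `summarize`, and the loop structure are assumptions standing in for the paper's actual mechanism.

```python
# Hypothetical sketch of a reasoning cache for long proofs: each cycle
# generates against a compact cached summary instead of the full history.
# `generate` stands in for a model call; all names here are illustrative.

def summarize(text: str, max_chars: int = 200) -> str:
    """Placeholder summarizer: truncate the running reasoning state."""
    return text[:max_chars]

def prove_with_cache(problem: str, generate, rounds: int = 3) -> str:
    cache = ""   # compact summary of reasoning so far
    proof = ""
    for _ in range(rounds):
        # Condition on the cached summary, keeping context length bounded.
        proof = generate(problem, cache)
        cache = summarize(cache + "\n" + proof)
    return proof
```

Keeping only a summary in context bounds the prompt length per cycle, which is what enables stronger test-time reasoning on proofs too long to fit in a single pass.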