Surface Sensitivity in Lean 4 Autoformalization
arXiv cs.LG / April 28, 2026
Key Points
- The paper studies whether paraphrase-induced differences in Lean 4 autoformalization outputs stem from real semantic disagreement or from shallower failures.
- Using 60 deterministic paraphrase rules on the ProofNet# and miniF2F datasets, the authors test multiple GPT-family models and several open-weight 7B autoformalizers.
- They find that when both the original and paraphrased outputs successfully compile, the paired formalizations are semantically equivalent under BEq+ and structurally very similar under GTED.
- In contrast, paraphrasing strongly changes whether outputs compile, indicating that the main sensitivity comes from compilation-boundary failures rather than semantic divergence.
- The authors recommend that future training and benchmark designs focus on compile-boundary robustness and explicitly distinguish compile-conditional equivalence from surface consistency.
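The compile-conditional protocol implied by the key points above can be sketched as follows. This is a hedged illustration, not the authors' code: `paraphrase`, `autoformalize`, and `compiles` are hypothetical toy stand-ins (two string rewrites, a template, a length check) for the paper's 60 rules, the tested models, and the real Lean 4 compiler.

```python
def paraphrase(statement: str) -> str:
    """Two toy deterministic surface rules (the paper uses 60)."""
    return (statement
            .replace("Show that", "Prove that")
            .replace("iff", "if and only if"))

def autoformalize(statement: str) -> str:
    """Stand-in for a model call; a trivial deterministic template."""
    body = statement.replace("Show that ", "").replace("Prove that ", "")
    return f"theorem t : {body} := by sorry"

def compiles(lean_code: str) -> bool:
    """Stand-in for invoking the Lean 4 compiler; a toy length cutoff
    that lets some surface rewrites cross the 'compile boundary'."""
    return len(lean_code) <= 35

def classify(original: str) -> str:
    """Bucket a statement pair the way the analysis does: compare
    semantics only when BOTH variants compile, otherwise record a
    compile-boundary effect."""
    a = autoformalize(original)
    b = autoformalize(paraphrase(original))
    ca, cb = compiles(a), compiles(b)
    if ca and cb:
        # Both compile: a BEq+/GTED-style comparison would go here;
        # the toy check is plain string equality.
        return "equivalent" if a == b else "divergent"
    if ca != cb:
        return "compile-boundary flip"
    return "both fail"

print(classify("Show that 1 + 1 = 2"))  # equivalent: the paraphrase is absorbed
print(classify("Show that p iff q"))    # compile-boundary flip: the rewrite
                                        # pushes the output past the toy limit
```

Under this framing, the paper's finding is that real pipelines behave like the first call far more often than the second would suggest: when both variants compile they agree, and the observed sensitivity concentrates in the compile-boundary flips.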