Learning to Generate Formally Verifiable Step-by-Step Logic Reasoning via Structured Formal Intermediaries

arXiv cs.AI / 4/1/2026


Key Points

  • The paper argues that LLMs trained with outcome-rewarded reinforcement learning can still produce unreliable intermediate reasoning steps even when the final answer is correct.
  • It introduces PRoSFI (Process Reward over Structured Formal Intermediates), which rewards only reasoning chains whose structured intermediate steps are verified by a formal prover.
  • Instead of requiring direct formal proofs from the model, PRoSFI has a 7B-scale model generate structured intermediates aligned with its natural-language reasoning, then checks each step formally.
  • The method is presented as improving reasoning reliability while maintaining accuracy, effectively steering models toward more credible, machine-checkable reasoning.
  • The work positions structured formal intermediates plus formal verification as a simple, effective training approach for trustworthy reasoning models.

Abstract

Large language models (LLMs) have recently demonstrated impressive performance on complex, multi-step reasoning tasks, especially when post-trained with outcome-rewarded reinforcement learning (Guo et al., 2025). However, outcome rewards often overlook flawed intermediate steps, yielding unreliable reasoning even when final answers are correct. To address this, we propose PRoSFI (Process Reward over Structured Formal Intermediates), a novel reward method that enhances reasoning reliability without compromising accuracy. Instead of generating formal proofs directly, which is rarely achievable for a modest-sized (7B) model, the model outputs structured intermediate steps aligned with its natural-language reasoning. Each step is then verified by a formal prover, and only fully validated reasoning chains receive high rewards. The integration of formal verification guides the model toward generating step-by-step, machine-checkable proofs, thereby yielding more credible final answers. PRoSFI offers a simple and effective approach to training trustworthy reasoning models.
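The reward scheme described in the abstract can be sketched in a few lines: every structured intermediate step is passed to a prover, and only a chain in which all steps verify (and the final answer is correct) earns the high reward. This is a minimal illustrative sketch, not the paper's implementation; the function names, the step-level checker interface, and the binary reward values are all assumptions.

```python
# Hypothetical sketch of PRoSFI's all-or-nothing process reward.
# `verify_step` stands in for a formal prover's step-level check
# (interface assumed; the paper does not specify one).
from typing import Callable, List

def prosfi_reward(
    steps: List[str],                    # structured formal intermediates
    verify_step: Callable[[str], bool],  # assumed prover interface
    final_answer_correct: bool,
    high: float = 1.0,
    low: float = 0.0,
) -> float:
    """Only chains whose every step verifies receive the high reward."""
    all_verified = all(verify_step(s) for s in steps)
    return high if (all_verified and final_answer_correct) else low

# Toy usage with a stand-in "prover" that accepts tagged steps.
toy_prover = lambda s: s.endswith("[verified]")
chain = ["x > 0 [verified]", "x + 1 > 1 [verified]"]
print(prosfi_reward(chain, toy_prover, final_answer_correct=True))  # 1.0
print(prosfi_reward(chain + ["bad step"], toy_prover, True))        # 0.0
```

The all-or-nothing structure is what distinguishes this from a pure outcome reward: a correct final answer reached through an unverifiable step still receives the low reward.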