VeriAct: Beyond Verifiability -- Agentic Synthesis of Correct and Complete Formal Specifications

arXiv cs.AI / 4/2/2026


Key Points

  • The paper evaluates how well classical and LLM prompt-based methods can synthesize Java Modeling Language (JML) specifications, including attempts to improve results via prompt optimization with verification feedback.
  • It finds a key limitation: higher verifier pass rates do not necessarily imply that synthesized specifications are correct and complete, since the verifier can miss over- or under-constrained specs.
  • To measure this gap, the authors introduce Spec-Harness, an evaluation framework using symbolic verification to assess specification correctness and completeness beyond what standard verifier acceptance indicates.
  • They propose VeriAct, a verification-guided agentic loop (LLM planning, synthesis/repair, code execution, verification, and Spec-Harness feedback) designed to iteratively produce specs that are both verifiable and genuinely correct and complete.
  • Experiments on two benchmark datasets indicate VeriAct outperforms prompt-based and prompt-optimized baselines, reducing the fraction of “verifier-accepted but wrong/incomplete” specifications.
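The "verifier-accepted but wrong/incomplete" gap is easiest to see with a weak postcondition. Below is a minimal, hypothetical illustration (the method names and JML annotations are invented for this summary, not taken from the paper): the spec `ensures \result >= 0` is verifiable for both implementations, yet it never relates the result to the input, so a clearly wrong implementation also satisfies it.

```java
public class AbsExample {
    //@ ensures \result >= 0;
    // Under-constrained: the postcondition never relates \result to x,
    // so a deductive verifier accepts it without guaranteeing correctness.
    public static int abs(int x) {
        return x < 0 ? -x : x;
    }

    //@ ensures \result >= 0;
    // This obviously wrong implementation satisfies the same weak spec:
    // 0 is always non-negative, so the verifier still accepts it.
    public static int absWrong(int x) {
        return 0;
    }

    public static void main(String[] args) {
        System.out.println("abs(-5)=" + abs(-5) + ", absWrong(-5)=" + absWrong(-5));
    }
}
```

A symbolic checker in the spirit of Spec-Harness would flag this spec as incomplete, because behaviors the programmer cares about (e.g., `abs(-5) == 5`) are not pinned down by the postcondition.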

Abstract

Formal specifications play a central role in ensuring software reliability and correctness. However, automatically synthesizing high-quality formal specifications remains a challenging task, often requiring domain expertise. Recent work has applied large language models to generate specifications in Java Modeling Language (JML), reporting high verification pass rates. But does passing a verifier mean that the specification is actually correct and complete? In this work, we first conduct a comprehensive evaluation comparing classical and prompt-based approaches for automated JML specification synthesis. We then investigate whether prompt optimization can push synthesis quality further by evolving prompts through structured verification feedback. While optimization improves verifier pass rates, we find a clear performance ceiling. More critically, we propose Spec-Harness, an evaluation framework that measures specification correctness and completeness through symbolic verification, revealing that a large fraction of verifier-accepted specifications, including optimized ones, are in fact incorrect or incomplete, over- or under-constraining both inputs and outputs in ways invisible to the verifier. To push beyond this ceiling, we propose VeriAct, a verification-guided agentic framework that iteratively synthesizes and repairs specifications through a closed loop of LLM-driven planning, code execution, verification, and Spec-Harness feedback. Our experiments on two benchmark datasets show that VeriAct outperforms both prompt-based and prompt-optimized baselines, producing specifications that are not only verifiable but also correct and complete.
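The closed loop the abstract describes (LLM planning and synthesis/repair, verification, and Spec-Harness feedback) can be sketched as follows. This is a hypothetical skeleton for illustration only: the interfaces `SpecSynthesizer`, `Verifier`, and `SpecHarness` are placeholders invented here, and the paper's actual components and APIs may differ.

```java
import java.util.Optional;

public class VeriActLoopSketch {
    // Hypothetical component interfaces; empty Optional means "no complaint".
    interface SpecSynthesizer { String propose(String code, String feedback); } // LLM synthesis/repair
    interface Verifier        { Optional<String> check(String code, String spec); } // deductive verifier
    interface SpecHarness     { Optional<String> audit(String code, String spec); } // correctness/completeness audit

    static Optional<String> synthesize(String code, SpecSynthesizer llm,
                                       Verifier verifier, SpecHarness harness,
                                       int maxRounds) {
        String feedback = "";
        for (int round = 0; round < maxRounds; round++) {
            String spec = llm.propose(code, feedback);          // synthesize (or repair, given feedback)
            Optional<String> verr = verifier.check(code, spec); // must pass the verifier first
            if (verr.isPresent()) { feedback = verr.get(); continue; }
            Optional<String> herr = harness.audit(code, spec);  // then pass the symbolic audit
            if (herr.isPresent()) { feedback = herr.get(); continue; }
            return Optional.of(spec); // verifiable AND correct/complete
        }
        return Optional.empty(); // budget exhausted
    }
}
```

The key design point the abstract emphasizes is the second gate: a spec that merely passes the verifier is not returned; it must also survive the Spec-Harness-style audit, and any complaint from either stage is fed back into the next repair round.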