CodeSpecBench: Benchmarking LLMs for Executable Behavioral Specification Generation

arXiv cs.CL / April 15, 2026

Key Points

  • CodeSpecBench is introduced as a new benchmark to evaluate how well LLMs generate executable behavioral specifications (preconditions/postconditions) from natural language instructions.
  • The benchmark uses an execution-based evaluation protocol and represents specifications as executable Python functions to measure both correctness (accepting valid behaviors) and completeness (rejecting invalid behaviors).
  • It supports function-level and repository-level tasks built from diverse real-world codebases to better reflect realistic specification-generation settings.
  • Testing 15 state-of-the-art LLMs shows a steep performance drop on repository-level tasks, with the top model reaching only a 20.2% pass rate.
  • The results suggest specification generation is substantially harder than code generation, implying that strong code-writing ability may not equate to accurate understanding of intended program semantics.
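To make the idea concrete, here is a minimal sketch of what an executable behavioral specification in this style might look like. All names (`precondition`, `postcondition`, `check`) and the max-of-a-list task are illustrative assumptions, not taken from the benchmark itself; the point is only that the spec is ordinary runnable Python, so correctness (accepting a valid implementation) and completeness (rejecting an invalid one) can both be tested by execution.

```python
# Hypothetical example: a behavioral spec for "return the maximum of a
# non-empty list of numbers", written as plain executable Python functions.

def precondition(xs):
    # The input must be a non-empty list of ints/floats.
    return (isinstance(xs, list) and len(xs) > 0
            and all(isinstance(x, (int, float)) for x in xs))

def postcondition(xs, result):
    # The result must be an element of the input that dominates all others.
    return result in xs and all(result >= x for x in xs)

def check(impl, xs):
    """Execution-based check: run an implementation and test it against the spec."""
    if not precondition(xs):
        return None  # input is outside the spec's domain
    return postcondition(xs, impl(xs))

# Correctness: a valid behavior is accepted.
assert check(max, [3, 1, 2]) is True
# Completeness: an invalid behavior (min instead of max) is rejected.
assert check(min, [3, 1, 2]) is False
```

Under an execution-based protocol, a generated spec of this shape would be scored by running it against known-good and known-bad behaviors rather than by comparing its text to a reference.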

Abstract

Large language models (LLMs) can generate code from natural language, but the extent to which they capture intended program behavior remains unclear. Executable behavioral specifications, defined via preconditions and postconditions, provide a concrete means to assess such understanding. However, existing work on specification generation is constrained in evaluation methodology, task settings, and specification expressiveness. We introduce CodeSpecBench, a benchmark for executable behavioral specification generation under an execution-based evaluation protocol. CodeSpecBench supports both function-level and repository-level tasks and encodes specifications as executable Python functions. Constructed from diverse real-world codebases, it enables a realistic assessment of both correctness (accepting valid behaviors) and completeness (rejecting invalid behaviors). Evaluating 15 state-of-the-art LLMs on CodeSpecBench, we observe a sharp performance degradation on repository-level tasks, where the best model attains only a 20.2% pass rate. We further find that specification generation is substantially more challenging than code generation, indicating that strong coding performance does not necessarily reflect deep understanding of intended program semantics. Our data and code are available at https://github.com/SparksofAGI/CodeSpecBench.