AI Navigate

Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts

arXiv cs.AI / 3/17/2026


Key Points

  • The paper benchmarks state-of-the-art LLMs on Solidity smart contract analysis using a balanced dataset of 400 contracts across two tasks: Error Detection (binary vulnerability classification) and Error Classification (mapping issues to specific vulnerability categories).
  • It investigates zero-shot prompting strategies, including zero-shot, zero-shot Chain-of-Thought (CoT), and zero-shot Tree-of-Thought (ToT).
  • In the Error Detection task, CoT and ToT substantially increase recall to approximately 95-99%, but typically reduce precision, indicating more false positives in a more sensitive decision regime.
  • In the Error Classification task, Claude 3 Opus achieves the best Weighted F1-score (90.8) under the ToT prompt, with CoT following closely.
  • The results highlight trade-offs between recall and precision in AI-assisted vulnerability detection for smart contracts and demonstrate notable performance gains with advanced prompting techniques.
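The three prompting regimes compared in the study can be sketched as templates. The wording below is a hypothetical reconstruction for illustration only; the paper's actual prompt texts are not reproduced here.

```python
# Hypothetical templates for the three zero-shot regimes benchmarked in
# the paper: plain zero-shot, zero-shot CoT, and zero-shot ToT. Each is
# filled with the Solidity source and asks for a binary verdict, as in
# the Error Detection task.

ZERO_SHOT = (
    "You are a Solidity security auditor. Is the following contract "
    "vulnerable? Answer 'vulnerable' or 'safe'.\n\n{src}"
)

COT = (
    "You are a Solidity security auditor. Think step by step: trace the "
    "control flow, external calls, and state updates, then answer "
    "'vulnerable' or 'safe'.\n\n{src}"
)

TOT = (
    "You are a Solidity security auditor. Propose several candidate "
    "hypotheses about possible flaws, evaluate each branch, discard the "
    "weak ones, then answer 'vulnerable' or 'safe'.\n\n{src}"
)

def build_prompt(strategy: str, src: str) -> str:
    """Fill the chosen template with the contract source code."""
    templates = {"zero-shot": ZERO_SHOT, "cot": COT, "tot": TOT}
    return templates[strategy].format(src=src)
```

The CoT variant adds an explicit step-by-step instruction, while the ToT variant asks the model to branch into competing hypotheses before committing, which matches the more sensitive (higher-recall) behavior the benchmark reports.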

Abstract

Smart contracts play a central role in blockchain systems by encoding financial and operational logic, yet their susceptibility to subtle security flaws poses significant risks of financial loss and erosion of trust. Large language models (LLMs) create new opportunities for automating vulnerability detection, but the effectiveness of different prompting strategies and model choices in real-world contexts remains uncertain. This paper evaluates state-of-the-art LLMs on Solidity smart contract analysis using a balanced dataset of 400 contracts under two tasks: (i) Error Detection, where the model performs binary classification to decide whether a contract is vulnerable, and (ii) Error Classification, where the model must assign the predicted issue to a specific vulnerability category. Models are evaluated using zero-shot prompting strategies, including plain zero-shot, zero-shot Chain-of-Thought (CoT), and zero-shot Tree-of-Thought (ToT). In the Error Detection task, CoT and ToT substantially increase recall (often approaching 95–99%), but typically reduce precision, indicating a more sensitive decision regime with more false positives. In the Error Classification task, Claude 3 Opus attains the best Weighted F1-score (90.8) under the ToT prompt, followed closely by its CoT result.
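The recall/precision trade-off described above can be made concrete with a small worked example. The confusion-matrix counts below are hypothetical (chosen to mimic a balanced 400-contract set of 200 vulnerable and 200 safe contracts), not the paper's raw numbers.

```python
# Worked illustration of the trade-off: a more sensitive CoT/ToT-style
# decision regime raises recall but lowers precision. Counts are
# invented for illustration, not taken from the paper.

def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Cautious plain zero-shot run: misses more bugs, fewer false alarms.
p0, r0, f0 = prf(tp=160, fp=20, fn=40)
# Sensitive CoT-style run: catches nearly every bug, more false alarms.
p1, r1, f1 = prf(tp=196, fp=60, fn=4)

# Recall rises (0.80 -> 0.98) while precision falls (~0.89 -> ~0.77),
# mirroring the "more sensitive decision regime" the paper reports.
```

Whether the F1 score improves on balance depends on how costly false positives are relative to missed vulnerabilities; for security auditing, where a missed flaw can mean direct financial loss, the higher-recall regime is often the preferred operating point.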