Evaluating the Formal Reasoning Capabilities of Large Language Models through Chomsky Hierarchy

arXiv cs.CL / 4/6/2026


Key Points

  • The paper argues that current LLM benchmarks do not systematically evaluate formal reasoning in terms of computation and complexity, especially relative to the Chomsky hierarchy of formal languages.
  • It introduces ChomskyBench, a benchmark that covers the full Chomsky hierarchy and combines natural-language process-trace evaluation with deterministic symbolic verifiability.
  • Experimental results show a clear performance stratification across hierarchy levels: as task difficulty increases, performance drops sharply and inference length grows.
  • Although larger models and more advanced inference methods yield relative gains, the study finds steep efficiency barriers: reaching practical reliability would require prohibitively high computational costs.
  • The analysis concludes that limitations are driven more by inefficiency than by absolute capability, and it emphasizes the continued indispensability of traditional software tools for formal tasks.

Abstract

The formal reasoning capabilities of LLMs are crucial for advancing automated software engineering. However, existing benchmarks for LLMs lack systematic evaluation based on computation and complexity, leaving a critical gap in understanding their formal reasoning capabilities. Therefore, it is still unknown whether SOTA LLMs can grasp the structured, hierarchical complexity of formal languages as defined by Computation Theory. To address this, we introduce ChomskyBench, a benchmark for systematically evaluating LLMs through the lens of the Chomsky Hierarchy. Unlike prior work that uses vectorized classification for neural networks, ChomskyBench is the first to combine full Chomsky Hierarchy coverage, process-trace evaluation via natural language, and deterministic symbolic verifiability. ChomskyBench is composed of a comprehensive suite of language recognition and generation tasks designed to test capabilities at each level. Extensive experiments indicate a clear performance stratification that correlates with the hierarchy's levels of complexity. Our analysis reveals a direct relationship where increasing task difficulty substantially impacts both inference length and performance. Furthermore, we find that while larger models and advanced inference methods offer notable relative gains, they face severe efficiency barriers: achieving practical reliability would require prohibitive computational costs, revealing that current limitations stem from inefficiency rather than absolute capability bounds. A time complexity analysis further indicates that LLMs are significantly less efficient than traditional algorithmic programs for these formal tasks. These results delineate the practical limits of current LLMs, highlight the indispensability of traditional software tools, and provide insights to guide the development of future LLMs with more powerful formal reasoning capabilities.
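To make the abstract's notion of "language recognition tasks at each level" concrete, here is an illustrative sketch (not taken from the paper, and not ChomskyBench's actual task format) contrasting two adjacent hierarchy levels: a regular language, decidable by a finite automaton, versus a context-free language, which requires unbounded memory. Traditional algorithmic recognizers like these run in linear time, which is the baseline the paper's time-complexity analysis compares LLMs against.

```python
# Illustrative examples of language recognition at two Chomsky hierarchy
# levels (hypothetical tasks, not drawn from the ChomskyBench paper).

def accepts_regular(s: str) -> bool:
    """Recognize the regular language a*b* with a two-state DFA."""
    state = 0  # 0: still reading a's, 1: switched to b's
    for ch in s:
        if ch == "a":
            if state == 1:   # an 'a' after a 'b' -> reject
                return False
        elif ch == "b":
            state = 1
        else:
            return False     # alphabet is {a, b}
    return True

def accepts_context_free(s: str) -> bool:
    """Recognize a^n b^n, which no finite automaton can decide:
    the recognizer must match the counts of a's and b's."""
    n = len(s)
    half = n // 2
    return (n % 2 == 0
            and all(c == "a" for c in s[:half])
            and all(c == "b" for c in s[half:]))
```

Both recognizers run in O(n) time; the stratification the paper reports is that LLM accuracy and inference length degrade as tasks climb from the regular level toward higher levels of the hierarchy.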