Empirical Evidence of Complexity-Induced Limits in Large Language Models on Finite Discrete State-Space Problems with Explicit Validity Constraints

arXiv cs.CL / 4/16/2026


Key Points

  • The paper proposes a controlled benchmarking framework that tests large language/reasoning models on nine discrete, finite state-space problems with complexity parameterization.
  • It uses deterministic validators with explicit validity constraints so only fully valid solutions count, enabling precise measurement of reasoning robustness as difficulty increases.
  • Across open and proprietary models, results show a phase-transition-like “reasoning collapse,” where accuracy remains high at low complexity but drops sharply past task-specific complexity thresholds.
  • The degradation is typically accompanied by inconsistent reasoning traces, constraint violations, loss of state tracking, and overconfident incorrect outputs, and longer reasoning chains do not reliably improve correctness.
  • The authors argue that these findings expose limitations of static aggregate benchmarks and motivate evaluation methods that explicitly test reasoning under progressively increasing complexity.
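To make the "deterministic validator" idea concrete, here is an illustrative sketch (not the paper's actual code) of an all-or-nothing checker for one of the nine tasks, Graph Coloring: a proposed solution counts as correct only if every constraint holds.

```python
# Hypothetical deterministic validator for Graph Coloring, in the
# all-or-nothing style the paper describes: partial credit is never
# awarded -- any single constraint violation rejects the solution.

def validate_coloring(edges, num_colors, assignment):
    """Return True only if `assignment` is a fully valid coloring.

    edges:       list of (u, v) vertex pairs
    num_colors:  number of allowed colors, labeled 0..num_colors-1
    assignment:  dict mapping every vertex to a color
    """
    vertices = {v for edge in edges for v in edge}
    # Every vertex must receive a legal color label.
    if any(assignment.get(v) not in range(num_colors) for v in vertices):
        return False
    # No edge may join two vertices of the same color.
    return all(assignment[u] != assignment[v] for u, v in edges)

# A triangle is 3-colorable but not 2-colorable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(validate_coloring(triangle, 3, {0: 0, 1: 1, 2: 2}))  # True
print(validate_coloring(triangle, 2, {0: 0, 1: 1, 2: 0}))  # False
```

Because the check is exact and deterministic, accuracy curves measured with it reflect genuine constraint satisfaction rather than a grader's leniency.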

Abstract

Large Language Models (LLMs) are increasingly described as possessing strong reasoning capabilities, supported by high performance on mathematical, logical, and planning benchmarks. However, most existing evaluations rely on aggregate accuracy over fixed datasets, obscuring how reasoning behavior evolves as task complexity increases. In this work, we introduce a controlled benchmarking framework to systematically evaluate the robustness of reasoning in Large Reasoning Models (LRMs) under progressively increasing problem complexity. We construct a suite of nine classical reasoning tasks: Boolean Satisfiability, Cryptarithmetic, Graph Coloring, River Crossing, Tower of Hanoi, Water Jug, Checker Jumping, Sudoku, and Rubik's Cube, each parameterized to precisely control complexity while preserving the underlying semantics. Using deterministic validators, we evaluate multiple open and proprietary LRMs across low-, intermediate-, and high-complexity regimes, ensuring that only fully valid solutions are accepted. Our results reveal a consistent phase-transition-like behavior: models achieve high accuracy at low complexity but degrade sharply beyond task-specific complexity thresholds. We formalize this phenomenon as reasoning collapse. Across tasks, we observe substantial accuracy declines, often exceeding 50%, accompanied by inconsistent reasoning traces, constraint violations, loss of state tracking, and confidently incorrect outputs. Increased reasoning length does not reliably improve correctness, and gains in one problem family do not generalize to others. These findings highlight the need for evaluation methodologies that move beyond static benchmarks and explicitly measure reasoning robustness under controlled complexity.
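The abstract's notions of complexity parameterization and state tracking can be sketched with another of the nine tasks, Tower of Hanoi. In this hypothetical setup (assumed for illustration, not taken from the paper), the instance is scaled by disk count `n`, and a deterministic checker replays a proposed move sequence step by step, rejecting any illegal move.

```python
# Sketch of complexity parameterization: difficulty scales with n
# (the optimal solution has 2**n - 1 moves), and the validator replays
# the model's move list, tracking state exactly.

def hanoi_valid(n, moves):
    """Replay `moves` (list of (src, dst) peg indices 0-2) from the
    start state (all n disks on peg 0); return True only if every move
    is legal and all disks end on peg 2."""
    pegs = [list(range(n, 0, -1)), [], []]  # larger number = larger disk
    for src, dst in moves:
        if not pegs[src]:
            return False                     # no disk to move
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                     # larger disk onto smaller
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))

def solve(n, src=0, aux=1, dst=2):
    """Optimal (2**n - 1)-move solution, used to sanity-check the validator."""
    if n == 0:
        return []
    return solve(n - 1, src, dst, aux) + [(src, dst)] + solve(n - 1, aux, src, dst)

print(hanoi_valid(3, solve(3)))          # True: optimal solution passes
print(hanoi_valid(3, [(0, 2), (0, 2)]))  # False: disk 2 placed on disk 1
```

Sweeping `n` upward while scoring with such a replay checker is one way to produce the accuracy-versus-complexity curves on which the reported collapse thresholds would appear.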