Counting as a minimal probe of language model reliability

arXiv cs.CL / May 5, 2026


Key Points

  • The paper addresses whether strong benchmark performance in language models reflects true logical competence, reliable repeated procedure execution, or pattern-matching that only imitates rule execution.
  • It introduces a new evaluation assay, “Stable Counting Capacity,” in which a model counts progressively longer runs of a repeated symbol until it fails, while minimizing knowledge, semantics, ambiguity, and lexical/tokenization confounds (a minimal sketch follows this list).
  • Results across more than 100 model variants show that stable counting capacity stays far below the models’ advertised context limits.
  • The observed behavior is consistent neither with open-ended logical reasoning nor with stable application of a learned rule, but with reliance on a finite set of internal “count-like” states, analogous to counting on fingers.
  • Once that internal resource is exhausted, the appearance of rule-following degrades and exact counting collapses into guessing, even with additional test-time compute, implying that fluent output does not guarantee reliable rule following.
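
The assay is simple enough to sketch. Below is a minimal, hypothetical illustration of a single trial in Python, assuming a `query_model` callable that wraps whatever completion API is under test; the prompt wording, symbol choice, space separation, and answer parsing are our assumptions, not the paper's protocol.

```python
import re

def count_trial(query_model, n: int, symbol: str = "*") -> bool:
    """One assay trial: ask the model to count n repeated symbols.

    Returns True only if the first integer in the reply equals n.
    """
    # Space-separating the symbols gestures at the tokenization
    # confounds the paper controls for; a real harness would need a
    # more careful mitigation.
    run = " ".join([symbol] * n)
    prompt = (
        f"How many '{symbol}' characters appear on the line below? "
        f"Answer with a single integer.\n{run}"
    )
    reply = query_model(prompt)
    match = re.search(r"\d+", reply)  # take the first integer in the reply
    return match is not None and int(match.group()) == n
```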

Abstract

Large language models perform strongly on benchmarks in mathematical reasoning, coding and document analysis, suggesting a broad ability to follow instructions. However, it remains unclear whether such success reflects general logical competence, repeated application of learned procedures, or pattern matching that mimics rule execution. We investigate this question by introducing Stable Counting Capacity, an assay in which models count repeated symbols until failure. The assay removes knowledge dependencies, semantics and ambiguity from evaluation, avoids lexical and tokenization confounds, and provides a direct measure of procedural reliability beyond standard knowledge-based benchmarks. Here we show, across more than 100 model variants, that stable counting capacity remains far below advertised context limits. Model behavior is consistent neither with open-ended logic nor with stable application of a learned rule, but instead with use of a finite set of count-like internal states, analogous to counting on fingers. Once this resource is exhausted, the appearance of rule following disappears and exact execution collapses into guessing, even with additional test-time compute. These findings show that fluent performance in current language models does not guarantee general, reliable rule following.
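
Given a single-trial check like the sketch above, the headline quantity, namely the largest run length a model still counts exactly, can be estimated with an ordinary doubling-then-bisection search. The function below reuses the hypothetical `count_trial` from the earlier sketch; the paper's actual measurement procedure, trial counts, and reliability criterion may well differ.

```python
def stable_counting_capacity(query_model, trials: int = 5,
                             n_max: int = 1 << 16) -> int:
    """Estimate the largest n counted correctly on all `trials` attempts.

    Illustrative harness only: doubles n to bracket the first failure,
    then bisects the success/failure boundary. Assumes the model can at
    least count a single symbol.
    """
    def reliable(n: int) -> bool:
        return all(count_trial(query_model, n) for _ in range(trials))

    lo, hi = 1, 2
    while hi <= n_max and reliable(hi):  # double until the first failure
        lo, hi = hi, hi * 2
    if hi > n_max:
        return lo                        # no failure observed below n_max
    while hi - lo > 1:                   # bisect between last success and first failure
        mid = (lo + hi) // 2
        if reliable(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Run per model variant, a number like this is what the paper sets against advertised context limits; its central finding is that the measured capacity falls far short of them.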