The Right Answer, the Wrong Direction: Why Transformers Fail at Counting and How to Fix It

arXiv cs.LG / 5/6/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Large language models frequently fail at basic counting even when the items to count are explicitly included in the prompt.
  • Experiments across Pythia, Qwen3, and Mistral (0.4B–14B parameters) suggest the issue is not missing count representations, but a readout mismatch between internal count directions and the output digit tokens.
  • Linear probes recover the correct count from intermediate layers with near-perfect accuracy (R^2 > 0.99), yet the learned count directions are nearly orthogonal to the output-head rows for digit tokens (|cos| ≤ 0.032); see the sketch after this list for what that measurement looks like.
  • Targeted fixes localize the bottleneck to the output pathway: updating only the digit rows of the output head sharply improves constrained next-token digit prediction, while a small LoRA update to the attention Q/V weights improves true greedy autoregressive generation and raises the correct digit from vocabulary rank 55,980 to rank 1.
  • The authors conclude counting failure is a geometric readout bottleneck that generalizes across character counting, addition, and list-length tasks, but does not appear in broader multi-step reasoning benchmarks like MMLU, GSM8K, and DROP.
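The probe-versus-readout diagnostic is straightforward to picture in code. The sketch below is illustrative, not the authors' code: `hidden_states` (n_examples × d_model), `counts`, `lm_head_weight`, and `digit_token_ids` are assumed to come from a forward pass over counting prompts at some intermediate layer. It fits a ridge probe to recover the count and then measures how well the probe's count direction aligns with the output-head rows for digit tokens.

```python
# Illustrative sketch: linear count probe + alignment with the output head.
import numpy as np
from sklearn.linear_model import Ridge

def probe_and_alignment(hidden_states, counts, lm_head_weight, digit_token_ids):
    # Linear probe: predict the scalar count from the hidden state.
    probe = Ridge(alpha=1.0).fit(hidden_states, counts)
    r2 = probe.score(hidden_states, counts)           # paper reports R^2 > 0.99

    # Cosine similarity between the probe's count direction and each digit
    # row of the output head (paper reports |cos| <= 0.032).
    w = probe.coef_ / np.linalg.norm(probe.coef_)
    digit_rows = lm_head_weight[digit_token_ids]       # shape: (10, d_model)
    digit_rows = digit_rows / np.linalg.norm(digit_rows, axis=1, keepdims=True)
    cosines = digit_rows @ w
    return r2, np.abs(cosines).max()
```

A high R^2 together with near-zero cosines is exactly the mismatch the paper describes: the count is linearly recoverable, but the digit logits barely project onto the direction that carries it.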

Abstract

Large language models often fail at simple counting tasks, even when the items to count are explicitly present in the prompt. We investigate whether this failure occurs because transformers do not represent counts internally, or because they cannot convert those representations into the correct output tokens. Across three model families, Pythia, Qwen3, and Mistral, ranging from 0.4B to 14B parameters, we find strong evidence for the second explanation. Linear probes recover the correct count from intermediate layers with near-perfect accuracy (R^2 > 0.99), showing that the information is present. However, the internal directions that encode counts are nearly orthogonal to the output-head rows for digit tokens (|cos| ≤ 0.032). In other words, the model stores the count in a form that the digit logits do not naturally read out. We localize this failure with two interventions. Updating only the digit rows of the output head (36,864 parameters) substantially improves constrained next-token digit prediction (from 60.7% to 100.0% across four tasks), but it does not fix autoregressive generation. By contrast, a small LoRA intervention on the attention Q/V weights (7.67M parameters) improves upstream routing and achieves 83.1% ± 7.2% in true greedy autoregressive generation. Logit-lens measurements confirm the mechanism: the correct digit's vocabulary rank drops from 55,980 to 1, a 50,000x improvement. Additional norm, logit-lens, and cross-task analyses show that the bottleneck generalizes across character counting, addition, and list length, while remaining absent from broader multi-step reasoning benchmarks, including MMLU, GSM8K, and DROP. These results identify counting failure as a geometric readout bottleneck rather than a failure of internal representation: the model knows the count, but the output pathway is geometrically misaligned with the tokens needed to express it.
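To make the Q/V intervention concrete, here is a minimal sketch of how such a restricted LoRA update could be set up with the PEFT library. It is not the paper's exact training recipe: the checkpoint, rank, and other hyperparameters below are placeholders, and the module names "q_proj"/"v_proj" match Mistral/Qwen-style checkpoints (Pythia uses different module names).

```python
# Illustrative sketch: LoRA restricted to attention Q/V projections, the style
# of small intervention the paper reports for repairing greedy generation.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # placeholder checkpoint

lora_cfg = LoraConfig(
    r=8,                                   # placeholder rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention Q and V projections only
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the Q/V adapters are trainable

# ...fine-tune on counting prompts with a standard causal-LM loss, then
# evaluate with true greedy decoding, e.g. model.generate(..., do_sample=False).
```

The design point is the contrast the paper draws: editing the output head's digit rows fixes constrained next-token prediction but not free generation, whereas the Q/V update changes how the count is routed to the final position, which is what greedy decoding actually needs.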