AI Navigate

Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates 'performative chain-of-thought' (CoT) in reasoning models, where a model becomes internally confident in its final answer yet continues generating reasoning tokens that do not reveal that belief.
  • By comparing activation probing, early forced answering, and CoT monitoring across two large models, the study finds that on easier tasks the final answer is decodable from activations far earlier in the CoT than a monitor can detect.
  • More complex reasoning tasks show genuine reasoning with notable belief shifts and inflection points like backtracking and 'aha' moments, indicating authentic uncertainty rather than performative behavior.
  • The research demonstrates that probe-guided early exit can reduce token generation by up to 80% on simpler tasks without accuracy loss and offers a way to detect performative reasoning, enabling more efficient and adaptive computation.
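The probe-guided early-exit mechanism is only summarized above, and the paper's actual stopping rule is not specified in this digest. The sketch below is a hypothetical illustration: it assumes the probe yields, at each generated CoT token, a probability distribution over the answer options, and exits once the top option stays above a confidence threshold for a few consecutive tokens (the function name and parameters are invented for illustration).

```python
import numpy as np

def probe_guided_early_exit(probe_probs, threshold=0.9, patience=3):
    """Hypothetical early-exit rule: stop generating CoT tokens once the
    probe's top answer stays above `threshold` for `patience` consecutive
    tokens. `probe_probs` is a (num_tokens, num_options) array of per-token
    probe distributions over answer options. Returns (exit_step,
    answer_index); falls back to the final step if confidence never
    stabilizes."""
    streak = 0
    for t, p in enumerate(probe_probs):
        if p.max() >= threshold:
            streak += 1
            if streak >= patience:
                return t, int(np.argmax(p))
        else:
            streak = 0
    return len(probe_probs) - 1, int(np.argmax(probe_probs[-1]))
```

On a run where the probe locks onto one option early, a rule like this stops after a few confident tokens instead of generating the full chain, which is how an up-to-80% token reduction on easy questions becomes possible without changing the answer.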


arXiv:2603.05488 (cs)
[Submitted on 5 Mar 2026 (v1), last revised 9 Mar 2026 (this version, v2)]

Title: Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought

Authors: Siddharth Boppana and 7 other authors
Abstract: We provide evidence of performative chain-of-thought (CoT) in reasoning models, where a model becomes strongly confident in its final answer but continues generating tokens without revealing its internal belief. Our analysis compares activation probing, early forced answering, and a CoT monitor across two large models (DeepSeek-R1 671B & GPT-OSS 120B) and finds task-difficulty-specific differences: the model's final answer is decodable from activations far earlier in the CoT than a monitor can detect, especially for easy recall-based MMLU questions. We contrast this with genuine reasoning in difficult multi-hop GPQA-Diamond questions. Even so, inflection points (e.g., backtracking, 'aha' moments) occur almost exclusively in responses where probes show large belief shifts, suggesting these behaviors track genuine uncertainty rather than learned "reasoning theater." Finally, probe-guided early exit reduces tokens by up to 80% on MMLU and 30% on GPQA-Diamond with similar accuracy, positioning attention probing as an efficient tool for detecting performative reasoning and enabling adaptive computation.
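The abstract's claim that the final answer is "decodable from activations" rests on training probes against hidden states partway through the CoT. The paper's specific attention-probing setup is not detailed in this excerpt; the sketch below shows the generic idea with a logistic-regression probe on synthetic activation vectors (the data, dimensions, and training setup are invented stand-ins, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mid-CoT hidden activations: vectors whose class
# (the model's eventual answer, A vs. B) is linearly encoded. Invented
# data, not activations from the paper's models.
d, n = 16, 200
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))          # "activations" at some CoT position
y = (X @ w_true > 0).astype(float)   # "final answer" each run converges to

# Train a logistic-regression probe by gradient descent to decode the
# eventual answer from the activation alone.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # probe's predicted P(answer = B)
    w -= 0.1 * (X.T @ (p - y)) / n       # gradient step on the log loss

train_acc = float(np.mean(((X @ w) > 0) == (y > 0.5)))
```

High probe accuracy at an early token position is what licenses the "answer is already decided" reading; the paper's contribution is comparing where that accuracy emerges against when a CoT monitor first detects the answer.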
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2603.05488 [cs.CL]
  (or arXiv:2603.05488v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.05488

Submission history

From: Siddharth Boppana [view email]
[v1] Thu, 5 Mar 2026 18:55:16 UTC (1,591 KB)
[v2] Mon, 9 Mar 2026 23:35:16 UTC (1,591 KB)