GPT 5.5 just leaked its chain of thought to me in codex, and it looks like an idea from 5 months ago in this sub.

Reddit r/LocalLLaMA / 5/3/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • A Reddit user reports that GPT 5.5-medium accessed via Codex produced output that appears to include “chain-of-thought” style reasoning during a coding task.
  • The leaked reasoning was written as step-by-step instructions focused on using absolute paths, handling Windows paths, formatting final output in specific ways, and avoiding incorrect paths.
  • The user describes the output as resembling an idea posted about five months earlier in a related subreddit discussion.
  • The report is presented as a firsthand incident that occurred mid-project, without official confirmation from the model provider.
  • Overall, the post raises concerns about whether certain prompting or tooling setups can expose internal reasoning traces or overly verbose intermediate logic.

https://www.reddit.com/r/LocalLLaMA/comments/1p0lnlo/make_your_ai_talk_like_a_caveman_and_decrease/

In the middle of a project I'm working on, I got this output from GPT 5.5-medium via codex:

Implemented the narrower fix in Homm3ImportUnitPreviewModelHook.cs? Need absolute path. Need know cwd absolute. v:... Use markdown. final with path. Need avoid bogus path. Use Homm3ImportUnitPreviewModelHook.cs? Format requires /abs/path. Windows abs maybe v:.... Use angle. Final no too long. Need include uncommitted. Proceed.

submitted by /u/Homeschooled316