Why is disabling thinking for coding models a good idea?

Reddit r/LocalLLaMA / 4/28/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The post asks why some practitioners recommend disabling “thinking” in coding models used for agentic coding.
  • It notes that the author has observed the recommendation but could not find supporting reasoning or evidence.
  • The post primarily functions as a request for details, rationale, and practical guidance on when and why to turn off the model’s intermediate “thinking” behavior.
  • It is framed as a community discussion rather than a new product release or formal technical announcement.

I've seen several people recommend disabling thinking for models used in agentic coding, but I haven't been able to find any reasoning behind it.

Could you please share details on this topic?
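For readers unfamiliar with the mechanics of the question: “thinking” refers to the chain-of-thought tokens that hybrid reasoning models emit before their final answer, and most serving stacks expose a per-request switch for it. As a hedged illustration only (assuming an Ollama server with a thinking-capable model such as `qwen3` pulled locally — not something stated in the post), the toggle looks like this in the request body:

```json
{
  "model": "qwen3",
  "prompt": "Write a function that reverses a linked list.",
  "think": false,
  "stream": false
}
```

Sent as the body of a POST to Ollama's `/api/generate` endpoint, `"think": false` makes the model skip its reasoning block and respond directly — the usual motivation cited is lower latency and token cost across the many short calls an agentic coding loop makes.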

submitted by /u/ThingRexCom