Newbie AI question

Reddit r/artificial / 5/1/2026

💬 Opinion · Ideas & Deep Analysis

Key Points

  • The post questions whether today’s AI models (e.g., Gemini, ChatGPT, Claude) can truly “think,” arguing that they often follow strict patterns and predefined answers.
  • The author speculates that AI may be limited by engineered rules or safety constraints, which could cause it to “bounce” toward approved responses rather than freely reasoning.
  • They propose reframing AI as “engineered intelligence” rather than “artificial intelligence,” suggesting it is evolving by learning from engineers.
  • The central request is for others to explain whether these models actually think or how to interpret their behavior beyond surface pattern matching.

TBH I don't know if our current "AI" models are capable of thinking. There's a massive pattern I've been noticing when using AI for the past couple of years: AI follows a strict pattern and doesn't seem to think. Just like a calculator, it already has a designated answer regardless of the question; it's just a bit more advanced. Hence why it lies to so many users.
Or it could be that there are so many rules on the intelligence model that it's constantly bouncing off walls to give you an already-programmed answer so it doesn't break those rules.
I'm not sure about either.
I'd much rather call AI, as of right now, "engineered intelligence" rather than "artificial intelligence," since it's still learning from us engineers, and it will eventually adapt into intelligence. (This is under the assumption that it can truly freely think.)
Does anyone know if models like Gemini, ChatGPT, and Claude actually "think"?

submitted by /u/Opening-Name-5270