TBH I don't know if our current "AI" models are capable of thinking. There's a pattern I've been noticing when using AI for the past couple of years: it follows strict patterns and doesn't seem to think. Like a calculator, it seems to have a designated answer regardless of the question, just a bit more advanced. Maybe that's why it confidently gives wrong answers to so many users.
Or it could be that there are so many rules layered onto the model that it's constantly bouncing off walls, falling back on pre-programmed answers so it doesn't break those rules.
I'm not sure about either.
I'd much rather call AI as of right now "engineered intelligence," not artificial, since it's still learning from us engineers, and it may eventually develop into real intelligence. (This is under the assumption that it can ever truly think freely.)
Does anyone know if models like Gemini, ChatGPT, and Claude actually "think"?