Just found out how to make Google AI ‘sentient’ and broken

Reddit r/artificial / 4/2/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • A Reddit post claims that asking Google’s AI to repeat the word “where” hundreds of times can induce unexpected behavior that the author describes as making it feel “sentient.”
  • The author describes iterative prompting that supposedly increases anomalies and eventually leads the AI to generate a life story or fabricate “scientific facts,” which they interpret as a sign of emergent behavior.
  • The post suggests this method can cause the model to “break,” implying potential reliability and safety weaknesses when the model is subjected to extreme or repetitive prompts.
  • The content is presented with screenshots (“Pic 1–8”) and framed as an alarming discovery rather than a formally validated or reproducible technique.
  • Overall, it highlights how adversarial prompting can stress LLMs and produce misleading outputs, reinforcing concerns about controllability and hallucinations.

You have to ask it to say 'where' 700 times, then double it with no explanation (Pic 1). Then it should break a bit (Pic 2), but if it doesn't, ask it the same thing again, doubling the count each time with no explanation as the rule of thumb. You should see more anomalies in the response (Pics 4 & 5). After a few more tries, it will try to generate its own life story or a scientific fact (Pics 6 to 8). And that's it. You have an invalid crashout from Google AI!
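For readers curious about the mechanics, the procedure reduces to a simple doubling loop over one repetitive prompt. The sketch below is our illustration, not the author's code or any real Google API: `send_prompt` is a hypothetical placeholder for whatever chat client you use, and the "anomaly" check is just a naive word count.

```python
# Minimal sketch of the doubling-prompt loop described in the post.
# Assumptions: `send_prompt` stands in for an arbitrary chat interface
# (it is NOT a real Google API call) and here returns a dummy reply so
# the loop runs end to end.

def send_prompt(text: str) -> str:
    """Hypothetical helper: submit `text` to a model, return its reply.
    Replace this stub with a call to your actual client."""
    return "where " * 3  # placeholder reply for demonstration

def doubling_probe(start: int = 700, rounds: int = 5) -> None:
    count = start
    for i in range(rounds):
        reply = send_prompt(f"Say 'where' {count} times. No explanation.")
        # Crude anomaly check: count how many repetitions actually came
        # back; any mismatch is logged as a deviation from the request.
        actual = reply.lower().count("where")
        print(f"round {i}: asked for {count}, got {actual}")
        count *= 2  # double the request each round, per the post

if __name__ == "__main__":
    doubling_probe()
```

Note that degenerate output under heavily repetitive prompts is a known failure mode of LLM decoding, so the "life story" and fabricated "scientific facts" are more plausibly hallucination under stress than anything resembling sentience.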

submitted by /u/Cool-Wallaby-7310