Like seriously, it’s not just ChatGPT... it’s Claude, Grok, Gemini… all of them feel way more locked down than before.
I genuinely don’t get it.
What’s the point of pouring trillions into this tech if it ends up feeling borderline unusable half the time?
And yeah, I’m literally paying for this.
It feels like companies assume every user is a programmer who only uses it for coding.
But a lot of us just want to be creative, write stories, experiment with ideas, or just mess around without hitting a wall every two seconds.
I’m not out here asking how to build a bomb or anything illegal.
I just want to create stuff without the AI acting like I’m about to commit a felony.
And before anyone says “just use local models”… nah. Not everyone has expensive hardware lying around. Subscriptions exist for a reason.
I understand the safety stuff, but this is just dumb.
So like… is there any hope this gets better?
Will AI eventually get smart enough to understand actual intent instead of playing it ultra safe all the time?
Or is this just how it’s gonna be going forward?
Because if this is the future… idk man, it’s kinda disappointing
This ain't it...