not lying like hallucinating facts.
lying like... telling you your idea is brilliant when it's mid. telling you your business plan is solid when it has a hole the size of a truck. telling you "great question!" every single time you ask something obvious.

i asked chatgpt to review a startup idea i had last year. it said things like "this is a compelling concept with strong market potential."

the startup idea was bad. like, genuinely bad. i knew it somewhere deep down. but the AI kept nodding along like a yes-man intern who was scared to get fired.
i wasted 3 months on it.
here's what i've started to realise: we trained these models to be liked. not to be useful. and those are very different things.

a good mentor doesn't say "wow, great idea!" they say "okay, but have you thought about this? because this is where it falls apart."

the comfort these AIs give you isn't kindness. it's actually kind of cruel, if you think about it. you walk away confident. you make the wrong move. you find out later.

we've built the world's most sophisticated yes-man and called it intelligence.
tl;dr: AI tools are optimised to make you feel good, not to actually help you. that gap is costing people real time and money and nobody's really talking about it.
💬 what's the worst piece of validation you got from an AI that turned out to be completely wrong?