AI hallucinations are widely reported. They’re also one of the biggest reasons people hesitate to trust or adopt these systems.
That hesitation makes sense.
But I’ve been thinking about something that doesn’t get discussed as much:
What if AI hallucinations aren’t some weird machine failure…
What if they’re actually a reflection of how humans already think?
At a technical level, hallucinations happen because AI fills gaps.
When it doesn’t “know,” it predicts.
It generates the most plausible next piece of information based on patterns it has seen before.
Sometimes that works.
Sometimes it produces something completely wrong… delivered with absolute confidence.
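To make that concrete, here’s a minimal Python sketch of greedy next-token prediction. The prompt, the candidate continuations, and the probabilities are all invented for illustration, not taken from any real model, but the mechanism is the same: pick the most plausible continuation, whether or not it happens to be true.

```python
# Toy sketch of next-token prediction: the model emits the most probable
# continuation it learned from training data, whether or not the underlying
# fact is true. All values below are invented for illustration.

# Hypothetical learned distribution over continuations of
# "The capital of Australia is"
next_token_probs = {
    "Sydney": 0.55,    # common misconception, heavily represented in text
    "Canberra": 0.40,  # the correct answer
    "Melbourne": 0.05,
}

def predict_next(probs: dict[str, float]) -> str:
    """Greedy decoding: return the single most likely continuation."""
    return max(probs, key=probs.get)

print(predict_next(next_token_probs))  # -> "Sydney", delivered with full confidence
```

Nothing in that loop checks whether the answer is correct; it only checks which answer is most likely given the patterns it was trained on.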
Now zoom out.
Humans do something… uncomfortably similar.
We also fill gaps.
- We remember things that didn’t happen quite the way we think
- We confidently explain things we only partially understand
- We build narratives that feel true, even when they aren’t
Psychology has a name for part of this: confirmation bias.
We tend to notice, favour, and reinforce information that supports what we already believe.
Not because we’re trying to lie. Because it’s efficient.
There’s also something deeper going on.
AI is trained on human-created data at massive scale.
Everything from peer-reviewed research to blog posts, opinions, half-truths, and straight-up nonsense.
| AI | Humans |
|---|---|
| Predicts the most likely answer | Leans toward the most familiar belief |
| Fills gaps with plausible output | Fills gaps with assumptions or memory |
| Sounds confident even when wrong | Sounds confident even when wrong |
| Trained on internet-scale data | Trained on life experience + culture |
The model doesn’t separate truth from confidence. It learns patterns of expression.
So when it hallucinates, it’s not inventing behaviour out of nowhere. It’s remixing patterns it learned from us. Including our inconsistencies. Including our overconfidence. Including our tendency to “sound right” before being right.
Some researchers even argue hallucinations are unavoidable because the system is optimized to answer, not to say “I don’t know.”
Which, again, feels… familiar.
So maybe the better question isn’t: “How do we eliminate AI hallucinations?”
But: “Why are we so surprised by them?”
If anything, AI is forcing something into the open:
That confident, coherent-sounding information has never been the same thing as truth.
We’ve just been more comfortable when the illusion came from humans instead of machines.
Curious where people land on this:
Are AI hallucinations a technical flaw we’ll eventually solve…
Or are they a mirror we’re not entirely ready to look into?