Something I’ve been noticing with AI systems:
We’ve dramatically improved:
- tool use
- reasoning
- raw model capability
But memory still feels broken.
Even with:
- vector databases
- long context windows
- session stitching
Models still:
- repeat instructions
- lose context
- behave inconsistently
Why?
Because memory today is mostly:
→ storage + retrieval
Not:
→ understanding what matters
Humans don’t remember everything equally.
We remember what influences decisions.
AI doesn’t (yet).
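To make the gap concrete, here's a minimal sketch (toy vectors, made-up `salience` scores — all names are hypothetical) contrasting plain similarity retrieval with retrieval weighted by how much a memory has actually influenced past decisions:

```python
from math import sqrt

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory store: each entry carries an embedding plus a hypothetical
# "salience" score tracking how often it changed a downstream decision.
memories = [
    {"text": "user prefers concise answers", "vec": [1.0, 0.1], "salience": 0.9},
    {"text": "user mentioned the weather",   "vec": [0.9, 0.2], "salience": 0.1},
]

def retrieve(query, memories, use_salience):
    # Plain mode: pick the most similar memory (storage + retrieval).
    # Salience mode: similarity weighted by decision influence.
    def score(m):
        s = cosine(query, m["vec"])
        return s * m["salience"] if use_salience else s
    return max(memories, key=score)["text"]

query = [0.9, 0.2]  # surface-similar to the weather memory

print(retrieve(query, memories, use_salience=False))  # → user mentioned the weather
print(retrieve(query, memories, use_salience=True))   # → user prefers concise answers
```

Same store, same query: pure similarity surfaces the surface-level match, while the salience weight surfaces the memory that actually matters to behavior. The hard open problem, of course, is learning those salience scores rather than hand-assigning them.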
Curious how others are thinking about this:
Is memory actually “solved,” or are we missing a layer?