Space now with memory

Dev.to / 4/20/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage · Models & Research

Key Points

  • The article describes a project called “Space,” an AI chat experience designed to remember what the user previously told it, without requiring sign-up or login.
  • Space assigns a user ID the first time you talk to it, sends that ID with subsequent messages, and uses it to retrieve prior conversation context stored in a database.
  • The system stores key statements as “memories” and can reference past conversations in later sessions, also loosely tracking the user’s mood.
  • To make memory work despite LLMs being stateless by default, the author implemented a “fake memory” approach using Go, PostgreSQL, and pattern-matching logic.

I Built an AI That Remembers You (No Login Required)

Every AI chat I used felt the same.

You talk.
It replies.
You refresh…

And suddenly it’s:

“Hi, how can I help you today?”

Like bro… we were just talking.

So I built Space.

What Space Actually Is

It’s not “just another chatbot”.

It:

  • Remembers things you tell it
  • Tracks your mood (kind of)
  • Brings up past conversations
  • Talks to you like it knows you

And here’s the twist:

👉 No sign-up. No login.

Wait… So How Does It Remember You?

When you first talk to Space, it quietly creates a user ID for you.

That ID gets sent with every message after that.

That’s it.

No accounts. No passwords.

Behind the scenes:

  • Your conversations are stored in a database
  • Important things you say get saved as “memories”
  • Next time you show up with the same ID… it picks up where you left off

So it doesn’t remember devices.

It remembers you (through your ID).
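The ID flow above can be sketched in a few lines of Go. The actual ID format Space uses isn't published, so assume a random hex string here; the shape of the logic is the point: no ID on the message means first contact, so mint one and hand it back.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newUserID mints a random anonymous ID the first time someone
// talks to Space. No account, no password — the client just keeps
// this ID and sends it with every later message.
func newUserID() string {
	b := make([]byte, 16)
	rand.Read(b) // crypto/rand: fills b or errors; fine for a sketch
	return hex.EncodeToString(b)
}

// resolveUser reuses the ID from the incoming message if the client
// already has one, or creates a fresh one on first contact.
func resolveUser(incomingID string) (id string, isNew bool) {
	if incomingID != "" {
		return incomingID, false
	}
	return newUserID(), true
}

func main() {
	id, fresh := resolveUser("") // first visit: no ID yet
	fmt.Println(len(id), fresh)

	id2, fresh2 := resolveUser(id) // later messages carry the ID
	fmt.Println(id == id2, fresh2)
}
```

From there, every lookup — conversations, memories, mood — keys off that one ID in the database.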

The Fun Part (aka the problem)

LLMs are stateless. Out of the box, they don’t remember anything between requests.

So I had to fake memory.

I used:

  • Go
  • PostgreSQL
  • Some very questionable pattern matching 😅
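The "very questionable pattern matching" probably looks something like this. These exact regexes are my assumption, not the real ones from Space, but they show the idea: scan each message for memory-worthy phrases and save whatever matches.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Hypothetical patterns for statements worth remembering.
// The real ones in Space aren't published; these are placeholders.
var memoryPatterns = []*regexp.Regexp{
	regexp.MustCompile(`(?i)\bi feel ([a-z ]+)`),
	regexp.MustCompile(`(?i)\bi like ([a-z ]+)`),
	regexp.MustCompile(`(?i)\bmy name is ([a-z]+)`),
}

// extractMemories returns the fragments worth saving to the database.
func extractMemories(message string) []string {
	var found []string
	for _, p := range memoryPatterns {
		for _, m := range p.FindAllStringSubmatch(message, -1) {
			found = append(found, strings.TrimSpace(m[0]))
		}
	}
	return found
}

func main() {
	fmt.Println(extractMemories("I feel overwhelmed, but I like coding"))
}
```

Each hit gets written to a `memories` table in PostgreSQL alongside the user ID, so it can be pulled back out on the next visit.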

How I Made It “Feel Human”

If you say:

“I feel overwhelmed”
“I like coding”

I store that.

Then before every reply, I tell the AI:

“This person has been stressed. Don’t ignore that.”

And suddenly it goes from:

“How can I help?”

to:

“Hey… how have you been feeling since last time?”

That shift is crazy.

The Weird Realization

Memory matters more than the model.

A decent model + context
beats a powerful model with none.

Every time.

One Thing I Was Careful About

This is not therapy.

It doesn’t give:

  • medical advice
  • serious life guidance

It just shows up and talks like a thoughtful human would.

Final Thought

We’re all building AI that talks.

I think the real game is building AI that remembers.

If you were building this…

Would you store everything?

Or only what actually matters?