Built cross-model persistent memory - told GPT-5 Nano I live in Bahrain, asked Sonnet 4.6 where I live, it knew instantly

Reddit r/artificial / 4/26/2026


Key Points

  • The post claims the author built a cross-model persistent memory system that lets different AI models remember information across separate conversations.
  • It reportedly works by embedding every message and storing it, then automatically injecting the saved memory into the context when starting a new chat with any supported model.
  • The author states that GPT-5 Nano and Claude Sonnet 4.6 share the same memory layer, demonstrated by Sonnet 4.6 correctly answering that the user lives in Bahrain after the location had been told only to GPT-5 Nano.
  • The system is presented as functioning across multiple model providers, including GPT, Claude, Gemini, Grok, and DeepSeek, rather than being limited to a single chat application.

https://reddit.com/link/1svixo0/video/hgwrueuekdxg1/player

No tricks, no copy-paste. Two completely different AI models, separate conversations - one remembers what the other was told.

The way it works: every message gets embedded and stored. When you open a new chat with any model, your memory is injected into context automatically. GPT, Claude, Gemini, Grok and DeepSeek - they all share the same memory layer.
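The post doesn't share any implementation details beyond "embed, store, inject," so here is a minimal self-contained sketch of that pipeline. The hash-based `embed` function is a toy stand-in for a real embedding model, and `MemoryStore`/`inject_memory` are hypothetical names; the point is only that the memory layer sits outside any one provider, so the same recalled facts can be prepended to a prompt for GPT, Claude, or any other model.

```python
import hashlib
import math

def embed(text):
    # Toy bag-of-words hash embedding (stand-in for a real embedding model).
    vec = [0.0] * 64
    for word in text.lower().split():
        word = word.strip(".,!?")
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % 64
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class MemoryStore:
    """Model-agnostic memory: every message is embedded and stored."""
    def __init__(self):
        self.items = []  # list of (embedding, original text)

    def add(self, text):
        self.items.append((embed(text), text))

    def recall(self, query, k=3):
        # Rank stored messages by cosine similarity to the query.
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def inject_memory(store, user_message):
    # Build a provider-neutral system prompt from recalled memories;
    # the same string can be sent to any model's chat API.
    memories = store.recall(user_message)
    return "Known facts about the user:\n" + "\n".join(f"- {m}" for m in memories)

# A message sent to model A is stored...
store = MemoryStore()
store.add("I live in Bahrain")
store.add("My favorite editor is Vim")

# ...and recalled when a fresh chat starts with model B.
print(inject_memory(store, "Where do I live?"))
```

In a real system the toy hash embedding would be replaced by an embedding API and the in-memory list by a vector database, but the flow — embed on write, similarity-search on read, prepend to context — is the same regardless of which model answers.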

So when I told GPT-5 Nano "I live in Bahrain" and then opened a fresh Claude Sonnet 4.6 conversation and asked "where do I live?" - it said "Based on your memory, you live in Bahrain 🇧🇭"

Live on asksary.com now

submitted by /u/Beneficial-Cow-7408