Built cross-model persistent memory - told GPT-5 Nano I live in Bahrain, asked Sonnet 4.6 where I live, it knew instantly
Reddit r/artificial / 4/26/2026

No tricks, no copy-paste. Two completely different AI models, separate conversations, and one remembers what the other was told.

The way it works: every message gets embedded and stored. When you open a new chat with any model, your memory is injected into context automatically. GPT, Claude, Gemini, Grok and DeepSeek all share the same memory layer. So when I told GPT-5 Nano "I live in Bahrain" and then opened a fresh Claude Sonnet 4.6 conversation and asked "where do I live?", it said "Based on your memory, you live in Bahrain 🇧🇭".

Video: https://reddit.com/link/1svixo0/video/hgwrueuekdxg1/player
Live on asksary.com now.
📰 News · Developer Stack & Infrastructure · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage
Key Points
- The post claims the author built a cross-model persistent memory system that lets different AI models remember information across separate conversations.
- It reportedly works by embedding every message and storing it, then automatically injecting the saved memory into the context when starting a new chat with any supported model.
- The author states that GPT-5 Nano and Claude Sonnet 4.6 share the same memory layer, demonstrated by Claude Sonnet 4.6 answering that the user lives in Bahrain after the location had been given only to GPT-5 Nano.
- The system is presented as functioning across multiple model providers, including GPT, Claude, Gemini, Grok, and DeepSeek, rather than being limited to a single chat application.
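The embed-store-inject loop described above can be sketched in a few lines. This is a minimal illustration, not the asksary.com implementation: the `MemoryLayer` class and its method names are hypothetical, and a toy bag-of-words similarity stands in for a real embedding model and vector store.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase word counts stand in for a dense vector.
    # A real system would call an embedding model here.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryLayer:
    """One store shared by all models: every message is embedded and saved,
    and new chats with any model read from the same store."""

    def __init__(self) -> None:
        self.store: list[tuple[Counter, str]] = []  # (embedding, text)

    def remember(self, message: str) -> None:
        self.store.append((embed(message), message))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Return the k stored messages most similar to the query.
        q = embed(query)
        ranked = sorted(self.store, key=lambda e: cosine(e[0], q), reverse=True)
        return [text for _, text in ranked[:k]]

    def inject(self, query: str) -> str:
        # Prepend retrieved memories to the prompt sent to whichever model.
        memories = "\n".join(f"- {m}" for m in self.recall(query))
        return f"Relevant user memory:\n{memories}\n\nUser: {query}"

memory = MemoryLayer()
memory.remember("I live in Bahrain")        # told to one model
prompt = memory.inject("Where do I live?")  # fresh chat with another model
```

Because retrieval keys on embedding similarity rather than on which model wrote the message, any model receiving the injected prompt can answer from the shared memory.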
Related Articles

- Black Hat USA (AI Business): Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
- How I tracked which AI bots actually crawl my site (Dev.to)
- Hijacking OpenClaw with Claude (Dev.to)
- How I Replaced WordPress, Shopify, and Mailchimp with Cloudflare Workers (Dev.to)