Why BrainDB? Inspired by Karpathy's LLM wiki idea: give an LLM a persistent external memory it can read and write. BrainDB takes that further by adding structure, retrieval, and a graph on top of the "plain markdown files" baseline.
BrainDB: Karpathy's 'LLM wiki' idea, but as a real DB with typed entities and a graph
Reddit r/LocalLLaMA / 4/20/2026
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage · Models & Research
Key Points
- BrainDB is inspired by Karpathy’s “LLM wiki” idea: giving an LLM persistent external memory it can read and write, but implemented as a structured database with retrieval and graph features.
- Unlike stateless RAG, BrainDB stores typed, persistent entities (e.g., thoughts, facts, sources, rules) with explicit relations like supports/contradicts/derived_from and performs fuzzy+semantic search plus short graph traversal.
- The system returns a ranked graph neighborhood (not a bag of retrieved text chunks) and uses temporal decay so older or stale items fade while frequently accessed ones remain salient.
- Compared with classic graph databases, BrainDB is purpose-built for LLM agents with an HTTP API for tool-calling, semantically meaningful fields (e.g., certainty, importance), automatic provenance, built-in rule injection, and retrieval scoring using Postgres extensions.
- BrainDB is positioned as a more queryable alternative to flat Markdown memories by extracting and linking facts back to sources automatically and only loading full text when an agent explicitly requests it.
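The typed-entity and temporal-decay ideas in the points above can be sketched roughly as follows. All names here (`certainty`, `importance`, the 30-day half-life) are illustrative assumptions, not BrainDB's actual schema or scoring formula.

```python
import time
from dataclasses import dataclass, field

# Hypothetical entity record: typed, with explicit relations such as
# supports / contradicts / derived_from, as the post describes.
@dataclass
class Entity:
    kind: str                  # e.g. "thought", "fact", "source", "rule"
    text: str
    certainty: float           # 0..1, how confident the store is in the fact
    importance: float          # 0..1, salience assigned at write time
    created_at: float          # Unix timestamp
    last_accessed: float       # Unix timestamp, bumped on each retrieval
    relations: dict = field(default_factory=dict)  # relation -> [entity ids]

def decayed_score(e: Entity, now: float, half_life_days: float = 30.0) -> float:
    """Exponential temporal decay: stale items fade, while frequently
    accessed ones keep a recent last_accessed and so stay salient.
    (Illustrative formula, not BrainDB's.)"""
    age_days = (now - e.last_accessed) / 86400.0
    decay = 0.5 ** (age_days / half_life_days)
    return e.importance * e.certainty * decay

now = time.time()
fresh = Entity("fact", "pgvector supports HNSW", 0.9, 0.8, now, now)
stale = Entity("fact", "old benchmark result", 0.9, 0.8,
               now - 90 * 86400, now - 90 * 86400)
assert decayed_score(fresh, now) > decayed_score(stale, now)
```

Bumping `last_accessed` on every read is what makes frequently used memories resist the decay, matching the "frequently accessed ones remain salient" behavior described above.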
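Returning a "ranked graph neighborhood" rather than a bag of chunks might work roughly like this: seed hits come from search, then a short breadth-first traversal over relations pulls in neighbors, with a distance penalty so direct matches rank first. The graph structure and penalty are assumptions for illustration, not BrainDB's actual storage or ranking.

```python
from collections import deque

def graph_neighborhood(graph, seeds, max_hops=2):
    """Collect the k-hop neighborhood of seed hits and rank it.
    `graph` maps entity id -> (base_score, {relation: [neighbor ids]});
    hypothetical structure, not BrainDB's."""
    seen = {}
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if node in seen or node not in graph:
            continue
        score, rels = graph[node]
        # Penalize distance from the seed hits so direct matches rank first.
        seen[node] = score / (1 + hops)
        if hops < max_hops:
            for neighbors in rels.values():
                frontier.extend((n, hops + 1) for n in neighbors)
    return sorted(seen.items(), key=lambda kv: kv[1], reverse=True)

graph = {
    "a": (1.0, {"supports": ["b"]}),
    "b": (0.9, {"derived_from": ["c"]}),
    "c": (0.8, {}),
}
print(graph_neighborhood(graph, ["a"]))  # direct hit "a" ranks first
```

Capping `max_hops` at 2 keeps the traversal "short", as the post puts it, so the agent gets locally connected context instead of the whole graph.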