Memory as Metabolism: A Design for Companion Knowledge Systems
arXiv cs.AI / 4/15/2026
Key Points
- The paper surveys a 2026 wave of personal wiki-style LLM memory architectures (e.g., Karpathy, MemPalace, LLM Wiki v2) that aim to store long-term, user-specific knowledge as interlinked artifacts, alongside prior production memory systems from major labs.
- It frames “companion knowledge systems” as LLM memory that mirrors user operational dimensions (vocabulary, structure, continuity) while explicitly compensating for epistemic failure modes like entrenchment and suppression of contradicting evidence.
- The proposed governance design includes normative obligations, time-structured procedures, and testable conformance invariants to address a specific single-user failure mode: entrenchment under user-coupled drift in LLM wiki-style memory.
- The memory operations—TRIAGE, DECAY, CONTEXTUALIZE, CONSOLIDATE, and AUDIT—are designed to support both "memory gravity" and minority-hypothesis retention, with a key prediction: accumulating contradictory evidence should structurally force updates to the dominant interpretation rather than being quietly suppressed.
- The authors position the safety approach as partial, clearly stating what problems the design does and does not solve, and noting that the sharp failure mode may be missing from existing benchmarks.