I've been away from open source for a while. But the recent explosion of LLMs reignited my curiosity — and one question I couldn't shake:
Why are these models so powerful yet so forgetful?
The answer is in their design. LLMs are trained to be both a reasoner and a knowledge base — fused, frozen at training time. Whatever they knew that day is all they'll ever know. Brilliant reasoning. Zero growth.
When something new happens — a conversation, a decision, a failure — there's nowhere for it to go. The workarounds are painful: retrain the model, stuff the context window, use RAG. None of these are memory: retraining is slow and expensive, context windows are finite and reset between sessions, and RAG retrieves documents without ever learning from experience. They're duct tape.
So I built Stash. Not to fix the LLM — but to fill the gap it leaves behind. A self-hosted layer that captures what your agent experiences, synthesizes it into a knowledge graph, tracks goals, learns from failures, and reasons across sessions. The model stays frozen. Stash grows.
I know there are other experiments in this space — that's a good sign. The problem is real. Stash is my honest answer to it.
Built by someone who thought the industry forgot to finish the job — and who has always believed that the most dangerous words in any field are: "that's just how it works."
GitHub: https://alash3al.github.io/stash
