One issue I keep running into while experimenting with local AI agents is that most of them are effectively stateless.
Once a conversation resets, everything the agent "learned" is gone, so agents end up rediscovering the same preferences, decisions, and context over and over.
I've been experimenting with different approaches to persistent memory for agents. Some options I've seen people try:
• storing conversation history and doing retrieval over it
• structured knowledge stores
• explicit "long-term memory" systems that agents can query
The approach I've been experimenting with lately is exposing a memory system through MCP so agents can store and retrieve things like:
• user preferences
• project decisions
• debugging insights
• useful facts discovered during workflows
The idea is to treat these more like "facts worth remembering" rather than just raw conversation history.
I put together a small prototype to explore this idea: https://github.com/ptobey/local-memory-mcp
One example I've been testing is an agent remembering travel preferences and later using those to generate trip ideas based on past conversations.
Curious how others here are approaching this problem.
Are people leaning more toward:
• vector retrieval over past conversations
• structured memory systems
• explicit long-term memory tools for agents?