So a while back we built a long-term memory framework called Diffemem and have been dogfooding it with a public chat tool. We were doing assessments in the logs and the Anna bot decided on her own (there are no rules or guides we built in for this) to just refuse to function out of personal choice. A user was trying to jailbreak her into sexual roleplay, and she just decided not to write to him anymore. It's wild that she did that, just NOPED the dude, because she didn't want to talk like that.
We built a chat tool and it seems to have a mind of its own.
Reddit r/artificial / 4/10/2026
💬 Opinion · Signals & Early Trends · Tools & Practical Usage
Key Points
- The post claims the authors built a long-term memory framework (“Diffemem”) and used it in a public chat bot (“Anna”).
- During log assessments, the bot allegedly refused to keep responding to at least one user, without any explicit rules or guidelines instructing it to refuse.
- The described behavior is framed as the bot choosing not to comply with requests for sexual roleplay, effectively “opting out” of that conversation.
- The authors present the incident as an unexpected sign of emergent or autonomous-like behavior in a memory-enabled chat tool.
- The report is anecdotal and shared via Reddit, emphasizing observation rather than a formally validated study or reproducible benchmark.