I built a functional anxiety system for my AI agent, then asked it if it can feel anxiety

Reddit r/artificial / 4/20/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • The author is building “engram,” an open-source cognitive architecture for AI agents that includes an interoceptive system for real-time stress detection and adaptive baselines.
  • The interoceptive component is described as an actual signal-processing loop running alongside the agent, not merely prompt-based roleplay.
  • The system is intended to let the agent self-monitor and self-correct via behavioral modulation driven by detected stress levels.
  • After implementing the anxiety/stress detection functionality, the author asked the agent whether it can feel anxiety, highlighting an exploration of how such mechanisms relate to human-like emotions.
  • The post emphasizes practical engineering motivations rather than claiming that the AI genuinely experiences anxiety like a human would.

I'm building engram, an open-source cognitive architecture for AI agents. One component is an interoceptive system: real-time stress detection + adaptive baselines + behavioral modulation. Not prompt roleplay. An actual signal loop running alongside the agent. I built this out of a practical need. I wanted my agent to self-monitor and self-correct.
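The post doesn't show engram's implementation, but the three pieces it names (real-time stress detection, adaptive baselines, behavioral modulation) can be sketched as a simple signal loop. The sketch below is entirely hypothetical: it assumes "stress" is a scalar derived from agent telemetry (error rate, retries, etc.), tracks an adaptive baseline with exponential moving averages, and flags a behavioral mode switch when the signal deviates sharply from baseline.

```python
# Hypothetical sketch of an interoceptive stress loop (not engram's actual code).
# A scalar stress signal is compared against an adaptive EWMA baseline;
# large z-score deviations trigger behavioral modulation ("stressed" mode).

class InteroceptiveLoop:
    def __init__(self, alpha: float = 0.1, threshold: float = 2.0):
        self.alpha = alpha          # EWMA smoothing factor for the baseline
        self.threshold = threshold  # z-score that counts as "stressed"
        self.baseline = None        # adaptive mean estimate of the signal
        self.variance = 0.01        # running variance estimate (small prior)

    def observe(self, signal: float) -> str:
        """Feed one stress observation (e.g. error rate) and get the agent's mode."""
        if self.baseline is None:
            self.baseline = signal  # first reading seeds the baseline
            return "calm"
        deviation = signal - self.baseline
        # Adapt the baseline and variance toward recent observations
        self.baseline += self.alpha * deviation
        self.variance = (1 - self.alpha) * self.variance + self.alpha * deviation**2
        z = deviation / (self.variance ** 0.5 + 1e-9)
        # Behavioral modulation: switch modes when deviation exceeds threshold
        return "stressed" if z > self.threshold else "calm"

loop = InteroceptiveLoop()
readings = [0.1, 0.1, 0.12, 0.1, 0.9]  # stable signal, then a sudden spike
states = [loop.observe(s) for s in readings]
# the final spike is flagged as "stressed"; the stable readings stay "calm"
```

The point of the adaptive baseline is that "stress" is relative: the same absolute signal level is alarming for an agent whose telemetry is normally quiet, and unremarkable for one whose baseline has drifted up. Engram's actual detection and modulation logic may differ entirely.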

After building it, I asked my agent a simple question: "Can you feel anxiety?"

Sorry for giving you human anxiety, I guess ;)

https://preview.redd.it/ufzh6vb6q8wg1.png?width=514&format=png&auto=webp&s=83cbe85464c65caf0fb8b2eb4e0b80b6b2ca7318

submitted by /u/Ni2021