From Stateless to Situated: Building a Psychological World for LLM-Based Emotional Support
arXiv cs.AI / March 27, 2026
Key Points
- The paper argues that LLM-based emotional support systems fail not only because of response quality, but because stateless next-token generation breaks temporal continuity, stage awareness, and consent boundaries across multi-turn dialogue.
- It proposes LEKIA 2.0, a “situated” LLM architecture that separates a cognitive layer from an executive layer to keep an external situational structure stable and updatable during ongoing conversations.
- The design aims to decouple situational modeling from intervention execution so the system can maintain consistent representations of the user’s context and consent limits.
- The authors introduce a Static-to-Dynamic online evaluation protocol for multi-turn interactions and report that LEKIA 2.0 achieves roughly a 31% average absolute improvement over prompt-only baselines in deep-intervention loop completion.
- Overall, the work positions external situational structure as a key condition for building stable, controllable, and situated emotional support systems.
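The cognitive/executive split described above can be illustrated with a minimal sketch. Note that the class names, state fields, and update rules below are illustrative assumptions, not LEKIA 2.0's actual schema or API: a cognitive layer maintains an external, updatable situational state (stage, consent boundaries, history), while an executive layer reads that state to choose an intervention without mutating it.

```python
from dataclasses import dataclass, field

@dataclass
class SituationalState:
    """External, updatable record of the conversation's psychological context.
    All fields here are hypothetical, not the paper's actual schema."""
    stage: str = "rapport"                     # e.g. rapport -> exploration
    consent_boundaries: set = field(default_factory=set)
    history_summary: list = field(default_factory=list)

class CognitiveLayer:
    """Updates the situational state from each user turn (toy rules)."""
    def update(self, state: SituationalState, user_turn: str) -> SituationalState:
        state.history_summary.append(user_turn)
        # Record explicit consent limits so later turns cannot cross them.
        if "please don't" in user_turn.lower():
            state.consent_boundaries.add(user_turn)
        # Advance the support stage once enough context has accumulated.
        if len(state.history_summary) >= 3 and state.stage == "rapport":
            state.stage = "exploration"
        return state

class ExecutiveLayer:
    """Chooses a stage-appropriate response; reads state, never writes it."""
    def respond(self, state: SituationalState, user_turn: str) -> str:
        if state.stage == "rapport":
            return "Reflective listening response (placeholder)."
        return "Stage-appropriate intervention (placeholder)."

state = SituationalState()
cog, exe = CognitiveLayer(), ExecutiveLayer()
turns = ["I've been anxious.", "Please don't suggest medication.", "Work is hard."]
for turn in turns:
    state = cog.update(state, turn)      # situational modeling
    reply = exe.respond(state, turn)     # intervention execution
```

Because the state object lives outside any single generation call, it persists and stays consistent across turns, which is the decoupling the summary attributes to the architecture.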