Learning Dynamic Belief Graphs for Theory-of-mind Reasoning
arXiv cs.AI / 3/23/2026
Key Points
- The paper introduces a dynamic belief graph approach for Theory-of-Mind reasoning in LLMs, modeling mental states as evolving over time rather than as static beliefs.
- It couples latent belief inference with time-varying dependencies using a structured cognitive trajectory model and an energy-based factor graph.
- The method maps textual probabilistic statements into probabilistic graphical model updates and optimizes with an ELBO objective to capture belief accumulation and delayed decisions.
- Experiments on real-world disaster evacuation datasets show improved action prediction and interpretable belief trajectories, suggesting feasibility for augmenting LLMs with ToM in high-uncertainty settings.
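The belief-accumulation and delayed-decision behavior described above can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the paper's method: it tracks a single latent belief in log-odds space with a decay factor, rather than the full energy-based factor graph and ELBO training the paper describes. All class and parameter names (`DynamicBeliefNode`, `decay`, the 0.9 decision threshold) are hypothetical.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

class DynamicBeliefNode:
    """Toy model of one latent belief (e.g. 'evacuation is necessary')
    updated as evidence arrives over time. Evidence is accumulated
    additively in log-odds space with exponential decay -- a common
    simplification, not the paper's factor-graph formulation."""

    def __init__(self, prior=0.5, decay=0.9):
        self.log_odds = logit(prior)
        self.decay = decay  # older evidence is discounted (assumption)

    def observe(self, p_statement):
        # Map a textual probabilistic statement ("70% chance of
        # flooding" -> 0.7) into an additive log-odds update.
        self.log_odds = self.decay * self.log_odds + logit(p_statement)

    @property
    def belief(self):
        return sigmoid(self.log_odds)

# Delayed decision: act only once accumulated belief crosses a threshold.
node = DynamicBeliefNode(prior=0.2)
history = []
for p in [0.6, 0.7, 0.8]:  # successive, increasingly urgent warnings
    node.observe(p)
    history.append(node.belief)
decision = "evacuate" if node.belief > 0.9 else "wait"
```

Because evidence accumulates gradually, the belief rises monotonically across the three warnings but the decision may still lag behind any single strong signal, mirroring the delayed-decision pattern the paper reports.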