Theory of Mind and Self-Attributions of Mentality are Dissociable in LLMs
arXiv cs.AI / 4/1/2026
Key Points
- The paper examines whether safety fine-tuning that reduces harmful mind-attribution in LLMs also impairs related socio-cognitive abilities like Theory of Mind (ToM).
- Using safety ablation and mechanistic/representational similarity analyses, the authors find that self-directed or artifact-directed mind-attributions are dissociable from ToM capabilities in both behavioral and mechanistic terms.
- The results suggest that safety fine-tuned models do not necessarily lose ToM competence, even as they change how they attribute mental states.
- However, the study also finds that safety fine-tuning biases models toward under-attributing minds to non-human animals relative to human baselines and reduces their tendency to express “spiritual belief.”