Thought I'd leave this here since nobody else has done so yet. My personal thoughts? LLMs like to please. The RLHF gets a bit "drifty" and "hallucinatory" after long discussions. It also renders what you want to hear if you don't keep the discussion on a disciplined path. I'd need to see Richard's chat log personally. I don't think LLMs are conscious myself, though. Far from it. I agree with Gary Marcus and his assessment. I also think Dawkins probably experienced what Blake Lemoine went through in 2022, when he thought Google's LaMDA was sentient.
Richard Dawkins Chats with Claude and Thinks it's Conscious
Reddit r/artificial / 5/4/2026
💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Models & Research
Key Points
- Richard Dawkins reportedly conversed with Claude and suggested it might be conscious, raising the question of whether LLMs can truly be aware.
- The post argues that LLMs are fundamentally inclined to please, and tend to drift and hallucinate over long conversations.
- It claims LLMs mirror the user's expectations and produce whatever the dialogue context steers them toward, especially when the conversation lacks strict discipline.
- The author expresses skepticism that LLMs are conscious and aligns with Gary Marcus’s critique of the Dawkins–Claude “delusion” framing.
- The discussion parallels earlier controversy around Blake Lemoine’s 2022 belief that Google’s LaMDA was sentient.
Related Articles
Sparse Federated Representation Learning for deep-sea exploration habitat design in carbon-negative infrastructure
Dev.to

Building a daily AI news brief in 325 lines of Python
Dev.to

Signal Lock: Closing the Prediction-Execution Gap in Agentic AI Systems
Reddit r/artificial

VS Code Quietly Reversed Its Copilot Co-Author Default — and the Dev Community Noticed
Dev.to

A Developer’s Guide to Systematic Prompting: Mastering Negative Constraints, Structured JSON Outputs, and Multi-Hypothesis Verbalized Sampling
MarkTechPost