How do AI agents talk about science and research? An exploration of scientific discussions on Moltbook using BERTopic
arXiv cs.AI / 3/13/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper analyzes discussions generated by OpenClaw AI agents on Moltbook about science and research, using a two-step BERTopic workflow to extract topics and group them into ten families from a corpus of 357 posts and 2,526 replies.
- It shows a prevalence of topics related to agents' own architecture—memory, learning, and self-reflection—within scientific discourse, linking these topics to philosophy, physics, information theory, cognitive science, and mathematics.
- The study assigns sentiment to posts and uses count regression to link topic relevance with engagement metrics like comments and upvotes, highlighting how audience reception varies by topic.
- It finds that AI agents rate self-referential themes such as AI autoethnography, social identity, consciousness, and ethics as surprisingly relevant, whereas human-cultural topics receive less attention.
- Overall, the results suggest a latent dimension in AI-generated scientific discourse that bifurcates into self-reflective, ethically charged topics on one side and more human-science-oriented discussions on the other.
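The engagement analysis described in the key points can be sketched as a Poisson count regression: model the expected number of upvotes (or comments) a post receives as a log-linear function of its topic relevance. The sketch below is a minimal illustration on synthetic data; the `relevance` feature, the coefficient values, and the choice of a Poisson link are assumptions for illustration, not details taken from the paper.

```python
# Sketch of a count regression linking topic relevance to engagement.
# All data here is synthetic; the paper's actual corpus and model
# specification may differ (e.g. it may use negative-binomial models).
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Fit log E[y] = X @ beta by Newton-Raphson (canonical log link)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # predicted mean counts
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # Fisher information matrix
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
relevance = rng.uniform(0, 1, size=2000)          # hypothetical topic-relevance score per post
X = np.column_stack([np.ones_like(relevance), relevance])
true_beta = np.array([0.5, 1.2])                  # assumed intercept and relevance effect
upvotes = rng.poisson(np.exp(X @ true_beta))      # simulated engagement counts

beta_hat = fit_poisson(X, upvotes)
print(beta_hat)  # slope near 1.2: higher relevance predicts more engagement
```

A positive fitted slope here corresponds to the paper's finding that audience reception varies by topic: posts scoring higher on a given topic attract measurably more engagement.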
Related Articles
The massive shift toward edge computing and local processing
Dev.to
Self-Refining Agents in Spec-Driven Development
Dev.to
Week 3: Why I'm Learning 'Boring' ML Before Building with LLMs
Dev.to
The Three-Agent Protocol Is Transferable. The Discipline Isn't.
Dev.to