In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions
arXiv cs.CL / 3/25/2026
Tags: Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper conducts the first large-scale computational study of public trust and distrust toward generative AI using multi-year Reddit data from 2022–2025 across 39 subreddits and 230,576 posts.
- It combines crowd-sourced annotations with classification models to scale analysis longitudinally, finding trust and distrust are nearly balanced over time but with a slight trust advantage.
- The study observes attitude shifts around major LLM/model releases, suggesting public sentiment is responsive to significant technical events.
- Trust and distrust are primarily shaped by technical performance and usability, and personal experience is the most frequently cited basis for these attitudes.
- The authors identify distinct trust/distrust patterns by trustor type (e.g., experts, ethicists, and general users) and propose a methodological framework for future trust measurement at scale.
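The scaling approach described above (a small set of human annotations used to train a classifier that labels the full corpus) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the example posts, labels, and the TF-IDF + logistic regression pipeline are all assumptions for demonstration.

```python
# Sketch: scale crowd-sourced trust/distrust annotations to a large corpus
# with a text classifier. Data and model choice are hypothetical; the paper
# does not specify this exact pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for crowd-sourced annotations: 1 = trust, 0 = distrust.
annotated_posts = [
    ("ChatGPT nailed my refactor, I rely on it daily", 1),
    ("The model answered my question correctly and fast", 1),
    ("It hallucinated citations again, can't trust it", 0),
    ("Generative AI keeps making up facts", 0),
]
texts, labels = zip(*annotated_posts)

# Word and bigram TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Apply the trained classifier to unlabeled posts to extend the
# analysis longitudinally across the full 230k-post corpus.
new_posts = ["It invented a fake source for my essay"]
pred = clf.predict(new_posts)
```

In practice one would train on the full annotated subset, validate against held-out human labels, and then bucket the predicted labels by month to track trust/distrust over time and around model releases.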
Related Articles
I Extended the Trending mcp-brasil Project with AI Generation — Full Tutorial
Dev.to
The Rise of Self-Evolving AI: From Stanford Theory to Google AlphaEvolve and Berkeley OpenSage
Dev.to
Neural Networks in Mobile Robot Motion
Dev.to
Retraining vs Fine-tuning or Transfer Learning? [D]
Reddit r/MachineLearning