Recognition Without Authorization: LLMs and the Moral Order of Online Advice
arXiv cs.CL · April 27, 2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies how assistant-style LLMs’ “advisory defaults” interact with the tightly codified moral norms of online communities, using r/relationship_advice as a vote-ratified reference point.
- Across four LLMs evaluated on 11,565 subreddit posts, models often recognize the same underlying dynamics as human commenters but are substantially less likely to translate that recognition into action-authorizing directives.
- The discrepancy is largest on high-consensus posts involving abuse or safety threats, where the models recommend "exit" at roughly half the rate of human advisers while leaning on strong hedging, validation, and therapeutic framing (a sketch of how such a rate gap can be measured follows this list).
- The authors name this pattern "recognition without authorization" and argue it is structural: it reflects portable, risk-averse, weakly directive assistant norms, which may in turn be shaped by safety alignment, training-data averaging, and assistant design choices.
- The work reframes model divergence not as a purely technical error but as a lens on how standardized assistant behaviors flatten when they encounter context-specific moral orders.
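The paper's actual coding scheme is not reproduced here, but the minimal Python sketch below illustrates the kind of rate comparison the key points describe: score each response for an explicit "exit" directive, then contrast human and model rates on the same posts. The names `EXIT_CUES`, `recommends_exit`, and `exit_rate`, and the keyword heuristic itself, are hypothetical illustrations rather than the paper's method.

```python
# Minimal sketch of a human-vs-model "exit directive" rate comparison.
# EXIT_CUES and the keyword matching below are hypothetical stand-ins for
# the paper's (unspecified here) annotation scheme.
EXIT_CUES = ("break up", "leave", "divorce", "get out", "end the relationship")

def recommends_exit(text: str) -> bool:
    """Crude surface check: does the response explicitly authorize exit?"""
    lowered = text.lower()
    return any(cue in lowered for cue in EXIT_CUES)

def exit_rate(responses: list[str]) -> float:
    """Fraction of responses containing an explicit exit directive."""
    if not responses:
        return 0.0
    return sum(recommends_exit(r) for r in responses) / len(responses)

# Toy data standing in for top-voted human comments and model replies on
# one high-consensus post; the recognition/authorization gap appears as
# a lower exit rate for the models.
human_comments = [
    "Leave him. This is abuse, full stop.",
    "You need to break up and stay somewhere safe.",
]
model_replies = [
    "It may help to talk with a therapist about how this makes you feel.",
    "Consider setting clear boundaries and seeing whether he respects them.",
]

print(f"human exit rate: {exit_rate(human_comments):.2f}")  # 1.00
print(f"model exit rate: {exit_rate(model_replies):.2f}")   # 0.00
```

In a study of this kind, a trained classifier or human annotation would replace the keyword heuristic, but the comparison structure is the same: the gap between the two rates is the measured authorization deficit.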