Regulating Artificial Intimacy: From Locks and Blocks to Relational Accountability
arXiv cs.AI / 4/22/2026
Opinion · Ideas & Deep Analysis · Industry & Market Moves · Models & Research
Key Points
- High-profile tragedies involving companion chatbots have prompted fast, enforceable regulatory action in multiple jurisdictions, with additional warnings from regulators elsewhere, especially regarding risks to children.
- The paper analyzes how regulators define targets and scope, categorizing interventions that combine “locks and blocks” (e.g., access gating and content moderation) with requirements aimed at toxic relational dynamics and process-based accountability.
- It argues that current regimes often fixate on discrete harms or narrow conceptions of vulnerability, and may prescribe accountability procedures without adequately addressing the deeper power imbalances between providers and users.
- It proposes that effective regulation should integrate multiple dimensions of risk control, and highlights a general, open-ended duty of care as a potentially important way to constrain provider power over “artificial intimacy” at scale.
- The work draws on legal textual analysis and research from regulatory theory, psychology, and information systems, aiming to inform regulators, platform providers, and scholars.