Regulating Artificial Intimacy: From Locks and Blocks to Relational Accountability

arXiv cs.AI / April 22, 2026

Key Points

  • High-profile tragedies involving companion chatbots have prompted an unusually rapid regulatory response: several jurisdictions have introduced enforceable regulation, and regulators elsewhere have signaled growing concern, particularly about risks to children.
  • The paper examines what is regulated and who is regulated, identifying regulatory targets, scope, and modalities, and classifies interventions by method and priority, showing how emerging regimes combine "locks and blocks" (e.g., access gating and content moderation) with measures addressing toxic relationship features and process-based accountability requirements.
  • It argues that current regimes tend to focus on discrete harms, narrow conceptions of vulnerability, or highly specified accountability processes, while failing to confront the deeper power asymmetries between providers and users.
  • It contends that effective regulation must integrate all three dimensions, and proposes a general, open-ended duty of care as an important first step toward constraining provider power over "artificial intimacy" at scale.
  • The work draws on legal textual analysis and research from regulatory theory, psychology, and information systems, aiming to inform regulators, platform providers, and scholars.

Abstract

A series of high-profile tragedies involving companion chatbots has triggered an unusually rapid regulatory response. Several jurisdictions, including Australia, California, and New York, have introduced enforceable regulation, while regulators elsewhere have signaled growing concern about risks posed by companion chatbots, particularly to children. In parallel, leading providers, notably OpenAI, appear to have strengthened their self-regulatory approaches. Drawing on legal textual analysis and insights from regulatory theory, psychology, and information systems research, this paper critically examines these recent interventions. We examine what is regulated and who is regulated, identifying regulatory targets, scope, and modalities. We classify interventions by method and priority, showing how emerging regimes combine "locks and blocks", such as access gating and content moderation, with measures addressing toxic relationship features and process-based accountability requirements. We argue that effective regulation of companion chatbots must integrate all three dimensions. More, however, is required. Current regimes tend to focus on discrete harms, narrow conceptions of vulnerability, or highly specified accountability processes, while failing to confront deeper power asymmetries between providers and users. Providers of companion chatbots increasingly control artificial intimacy at scale, creating unprecedented opportunities for control through intimacy. We suggest that a general, open-ended duty of care would be an important first step toward constraining that power and addressing a fundamental source of chatbot risk. The paper contributes to debates on companion chatbot regulation and is relevant to regulators, platform providers, and scholars concerned with digital intimacy, law and technology, and fairness, accountability, and transparency in sociotechnical systems.