Reciprocal Trust and Distrust in Artificial Intelligence Systems: The Hard Problem of Regulation

arXiv cs.AI / 4/8/2026


Key Points

  • The paper frames AI regulation as fundamentally tied to whether AI systems can be trusted and what mechanisms increase their trustworthiness for users and stakeholders.
  • It argues that AI systems should be viewed, at least partially, as artifacts with agency that can form reciprocal relationships of trust and distrust with humans.
  • It analyzes how these reciprocal trust dynamics complicate the work of regulators who must oversee AI systems under conditions of uncertainty and varying stakeholder perceptions.
  • The article concludes by highlighting unresolved tensions and dilemmas for future AI governance and regulatory design.

Abstract

Policy makers, scientists, and the public are increasingly confronted with thorny questions about the regulation of artificial intelligence (AI) systems. A key common thread concerns whether AI can be trusted and which factors can make it more trustworthy in the eyes of stakeholders and users. This is indeed crucial, as the trustworthiness of AI systems is fundamental both for democratic governance and for the development and deployment of AI. This article advances the discussion by arguing that AI systems should also be recognized, at least to some extent, as artifacts capable of exercising a form of agency, thereby enabling them to engage in relationships of trust or distrust with humans. It further examines the implications of these reciprocal trust dynamics for regulators tasked with overseeing AI systems. The article concludes by identifying key tensions and unresolved dilemmas that these dynamics pose for the future of AI regulation and governance.