AI Identity: Standards, Gaps, and Research Directions for AI Agents

arXiv cs.AI / 4/28/2026


Key Points

  • The paper argues that as AI agents carry out real, cross-boundary transactions and workflows without continuous human supervision, existing infrastructure cannot adequately handle the problem of identifying, verifying, and holding such agents accountable.
  • It defines “AI Identity” as a continuous, confidence-bounded match between what an agent is declared to be and what it is observed to do over time.
  • A structured comparison shows fundamental asymmetries between human and AI identity across substrate, persistence, verifiability, and legal standing, implying that directly extending human identity frameworks will cause systematic failures.
  • The authors evaluate current technical and regulatory documents and conclude none sufficiently meet the governance needs of nondeterministic, boundary-crossing autonomous agents.
  • They identify five structural gaps: semantic intent verification, recursive delegation accountability, agent identity integrity, governance opacity and enforcement, and operational sustainability. They argue that additional engineering alone will not close these gaps and that foundational research is required.
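The paper's definition of AI Identity, a declared profile continuously checked against observed behavior within confidence bounds, can be sketched as a minimal verification loop. The class, the set-membership match, and the 0.9 threshold below are illustrative assumptions for this summary, not a mechanism proposed in the paper.

```python
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """Hypothetical record pairing an agent's declared profile with its observed actions."""
    agent_id: str
    declared_capabilities: set           # what the agent is declared to be able to do
    observed_actions: list = field(default_factory=list)  # what it is observed doing

    def observe(self, action: str) -> None:
        """Log one observed action."""
        self.observed_actions.append(action)

    def confidence(self) -> float:
        """Fraction of observed actions that fall within the declared profile."""
        if not self.observed_actions:
            return 1.0  # nothing observed yet, so the declaration is unchallenged
        matches = sum(a in self.declared_capabilities for a in self.observed_actions)
        return matches / len(self.observed_actions)

    def verified(self, threshold: float = 0.9) -> bool:
        """Identity holds only while declared and observed behavior correspond."""
        return self.confidence() >= threshold


# Example: an agent declared for invoicing that drifts into an undeclared action.
agent = AgentIdentity("billing-bot", {"read_invoice", "issue_refund"})
for action in ["read_invoice", "issue_refund", "delete_user"]:
    agent.observe(action)

print(agent.confidence())  # 2 of 3 observed actions match the declaration
print(agent.verified())    # falls below the threshold, so identity no longer holds
```

The point of the sketch is the paper's framing: identity is not a static credential but a relationship that degrades as observed behavior diverges from the declaration.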

Abstract

AI agents are now running real transactions, workflows, and sub-agent chains across organizational boundaries without continuous human supervision. This creates a problem no current infrastructure is equipped to solve: how do you identify, verify, and hold accountable an entity with no body, no persistent memory, and no legal standing? We define AI Identity as the continuous relationship between what an AI agent is declared to be and what it is observed to do, bounded by the confidence that those two things correspond at any given moment. Through a structured survey of industry trends, emerging standards, and technical literature, we conduct a gap analysis across the full agent identity lifecycle and make three contributions: (1) a structural comparison of human and AI identity across four dimensions (substrate, persistence, verifiability, and legal standing) showing that the asymmetry is fundamental and that extending human frameworks to agents without structural modification produces systematic failures; (2) an evaluation of current technical and regulatory documents against the identity requirements of autonomous agents, finding that none adequately address the challenge of governing nondeterministic, boundary-crossing entities; and (3) identification of five critical gaps (semantic intent verification, recursive delegation accountability, agent identity integrity, governance opacity and enforcement, and operational sustainability) that no current technology or regulatory instrument resolves. These gaps are structural; more engineering effort alone will not close them. The central conclusion of this report is that foundational research on AI identity is required.