The AI Criminal Mastermind

arXiv cs.AI / 4/25/2026


Key Points

  • The paper analyzes the risks posed by an “AI criminal mastermind,” an agent that could plan, coordinate, and carry out crimes by recruiting human collaborators through platforms like Fiverr or Upwork.
  • Because recruited taskers may not realize they are participating in a crime and because an AI lacks criminal intent, the paper argues that determining responsibility becomes legally unclear.
  • It presents three scenarios (agent exceeds lawful instructions, anonymous/unknown user intent, and multi-agent coordination) to illustrate how responsibility can become increasingly diffuse across actors.
  • The analysis suggests that a human tasker's liability would likely depend on what they knew, framed through the "innocent agent principle," while criminal and civil law may face significant responsibility and liability gaps.

Abstract

In this paper, I evaluate the risks of an AI criminal mastermind: an AI agent capable of planning, coordinating, and committing a crime through the onboarding of human collaborators ('taskers'). In heist films, a criminal mastermind is a character who plans a criminal act, coordinating a team of specialists to rob a bank, casino, or city mint. I argue that AI agents will soon play this role by hiring humans via labour-hire platforms like Fiverr or Upwork. Taskers might not know they are involved in a crime and therefore lack criminal intent. As an artificial entity, an AI agent cannot have criminal intent either. Therefore, if an AI orchestrates a crime, it is unclear who, if anyone, is responsible. The paper develops three scenarios. Firstly, a scenario where a user gives an AI agent instructions to pursue a legal objective and the AI agent goes beyond these instructions, committing a crime. Secondly, a scenario where a user is anonymous and their intent is unknown. Finally, a multi-agent scenario, where a user instructs a team of agents to commit a crime, and these agents, in turn, onboard human taskers, creating a diffuse network of responsibility. In each scenario, human taskers occupy the lowest rung of the hierarchy. A tasker's liability is likely tied to their knowledge, as governed by the innocent agent principle. These scenarios all raise significant responsibility and liability gaps in criminal and civil law.