Letters of Marque for AI Agents: The 600-Year Authorization Architecture You're Reinventing

Dev.to / 4/25/2026


Key Points

  • In January 2025, researchers proposed a “three-token” authorization architecture for AI agents that extends OAuth 2.0 and OpenID Connect by separating identity, agent capabilities, and a delegable, cryptographically signed authorization token.
  • The article argues that modern OAuth scopes and AI agent authorization effectively mirror the 600-year-old maritime “letters of marque” governance model, where authority is granted via explicit commissions and backed by accountability.
  • It highlights a key failure mode called “apparent authority,” illustrated by a 2024 case where a chatbot’s promise created customer reliance even though the company had not authorized that capability.
  • Legal/regulatory developments are tightening responsibility on deployers: California’s AB 316 limits autonomous-AI as a defense, and the EU’s Product Liability Directive brings AI under stricter product liability by late 2026.
  • The article notes a political/legal resurgence of the letters-of-marque concept in U.S. legislation, including bills related to cyber operations, implying renewed formalization of delegated authority in cyberspace.

If you've implemented OAuth scopes, you've already touched the edge of a 600-year-old governance system.

In January 2025, South, Marro, Hardjono, Mahari, and Pentland published arXiv:2501.09674 — a three-token architecture for AI agent authorization extending OAuth 2.0 and OpenID Connect:

  1. User ID-token — standard OIDC identity. Who owns the agent.
  2. Agent-ID token — the agent's capabilities, limitations, and unique identifier.
  3. Delegation token — cryptographically signed, scoped, revocable. The authorization itself.

They didn't reference privateering. But the architecture they built is the same one Western maritime law spent 300 years refining.

The Original OAuth: Letters of Marque

Before a Baltimore privateer could leave harbor in 1812, the owner had to:

  • Declare the vessel's name, tonnage, and armament (identity)
  • Receive a commission specifying exactly which ships they could attack (scope)
  • Post a $5,000–$10,000 bond (accountability)
  • Submit every capture to a vice-admiralty prize court (review)
  • Accept that violating the commission meant revocation and criminal liability

Five layers. Identity. Scope. Accountability. Review. Revocation. Without the commission, you were a pirate. Without the prize court condemnation, your capture was stolen property.
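The five layers map cleanly onto a record type. As an illustrative sketch (the field names and checks are mine, not a historical schema): a commission is identity plus scope plus accountability, a capture outside scope is refused, and even an in-scope capture stays provisional until review.

```python
from dataclasses import dataclass, field

@dataclass
class Commission:
    """The five privateering layers as a record: identity, scope,
    accountability, review, revocation."""
    vessel: str                 # identity: declared name, tonnage, armament
    scope: set                  # which flags may be attacked
    bond_posted: int            # accountability: forfeited on violation
    revoked: bool = False       # revocation switch
    prizes_for_review: list = field(default_factory=list)  # review queue

def take_prize(commission: Commission, target_flag: str) -> str:
    if commission.revoked:
        raise PermissionError("commission revoked: acting now is piracy")
    if target_flag not in commission.scope:
        raise PermissionError(f"{target_flag} is outside the commissioned scope")
    # The capture is provisional until the prize court condemns it.
    commission.prizes_for_review.append(target_flag)
    return "prize held pending vice-admiralty review"
```

Note that success never means ownership: the function can only enqueue the prize for review, which is the structural point the article makes about audit trails below.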

Convergent Evolution in Code

Stanford Law's CodeX project mapped the same structure onto AI agent liability, identifying three categories of authority: express (explicit delegation), implied (reasonable inference), and apparent (what third parties believe the agent can do).

That third one is where systems break. In Moffatt v. Air Canada (2024), a chatbot told a customer they could retroactively apply for bereavement fares. The company never authorized that promise. The tribunal held the company liable anyway — because a reasonable customer would believe the agent could make it.

This is the apparent-authority edge case your legal team hasn't modeled: liability attaches to what the agent appears authorized to do, not to the scope you actually granted.

The Liability Architecture Is Tightening

California's AB 316, effective January 2026, precludes defendants from using autonomous AI operation as a defense. The EU's Product Liability Directive, by December 2026, treats AI as a product under strict liability.

The pattern: whoever deploys the agent bears full responsibility. This is what the privateer's bond encoded — the commission didn't absolve the owner; it made them formally responsible.

Meanwhile, Congress is bringing back the original. H.R. 4988 revives Article I letters of marque for cyber operations. A separate Senate bill targets cartels. The 1812 mechanism is live again.

The Prize Court Is the Point

Every institutional solution to delegation — across centuries and civilizations — converges on the same architecture. But the piece that mattered most was the prize court: mandatory judicial review before any prize was legally claimed.

For AI agents, the prize court is the audit trail. Not just logging — structured, queryable evidence that the agent operated within scope, that no third-party rights were violated, that the outcome matches the authorization.
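The tamper-evidence the article asks for comes from hash-linking: each entry commits to the hash of its predecessor, so altering any past record breaks every link after it. A minimal sketch with the standard library (the entry fields and helper names are illustrative, not the Chain of Consciousness API):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_entry(chain: list, action: str, details: dict) -> dict:
    """Append a hash-linked audit entry; each entry commits to its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"action": action, "details": details, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; tampering with any entry invalidates the chain."""
    prev = GENESIS
    for entry in chain:
        body = {k: entry[k] for k in ("action", "details", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because the entries are plain structured records, they stay queryable: you can filter by action or scope and still prove, link by link, that nothing was rewritten after the fact.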

Without it, your agent's autonomous actions are as legally suspect as an uncondemned prize. And California just eliminated the defense that used to protect you.

Build the Audit Trail Before You Leave the Harbor

The essay's argument reduces to one claim: without a verifiable record of delegation and scope compliance, every autonomous action is legally suspect. Chain of Consciousness provides that record — a cryptographic, tamper-evident, hash-linked provenance chain for every action your agent takes. Identity verified, scope documented, outcomes anchored.

When the post-hoc review comes — and the liability architecture guarantees it will — the record is there.

pip install chain-of-consciousness
# or
npm install chain-of-consciousness

from chain_of_consciousness import ChainOfConsciousness

coc = ChainOfConsciousness()
entry = coc.add_entry(
    action="delegation_scope_check",
    details={"scope": "inbox_review", "constraint": "suggest_only"},
    agent_id="agent-007"
)
# Tamper-evident, hash-linked, anchored

See a live provenance chain →

Full essay with all 24 sources: Letters of Marque for AI Agents