Quoting Kyle Kingsbury

Simon Willison's Blog, 15th April 2026



15th April 2026

I think we will see some people employed (though perhaps not explicitly) as meat shields: people who are accountable for ML systems under their supervision. The accountability may be purely internal, as when Meta hires human beings to review the decisions of automated moderation systems. It may be external, as when lawyers are penalized for submitting LLM lies to the court. It may involve formalized responsibility, like a Data Protection Officer. It may be convenient for a company to have third-party subcontractors, like Buscaglia, who can be thrown under the bus when the system as a whole misbehaves.

Kyle Kingsbury, The Future of Everything is Lies, I Guess: New Jobs

Posted 15th April 2026 at 3:36 pm


Tags: careers, ai, ai-ethics, kyle-kingsbury