Quoting Kyle Kingsbury

Simon Willison's Blog / 4/16/2026



15th April 2026

I think we will see some people employed (though perhaps not explicitly) as meat shields: people who are accountable for ML systems under their supervision. The accountability may be purely internal, as when Meta hires human beings to review the decisions of automated moderation systems. It may be external, as when lawyers are penalized for submitting LLM lies to the court. It may involve formalized responsibility, like a Data Protection Officer. It may be convenient for a company to have third-party subcontractors, like Buscaglia, who can be thrown under the bus when the system as a whole misbehaves.

Kyle Kingsbury, The Future of Everything is Lies, I Guess: New Jobs

Posted 15th April 2026 at 3:36 pm


Tags: careers, ai, ai-ethics, kyle-kingsbury