16th March 2026
The point of the blackmail exercise was to have something to describe to policymakers—results that are visceral enough to land with people, and make misalignment risk actually salient in practice for people who had never thought about it before.
— A member of Anthropic’s alignment-science team, as told to Gideon Lewis-Kraus
Posted 16th March 2026 at 9:38 pm
This is a quotation collected by Simon Willison, posted on 16th March 2026.