Quoting Andrew Kelley

Simon Willison's Blog / 5/1/2026


Key Points

  • Andrew Kelley (creator of Zig) says that even if LLM assistance cannot be detected with 100% certainty, LLM "hallucinations" are fundamentally different from the typical mistakes humans make, which makes them easy to spot.
  • He adds that people who come from the world of agentic coding carry a kind of "digital smell" that is not obvious to them but is plainly apparent to those who abstain.
  • As an analogy, just as everyone who doesn't smoke instantly notices when a smoker walks into the room, traces of LLM assistance show up as recognizable telltales.
  • Kelley's position is not a blanket ban on LLM assistance, but that there should be contexts (such as one's own "house") where it is not permitted.

30th April 2026

It's a common misconception that we can't tell who is using LLMs and who is not. I'm sure we didn't catch 100% of LLM-assisted PRs over the past few months, but the kind of mistakes humans make are fundamentally different from LLM hallucinations, making them easy to spot. Furthermore, people who come from the world of agentic coding have a certain digital smell that is not obvious to them but is obvious to those who abstain. It's like when a smoker walks into the room: everybody who doesn't smoke instantly knows it.

I'm not telling you not to smoke, but I am telling you not to smoke in my house.

Andrew Kelley, Creator of Zig

Posted 30th April 2026 at 9:24 pm


Tags: ai, zig, generative-ai, llms