Anthropic's magic code-sniffer: More Swiss cheese than cheddar, for now

The Register / 4/27/2026


Key Points

  • The article critiques Anthropic’s “magic code-sniffer” approach to AI vulnerability hunting, arguing that it finds the classes of flaw it was trained to detect rather than the novel issues that turn up in the real world.
  • It suggests the system’s results can be biased toward the patterns humans taught it, leaving gaps in coverage for unexpected or novel vulnerabilities.
  • The piece frames the current performance as “more Swiss cheese than cheddar,” implying meaningful but incomplete effectiveness.
  • Overall, it emphasizes that AI-based security tooling is helpful yet still limited and should not be trusted as a comprehensive substitute for rigorous human-led security practices.

AI vuln-hunter finds what humans taught it to find. Funny that

Mon 27 Apr 2026 // 08:30 UTC

Opinion In retrospect, calling it Mythos made it a hostage to fortune. Anthropic may have hoped that the name implied its AI code security model had mythical, god-like powers, but there's an alternate reading. Another definition of Mythos is a set of beliefs of obscure origin that are incompatible with reality.

That reality is trickling in, and it’s looking less mythical, more typical. Mythos is a great tool that can automate a lot of what expert humans do, and it’s the expert humans who get the most from it. It is very good at finding classes of vulnerability that humans already know about, and not so good at finding ones they don’t. Training, amirite? Project Glasswing, which limits early use to trusted partners with a real need, is probably a responsible way to use its powers for good, but other, unrestricted models are quite good at this too. Some hype, some truth, LLMs gonna LLM.

Light at the end of the tunnel


It is cynical to say the only real innovation is an AI company operating ethically. Equally cynical is seeing the closed roll-out and the attendant publicity as merely an exercise in hype. It is more constructive, arguably more accurate, and certainly more exciting, to take all this as an early glimpse of a better future. One where the threat landscape stops being a function of geological and climatic forces we can’t control, turning instead into one cultivated, controlled and gratifyingly anti-climactic.

Two propositions point the way. One is that the effectiveness of tools like Mythos will continue to evolve, exposing more and more structural and individual code flaws. The other, that these tools will inevitably become generally available. How quickly and cheaply may be controllable, but the outcome is inevitable. There are no long-term secrets in IT.

Right now, and for some time to come, most running code will have been written in the pre-industrial age of vulnerability detection. Eyeballs, not AI balls, did the work. This is a bad public environment into which to dump roaming packs of implacable vuln-hunting robots. If they come too soon, it’ll be messy. And they are coming.

But if we survive that transition intact, then let the robots roam at will. There is one class of code that is guaranteed to present no security risks whatsoever, and that’s undeployed code. New code has a lot of problems, some caught before deployment and some that aren’t, but never an infinite number. Where truly excellent tools exist, code can be made truly excellent before release. It doesn’t matter if the same tools are available to the bad guys thereafter.

A good model, and an often cited one, is aviation safety. At the beginning of the jet age, new airliners had structural and mechanical faults that made them fall out of the sky. Over time, not only did design and materials knowledge improve, but the engineering and regulatory disciplines evolved alongside. Now, we still have crashes, but they are invariably traceable to things that could and should have been done right, but weren't. There’s no new undiscovered class of failure waiting in the wings. It is highly unlikely that code is any different; after all, we’ve been writing it for precisely as long as we’ve been flying jets.

Just fixing code vulnerabilities doesn’t fix security, any more than knowing how to build and fly exquisitely safe aircraft stops fuel contamination, flocks of geese, or foolish humans from creasing the things. It does help immensely, though. Looking at exploits built on long chains of known and unknown vulns shows how flaky code can be, but it also shows how removing just one of those bugs shuts down the entire attack. The Swiss cheese model of failure works less and less well the more the cheese tends to cheddar.
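The chain logic above can be put in numbers. Under the Swiss cheese model, an exploit succeeds only if every defensive layer happens to have an exploitable hole, so the chain's success probability is the product of the per-layer odds, and closing any single hole drops it to zero. A minimal sketch, with entirely invented probabilities for illustration:

```python
from math import prod

def chain_success(hole_probs):
    """Probability an attack chain succeeds when it must pass
    through a hole in every layer independently (Swiss cheese model)."""
    return prod(hole_probs)

# Hypothetical five-step exploit chain: each value is the chance that
# the hole in that layer is present and exploitable.
chain = [0.9, 0.8, 0.7, 0.9, 0.6]

print(chain_success(chain))    # all holes open: ~0.27

patched = chain.copy()
patched[2] = 0.0               # patch just one vuln in the chain
print(chain_success(patched))  # 0.0 -- the whole attack dies
```

The arithmetic is the author's point in miniature: a single fixed bug zeroes out the chain, which is why even an imperfect vuln-hunter that closes holes at random still makes whole exploit chains unviable.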

As for the holes outside the code, the supply chain exploits, the social engineering, the straightforward inside sabotage job: to the extent that we can encode, model and train on them, they too will be amenable to the inexhaustible patience of the inference engines. And while huge swathes of enterprise infrastructure continue to run old, unpatched or misconfigured systems, it’ll be like flying on aircraft from the jet age's deadliest years. There’s no IT equivalent of the FAA with the power to ground that which should never be flying, much as that would be a fun counterfactual.


This too shall pass. There is no way that a tool which catches vulnerabilities by the hundred does not make old code safer, and new code safer still. It will be most interesting to see how the tools for finding flaws evolve alongside the techniques for designing, factoring and writing code for inherent strength. Nobody should expect the way things are now to be the most efficient, least expensive way there is. Nor should anyone expect human expertise to fall out of use. The fact that so many aviation safety issues revolve around human failure shows how intrinsic humans still are in design, construction, maintenance and operation aloft.

Let computers do what computers are good at, let humans do what humans are good at. Old but true. We know from decades of digital life that humans aren’t so good at security, and that computers aren’t so hot at it either. In another old saying — give us the tools and we can finish the job. Mythos isn’t a tool that can let us do that, not yet. AI in general seems determined to make things worse.

Now, at last, we can see a path forward, a different way of doing things that is likely to actually happen. What was a threat landscape can become a garden where good things grow. That’s no myth, that’s the future. ®
