The First Real Counterattack

Dev.to / 4/8/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Industry & Market Moves

Key Points

  • The article argues that frontier AI has collapsed the time between vulnerability discovery and exploitation, enabling attacks to progress in minutes rather than months.
  • It highlights an example where “Claude Mythos” reportedly found a 27-year-old high-severity OpenBSD zero-day in hours, illustrating the breakthrough potential of AI-assisted security research.
  • Anthropic’s Project Glasswing is presented as the first serious attempt to counter this AI-driven security shift, with an emphasis on urgency for protecting critical infrastructure.
  • The piece states that Mythos Preview is not planned for public release because its cybersecurity capability is considered too dangerous to make broadly available.
  • The author frames Project Glasswing as a response to an uncomfortable industry reality: defenders have not kept pace as attacks become easier to mount and the cost of cyber incidents keeps climbing.

How Project Glasswing flips the AI security equation — and why it matters for every engineer alive

A 27-year-old bug was sitting in OpenBSD.

Not theoretical. Not a minor edge case. A high-severity zero-day, invisible to every security audit, every static analyzer, every fuzzer that had ever touched that codebase. For 27 years, it waited.

Claude Mythos found it in a matter of hours.

That single fact is all you need to understand why Anthropic just launched Project Glasswing — and why I think it's one of the most important things that happened in tech this year.

The problem nobody wanted to say out loud

For the last two years, the cybersecurity industry has been tiptoeing around an uncomfortable truth: AI is already better than almost any human at finding and exploiting vulnerabilities.

Not eventually. Now.

The window between vulnerability discovery and exploitation has collapsed. What used to take a skilled attacker months — reconnaissance, fuzzing, exploit development, chaining bugs — can now happen in minutes with the right model. CrowdStrike's CTO said it plainly at the Glasswing launch: "capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure."

We've known this was coming. We didn't have a plan.

The default response from the industry was to quietly not talk about it, keep building, and hope defenders would keep pace. They weren't keeping pace. Cyber attack costs are running around $500 billion a year globally, and that number was accelerating before frontier AI models entered the picture.

Project Glasswing is the first serious response.

What Mythos Preview actually is

Claude Mythos Preview is not a product. Anthropic has no plans to release it publicly — and the reason they give is direct: its cybersecurity capabilities make it too dangerous for general availability.

That's a sentence worth sitting with.

Anthropic built a model so capable at finding and understanding software vulnerabilities that they made a deliberate decision to keep it out of reach. Not because of regulatory pressure. Not because of PR. Because they ran the numbers and the asymmetry between attack and defense was too severe.

The benchmark tells the story: Mythos Preview scores 83.1% on CyberGym vulnerability reproduction tests. Claude Opus 4.6 — already one of the best models available — scores 66.6%. That's not an incremental improvement. That's a different category of capability.

In practice, it means:

  • A 27-year-old OpenBSD vulnerability that survived decades of audits
  • A 16-year-old FFmpeg bug that automated fuzzers hit 5 million times without catching it
  • Autonomous exploit chains in the Linux kernel enabling privilege escalation
  • Zero-days across every major OS and every major browser

Five million automated hits. Zero catches. Mythos found it anyway.

This is the moment where deterministic security tooling hits its ceiling. Fuzzers, static analyzers, and symbolic execution work within the space of known patterns. Mythos reasons about code the way a senior security researcher does — contextually, creatively, following the logic of what could go wrong rather than what has gone wrong before.
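To make that distinction concrete, here is a toy example (not the actual FFmpeg or OpenBSD bug — the parser, tag, and field names are invented for illustration) of why blind fuzzing can execute a buggy path's neighborhood millions of times and never fire the bug, while anyone who reads the condition can construct the trigger directly:

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy parser with a latent logic bug, for illustration only.

    The attacker-controlled length field is only trusted when the
    record carries the rare b'GLSW' tag, so the unsafe path is
    reachable on roughly 1 in 2**32 random inputs.
    """
    if len(data) < 8:
        raise ValueError("short input")
    tag = data[:4]
    length = int.from_bytes(data[4:8], "big")
    if tag != b"GLSW":
        # Safe path: length is clamped to what is actually present.
        return data[8:8 + min(length, len(data) - 8)]
    # Buggy path: the declared length is trusted without validation.
    payload = data[8:8 + length]
    if len(payload) != length:
        raise IndexError("out-of-bounds read")  # stand-in for a crash
    return payload

def random_fuzz(trials: int, seed: int = 0) -> int:
    """Blind random fuzzing: count how often the bug fires."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        try:
            parse_record(rng.randbytes(16))
        except IndexError:
            crashes += 1
        except ValueError:
            pass
    return crashes

# A reader of the code constructs the trigger in one line:
# match the magic tag, then claim more bytes than the record holds.
crafted = b"GLSW" + (1000).to_bytes(4, "big") + b"\x00" * 4
```

Run 100,000 random trials and the fuzzer will almost certainly report zero crashes, while `crafted` trips the bug on the first attempt. The trigger lives behind a condition that random mutation essentially never satisfies but contextual reasoning finds immediately — the same shape of failure as five million hits and zero catches.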

The coalition: voluntary, industry-led, fast

Here's what I find most interesting about the structure of Glasswing: it's not a government program.

Amazon, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, JPMorgan, the Linux Foundation — 12 launch partners and 40+ additional organizations. No mandate. No regulatory requirement. No bureaucratic committee that spent 18 months drafting a framework.

Just a coalition of companies that looked at the threat landscape and decided that waiting for regulators to catch up was a losing strategy.

That's how it should work. Industry moving faster than policy, with enough transparency to be accountable. Within 90 days, Anthropic will publish a full report: vulnerabilities found, patches shipped, security improvements achieved. Public. Open. Reproducible.

The $100 million in model credits and $4 million in direct donations to open-source security organizations (OpenSSF, Apache Software Foundation) are real money going to the infrastructure that holds the internet together. The Linux kernel, FFmpeg, OpenBSD — these aren't niche tools. They run banks, hospitals, power grids.

Why this matters to every engineer reading this

I've spent years building software. Automated tests, CI/CD pipelines, security scans baked into the deploy process. I thought that was enough.

It's not enough anymore.

The attack surface of modern software is too large, too interconnected, and too old. We have code that's been running in production since before most of our current team members learned to code. Nobody fully understands it. Nobody has audited all of it. And now an attacker with a capable AI model can map that entire surface faster than your team can read the output.

Project Glasswing doesn't solve this alone. But it's the first time the defenders have access to the same class of capability as the attackers.

That matters. For years the asymmetry ran the other way — attackers needed to find one hole, defenders needed to protect everything. AI didn't change that asymmetry. It accelerated it. Until now.

What Glasswing establishes is a new precedent: frontier AI applied defensively, at scale, to real production software, with public accountability. If it works — and the early results suggest it does — it becomes the new baseline expectation for how serious organizations manage security.

In two years, "did you run this through an AI vulnerability analysis?" will be as standard as "did you write tests?"
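For a sense of what that check might look like in practice, here is a minimal sketch of a pre-merge scan wired into CI using the Anthropic Python SDK. The model identifier and review prompt are illustrative assumptions of mine, not Glasswing's actual interface:

```python
import subprocess

REVIEW_INSTRUCTIONS = (
    "Review this diff for security vulnerabilities: memory safety, "
    "injection, auth bypass, and logic errors. Reply with findings only."
)

def build_review_prompt(diff: str) -> str:
    """Assemble the prompt sent to the model for a pre-merge scan."""
    return f"{REVIEW_INSTRUCTIONS}\n\n```diff\n{diff}\n```"

def review_diff(diff: str, model: str = "claude-opus-4-6") -> str:
    """Send the diff for review. Requires ANTHROPIC_API_KEY in the
    CI environment; the default model name is illustrative."""
    from anthropic import Anthropic
    client = Anthropic()
    msg = client.messages.create(
        model=model,
        max_tokens=2000,
        messages=[{"role": "user", "content": build_review_prompt(diff)}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    # Scan whatever this branch changed relative to main.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(review_diff(diff))
```

A step like this slots in next to the test and lint stages of a pipeline; the open question it doesn't answer — triaging findings, suppressing false positives — is exactly the workflow the industry would still have to standardize.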

The techno-optimist case

I'm not naive about dual-use. The same model that found that 27-year-old OpenBSD bug could, in the wrong hands, have weaponized it. Anthropic knows this. That's exactly why Mythos Preview is restricted.

But I'm also not willing to accept the pessimist framing that says AI in security is inherently destabilizing. That framing ignores something important: the bad actors don't wait for permission. They're already using every available model to find vulnerabilities. The question isn't whether AI changes the security equation — it already has. The question is whether the people building critical software will have access to equivalent tools.

Glasswing says: yes, they will.

That's the techno-optimist case. Not that technology solves everything automatically. But that when the people who understand the stakes make deliberate, coordinated decisions, technology can tip the balance toward the people trying to protect rather than the people trying to destroy.

The good guys finally have an AI that's faster than the attackers.

That's worth paying attention to.

Follow me on X: @crisesarmiento
