Project Glasswing and open source software: The good, the bad, and the ugly

The Register / 4/10/2026


Key Points

  • The article argues that “Project Glasswing” and related AI-driven security efforts are flooding open-source maintainers with newly discovered vulnerabilities, which can overwhelm teams and processes.
  • It highlights potential upsides of AI-based vulnerability discovery for improving coverage and surfacing issues faster than purely manual workflows.
  • It also critiques downsides such as alert fatigue, variable false-positive rates, and the added triage and remediation burden on FOSS developers.
  • The piece frames the overall impact as mixed: while the discoveries can be valuable, the current delivery and operational mechanics may be harmful without better prioritization and support for maintainers.


Just what FOSS developers need – a flood of AI-discovered vulnerabilities

Fri 10 Apr 2026 // 11:30 UTC

Opinion Anthropic describes Project Glasswing as a coalition of tech giants committing $100 million in AI resources to hunt down and fix long-hidden vulnerabilities in critical open source software that it's finding with its new Mythos AI program. Or as The Reg put it, "an AI model that can generate zero-day vulnerabilities."

Oh boy! Just what we needed. Not just AI security bug slop, but automated, dedicated AI security bug slop!

While Anthropic admits its Claude Opus 4.6 can barely find zero-days, it claims Mythos Preview can pop out working exploits 72.4 percent of the time. It's a good thing Anthropic has limited its use for now; if it lives up to its hype, Mythos could crash the internet in a day.

Project Glasswing is generously offering free access to Mythos Preview, which Anthropic claims "surpasses all but the most skilled humans at finding and exploiting software vulnerabilities." Free, that is, with $100 million in usage credits for Mythos Preview and $4 million in direct donations to open source security organizations. Is that enough money to secure open source software, which underpins an estimated 97 percent of all working software? I doubt it.

Can we believe in Anthropic? The company claims it has found a 27-year-old bug in OpenBSD, a 16-year-old vulnerability in FFmpeg's video encoding code, and a new set of chained exploits in the Linux kernel that enable an attacker to escalate from ordinary user access to complete root control.

I'm not impressed by that. I got my start in programming by finding bugs myself, and I was never any great shakes as a developer. On the other hand, as long-term Linux kernel maintainer Greg Kroah-Hartman told us recently, AI security bug reports suddenly went from slop to useful.

OK, let's say that even in early beta, Mythos is that good at finding bugs. What will that mean? Well, next we need someone to fix those bugs. Who's going to bell that cat?

So I asked people who are a lot smarter than I am about software security and open source software, and this is what they told me.

First, I contacted Daniel Stenberg, founder and lead developer of cURL, where AI slop reports caused his team to stop paying bug bounties. He told The Register: "Yeah, this risk adds more load on countless open source maintainers already struggling." There's the rub.

Stenberg agreed that "AI reporting has gotten a lot better over the last few months. The frequency of old-style, really stupid AI slop reports has gone down significantly." However, he noted, lots of those reports still turn out not to be vulnerabilities but "just bugs," and they tend not to come with fixes or solutions. So even if maintainers like getting bugs reported, receiving a lot of them as security reports adds a significant load.

So, Stenberg continued, even if Mythos is "close to being as good as they claim in their marketing, I figure we will see the maintainer load go up even more soon. As I've pointed out time and again, there are never enough maintainers or financial support for open source projects."

Can't AI itself help? Sure. Dirk Hohndel, Verizon's senior director of open source, posted on LinkedIn that while AI coding tools aren't yet ready to maintain code, he believes they will be soon. "This is almost possible today. And at the rate of improvement these tools have seen over the last couple of quarters, I am convinced that it will be possible with acceptable results at some point this year."

However, Stenberg concluded that, so far, AIs typically aren't nearly as good at fixing problems as they are at finding them, which adds to the imbalance: a handful of monster-sized companies, and armies of users of their tools, fill the inboxes of far fewer and far less well-resourced open source projects. So even when those reports are good, they are a burden.

Dan Lorenc, CEO and co-founder of the security company Chainguard, agreed. He said: "I think Glasswing is exciting, and a careful rollout like this is a responsible way to get these capabilities into the hands of people trying to use them for good. At the same time, projects and enterprises using them probably aren't ready for the influx of real vulnerabilities and patches they're going to need to get out quickly."

Lorenc warned: "It's only a matter of time before others get similarly powerful models out, so everyone is going to have to prepare for an onslaught of work very soon. People can't keep pretending this isn't real or coming."

I then checked in with David Wheeler, director of Open Source Supply Chain Security at the Linux Foundation (LF). The LF, by the way, is one of the groups supporting Glasswing. Wheeler said: "Anthropic is pitching not just 'find' but 'scan and secure.' That is, they're using AI not only to find vulnerabilities, but also to create fixes for them. I think that's key; a good proposed fix makes the report much easier to act on, and it makes it much clearer what the purported vulnerability is."

We'll soon see how good Anthropic is at finding and fixing.

I'm also worried about another issue. Mythos is proprietary software. Oh sure, we all had a look at Anthropic's Claude code, but as Anthropic's lawyers will tell you in big red letters, their code is not open source. So even if Mythos turns out to be the greatest thing in programming since the compiler was invented, doesn't that mean open source software will be locked into a proprietary solution? The very idea gives me the creeps.

Wheeler replied: "Is there a risk of lock-in? Yes, that's always a risk. That said, I don't think the risk is as bad and we're working on ways to address this."

"First: even if the tool is only available for a period of time, if the tool can help us find and eliminate vulnerabilities, that's still a good thing. Software is finite; it has a finite number of defects, and some security defects are more important than others. The more we can eliminate the vulnerabilities, the fewer that can be exploited, even if the service ends or becomes too expensive."

"That said, we do worry about the lock-in. We are also interested in solutions. After all, the new open source software cyber reasoning system (OSS-CRS) emerged from AIxCC and is a standard orchestration framework for building and running LLM-based autonomous bug-finding and bug-fixing systems."

In particular, "OSS-CRS defines a unified interface for CRS development. Build your CRS once by following the development guide, and run it across different environments (local, Azure...) without any modification. We're encouraging people building CRSs to use interfaces like this so they aren't as subject to lock-in. OSS-CRS also makes it easy to run an ensemble (a set of these tools). OSS-CRS does other things, but that hopefully shows that there are ways to mitigate the risk."

Well, we'll see. Personally, I'd be a lot happier if Mythos were open source software. Almost all AI software is, at its roots, based on open source.

That said, we're at an inflection point in AI and software development. Things are changing radically. I have to agree with LF CEO Jim Zemlin, who stated: "The urgency is real. We are in the most dangerous period, the transition, when attackers might gain a significant advantage as the technology ecosystem digests the impact of AI. We have already seen evidence of what smart cybersecurity crews can do when leveraging AI, and witnessed in-the-wild novel exploit kits written with AI assistance. Falling behind is not an option."

All true, but once more, and with feeling, I really, really wish the answer was written in open source code. ®
