Hot take: AI's not going to kill open source code security
Cal.com considers AGPL a license to drill, but not everyone feels that way
Opinion Cal.com has closed its commercial codebase, abandoning years of AGPL-3.0 licensing in a move that has alarmed the developer community that helped build it and sent ripples through the broader open source world.
"Open source is dead," says Cal.com co-founder and CEO Bailey Pumfleet. But my conversations with top open source developers, such as Linux kernel maintainer Greg Kroah-Hartman, suggest it is not. Neither do I believe it.
Pumfleet made this declaration because the company is moving its main program from the GNU Affero General Public License (AGPL) to a proprietary license; he sees AI as too great a threat to the program's security. Or, as he told me, "AI attackers are flaunting that transparency," so "Open source code is basically like handing out the blueprint to a bank vault. And now there are 100× more hackers studying the blueprint."
If that sounds familiar, it should. It's an ancient argument: that letting people read your code automatically makes it more vulnerable. It wasn't true in the '90s; it's not true now. Consider, if you will, that almost all commercial code today is built on open source. If anything, open source has proven to be far more secure than proprietary code over the years.
Now it is true that AI makes finding security holes easier and faster than ever. In particular, everyone's nervous these days that the Anthropic Mythos Preview will drown the maintainers of smaller open-source projects in a flood of bug reports.
It's also true that some security reports, such as Black Duck's 2026 Open Source Security and Risk Analysis (OSSRA) paper, claim there's been a 107 percent surge in open source vulnerabilities per codebase. Indeed, lending support to Pumfleet's argument, Jason Schmitt, Black Duck's CEO, claims, "The pace at which software is created now exceeds the pace at which most organizations can secure it."
On the other hand, with AI, we can also hope to patch newly discovered security holes as fast as they're found. Cal.com, clearly, doesn't want to take that chance. Or perhaps, as Pumfleet indicated, the company feels it can't afford to.
For, as Drew Breunig, a well-regarded tech strategist, argued in a recent blog post, code security has now come to "a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them."
In a way, this is a restating of Linus's Law. Today, instead of "given enough eyeballs, all bugs are shallow," perhaps it should be restated as "given enough tokens, all bugs are shallow." That presumes, of course, that you can afford enough tokens to stay ahead of your attackers.
Simon Willison, Django co-creator, however, argues, "Since security exploits can now be found by spending tokens, open source is MORE valuable because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private."
Needless to say, some would-be competitors are making hay out of Cal.com's sudden policy shift. Ryan Sipes, Mozilla Thunderbird Product & Business Development Manager, said on YComb: "Our scheduling tool, Thunderbird Appointment, will always be open source. Come talk to us and build with us. We'll help you replace Cal.com."
By and large, though, the developer community isn't buying Cal.com's story.
On Reddit, one person wondered how serious Cal.com has ever been about security. Citing several recent patches for security holes, he commented, "These problems were not the result of sophisticated hacking; they stemmed from fundamental oversights in authentication and access control."
One cynical comment on Slashdot stated, "If the tools are so good that you are afraid they will be used to expose your security flaws... maybe you should use the tools to find the security flaws yourself, and then fix them rather than declaring security through obscurity. This is a fig leaf over the desire to back out of the open-source community now that the product has reached profitability."
Speaking of security by obscurity, Peter Steinberger, creator of OpenClaw, tweeted, "If you look at GPT 5.4-Cyber and its ability for closed source reverse engineering, I have bad news for you." In case you haven't looked at it yet, GPT 5.4-Cyber is OpenAI's answer to Mythos, and OpenAI claims it can reverse engineer binaries back into source code.
If it can deliver on that promise, you can kiss the always bogus "security by obscurity" argument goodbye for good. We'll finally get to see what's really inside Windows – and won't that be fun? And, oh yes, dropping open source to improve your security will stop being a thing.
Mind you, to date, no other companies or projects have followed in Cal.com's relicensing footsteps. I doubt any will.
Yes, AI is radically changing open source programming. I don't pretend to understand what open source coding will look like by this time next year. AI's transformation of programming is too broad for me to even make an educated guess. What I can say, though, is that we'll be better off learning how to use AI and open source together rather than retreating into old, discredited proprietary licensing models. ®