AI bug reports went from junk to legit overnight, says Linux kernel czar

The Register / 3/26/2026


Key Points

  • Linux kernel maintainer Greg Kroah-Hartman says AI-generated bug reports have recently improved from low-quality “junk” to genuinely useful submissions.
  • He notes he cannot precisely explain the sudden quality inflection point, but observes that the trend is continuing rather than reversing.
  • The shift suggests that AI-assisted workflows for identifying and reporting kernel issues may be becoming more reliable in practice.
  • The article implies ongoing momentum for using AI to support infrastructure and maintenance processes, despite the underlying cause remaining unclear.

Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away

Thu 26 Mar 2026 // 13:40 UTC

Interview I was at a press luncheon at KubeCon Europe this week when, to my surprise, who should sit down next to me but longtime Linux kernel maintainer Greg Kroah-Hartman. Greg, who lives in the Netherlands these days, was there to comment briefly on AI, Linux, and security. We spoke about how, over the last month, AI-driven activity around Linux security and code review has "really jumped" in a way no one in the open source world saw coming.

"Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality," he said. "It was kind of funny. It didn't really worry us." Of course, there are many Linux kernel maintainers, so for them AI slop isn't as burdensome as it is for, say, cURL, whose founder and lead developer Daniel Stenberg saw AI slop reports push the cURL team to stop paying bug bounties.


Things have changed, Kroah-Hartman said. "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now."

No one is quite sure what's behind it. Asked what changed, Kroah-Hartman was blunt: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, 'Hey, let's start looking at this.' It seems like lots of different groups, different companies." What is clear is the scale. "For the kernel, we can handle it," he said.

"We're a much larger team, very distributed, and our increase is real – and it's not slowing down. These are tiny things, they're not major things, but we need help on this for all the open source projects." Smaller projects, he implied, have far less capacity to absorb a sudden flood of plausible AI-generated bug reports and security findings – at least now they're real bugs and not garbage ones.

Behind the scenes, security teams are comparing notes. "We get together informally and talk a lot, because we all have the same problems," he said. "There must have been some inflection point somewhere with the tools. Did the local tools get better? Did people figure out something? I honestly don't know."

For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches.

"I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better."

Developers are starting to acknowledge AI's role in actual submissions. "We're seeing some patches being generated," Kroah-Hartman said. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."
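For context, kernel commit messages carry attribution as trailer lines at the end of the message; `Co-developed-by:` and `Signed-off-by:` are long-standing, documented trailers. The article doesn't spell out the exact form of the AI attribution tag Kroah-Hartman mentions, so the snippet below is a purely hypothetical illustration of trailer style, with invented names and a made-up subject line:

```text
Subject: [PATCH] example: fix error-path leak in example_probe()

Free the allocated buffer when registration fails.

(Hypothetical illustration only; names and tool attribution invented.)
Co-developed-by: Example AI Tool
Signed-off-by: Jane Dev <jane@example.com>
```

Whatever the precise tag, the point is that AI involvement is recorded in the patch itself, so maintainers reviewing history can see which changes had machine assistance.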

Asked whether he could imagine a near-future where most of the work on simple changes comes from AI, he said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today.

The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation.

"We need an easy way to review some of these patches as they come in, in ways that cut down on our load," he said. The tool is "out there, running on almost all kernel patches. You can see it publicly. We're integrating it into our review tools. It's available for anybody to use."

That work builds on earlier efforts inside specific subsystems. "The networking and the BPF people have been doing LLM-generated reviews for a while," said Kroah-Hartman. "The Direct Rendering Manager (DRM) people and now Google's tool are pulling all those into one common interface," he explained. "Different subsystems are adding better skills or prompts – for storage, here are the things you need to look for; for graphics, here are the things you need to look for. People are contributing in a public place for that, which is how it should be. This is very good."

Kroah-Hartman credited longtime kernel developer Chris Mason, now at Meta, with pioneering AI-based review workflows. Mason has been running AI review for eBPF and networking for some time. The systemd project is also using the same class of tools for its all-C codebase.

AI reviewers, he stressed, are additive rather than authoritative. "On the review side, it's generating some good reviews. It doesn't get you everything. Some things are still wrong. But it does point out a lot of the obvious things," he said.

One of the biggest immediate wins is turnaround time. When an AI reviewer flags obvious problems, submitters get feedback long before a human maintainer would realistically read the patch. "If I see it respond to something, it gives feedback to the submitter faster than the maintainer had a chance to, which is nice," Kroah-Hartman said. "We have a number of bots that run on patches as it is. If I see those fail, I just know I don't even need to look at that as a maintainer. And it gives the developer, 'Oh, I can go do another version tomorrow,' which helps increase the feedback a little better."

Still, as AI-generated reports and patches grow, so does the review burden. "It's more reviews; it's more stuff we have to review for the kernel," he said. That's why efforts with the OpenSSF and its Alpha-Omega program matter. "We're working to try and create tools to help make it easier for maintainers to handle this incoming feed and deal with it."

A recurring theme for Kroah-Hartman is equity of access. Until recently, only well-resourced subsystems could afford to run heavy AI tooling at scale. Turning Google's review system into a Linux Foundation project is meant to change that.

"That's this one tool that we have for the review," he said. "It's one tool as an example of how now, as an LF project, we're giving access to everybody. Before, it was just the subsystems that had the resources to run it on the back end. Right now, we're giving it to everyone." Work is already underway to make it usable beyond the kernel's own infrastructure.

That matters because, as Kroah-Hartman keeps emphasizing, the AI wave is not just a kernel problem. "All open source projects have real reports that are made with AI," he said. "Our increase is real, and it's not slowing down. These aren't major things, but we need help on this for all the open source projects."

For Linux, the relationship with AI is already evolving past theory and into practice. It's a mixed blessing: AI is simultaneously a new source of genuine vulnerability reports that strain the human reviewers who must deal with them, and a tool that helps manage that strain.

The trick for Kroah-Hartman and his peers will be to keep AI as a force multiplier, without drowning the open source maintainers. ®
