Using AI to code does not mean your code is more secure

The Register / 3/27/2026


Key Points

  • The article argues that using AI coding assistants has not made software inherently more secure; security ultimately depends on validation, review, and engineering practices.
  • It notes that as adoption of AI-generated code has surged, reported vulnerabilities associated with that code have also increased.
  • The piece highlights a risk that AI-assisted development can produce insecure patterns or defects that may be overlooked without proper testing and security guidance.
  • It implies organizations should treat AI code as a productivity aid rather than a security guarantee, reinforcing the need for secure coding reviews, static analysis, and testing.
  • The overall takeaway is that security outcomes require deliberate processes regardless of whether code is written or suggested by humans or AI tools.


Use of AI coding assistants has surged, but so has the number of vulnerabilities in AI-generated code

Thu 26 Mar 2026 // 19:38 UTC

As more people use AI tools to write code, the tools themselves are introducing more vulnerabilities.

Researchers affiliated with Georgia Tech SSLab have been tracking CVEs attributable to flaws in AI-generated code.

Last August, they found just two advisories that could be definitively linked to Claude Code – CVE-2025-55526, a 9.1-severity directory traversal vulnerability in n8n-workflows, and GHSA-3j63-5h8p-gf7c, an improper input handling bug in the x402 SDK.

In March, they identified 35 CVEs – 27 of which were authored by Claude Code, 4 by GitHub Copilot, 2 by Devin, and 1 each by Aether and Cursor.

Claude Code's overrepresentation appears to follow from its recent surge in popularity. In the past 90 days, Claude Code has added more than 30.7 billion lines of code to public repositories, according to Claude's Code, an analytics website created by software engineer Jodan Alberts.

The Georgia Tech researchers started their measurements on May 1, 2025, and as of March 20, 2026, the CVE scorecard reads:

  • 49 for Claude Code (11 critical)
  • 15 for GitHub Copilot (2 critical)
  • 2 for Aether
  • 2 for Google Jules (1 critical)
  • 2 for Devin
  • 2 for Cursor
  • 1 for Atlassian Rovo
  • 1 for Roo Code

That's 74 CVEs attributable to AI-authored code out of 43,849 advisories analyzed.

Hanqing Zhao, a researcher with the Georgia Tech SSLab, told The Register in an email that those 74 AI-linked CVEs should be read as a lower bound rather than a true proportion.

"Those 74 cases are confirmed instances where we found clear evidence that AI-generated code contributed to the vulnerability," he said. "That does not mean the other ~50,000 cases were human-written. It means we could not detect AI involvement in those cases.

"Take OpenClaw as an example. It has more than 300 security advisories and appears to have been heavily vibe-coded, but most AI traces have been stripped away. We can only confidently confirm around 20 cases with clear AI signals. Based on projects like that, we estimate the real number is likely 5 to 10 times higher than what we currently detect."

Zhao said the low CVE count should not be read as evidence that AI coding tools produce more secure code.

"Claude Code alone now appears in more than 4 percent of public commits on GitHub," he explained. "If AI were truly responsible for only 74 out of 50,000 public vulnerabilities, that would imply AI-generated code is orders of magnitude safer than human-written code. We do not think that is credible."

The low number, he said, "reflects detection blind spots, not superior AI code quality."
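Zhao's back-of-envelope argument can be checked numerically. The sketch below uses the article's own figures (74 confirmed cases, roughly 50,000 advisories, Claude Code alone in more than 4 percent of public commits) and makes one simplifying assumption not stated in the piece: that vulnerabilities should scale roughly with a tool's share of code written.

```python
# Back-of-envelope check of Zhao's argument, using figures from the article.
# Assumption (ours, for illustration): vulnerability counts should scale
# roughly in proportion to the share of code each party writes.

total_advisories = 50_000   # advisories analyzed (approximate, per Zhao)
ai_confirmed = 74           # advisories with confirmed AI involvement
ai_code_share = 0.04        # Claude Code alone: >4% of public commits

ai_vuln_share = ai_confirmed / total_advisories   # share of vulns tied to AI
human_vuln_share = 1 - ai_vuln_share
human_code_share = 1 - ai_code_share

# Vulnerabilities per unit of code share, AI vs. human
ai_rate = ai_vuln_share / ai_code_share
human_rate = human_vuln_share / human_code_share

print(f"AI share of vulnerabilities: {ai_vuln_share:.2%}")
print(f"Implied human/AI vulnerability-rate ratio: {human_rate / ai_rate:.0f}x")
```

Taken at face value, the numbers would imply AI-written code is tens of times safer per line than human-written code – which is exactly the implausible conclusion Zhao points to as evidence of detection blind spots rather than superior AI code quality.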

The Georgia Tech findings amplify research published in November 2024 by Georgetown University's Center for Security and Emerging Technology.

Based on tests of GPT-3.5-turbo, GPT-4, Code Llama 7B Instruct, WizardCoder 7B, and Mistral 7B Instruct, the Georgetown researchers found, "Across all five models, approximately 48 percent of all generated code snippets were compilable but contained a bug that was flagged by ESBMC [the Efficient SMT-based Context-Bounded Model Checker], which we define as insecure code."

About 30 percent of the generated code snippets passed ESBMC verification and were deemed secure.

Zhao said the amount of AI-generated code being committed is surging. "End-to-end coding agents are taking off right now," he explained. "Claude Code alone has over 15 million total commits on GitHub, accounting for more than 4 percent of all public commits.

"Partly that reflects more people using AI tools. But it's not only volume. The way people use these tools is changing. A year ago most developers used AI for autocomplete. Now people are vibe coding entire projects, shipping code they've barely read. That's a different risk profile." ®
