Amazon security boss: AI makes pentesting 40% more efficient

The Register / 4/2/2026




Plus: how to train your human AI

Wed 1 Apr 2026 // 20:00 UTC

interview Amazon has seen a 40 percent efficiency gain by using AI tools to pentest its products before and after launch, according to security chief CJ Moses.

"And I don't think we've hit the hockey stick of efficiency," Moses, the chief information security officer of Amazon Integrated Security, told The Register during an interview at the RSA Conference. "Every year we launch more things, every year the teams needed to be bigger to do the pentesting, and we were in a battle where we couldn't get enough pentesters to do all the work." 

Historically, this has been a very human- and resource-intensive endeavor, costing the cloud and online retail giant "millions and millions of dollars in humans" - both AWS employees and contractors - to proactively find and exploit bugs in products, services, and applications during the development process and before customers used them.

"With the advent of putting AI into play, we've actually become over 40 percent more efficient," Moses said, noting that this efficiency gain comes from human and operating expenses related to pentesting. 

Amazon isn't firing security staff and replacing them with robots, we're told. Instead, it's holding hiring flat while adding more cloud services, features, and lines of code, maintaining the same level of security at a much higher velocity, according to Moses.

Another benefit of AI pentesters, he noted, is that they can continually test for vulnerabilities, even after the products have been released. 

"The idea being that no longer is pentesting at a point in time," Moses said. "It's not even 365 days a year that you're getting one test. It continues to test, looking for next-level access, which is immeasurable from the standpoint of identifying issues, vulnerabilities, daisy chaining of potential vulnerabilities in an automated way, and then presents that as an alert to a human, for them to respond to and make a decision."

Plus, as former NSA cyber boss Rob Joyce told RSAC attendees during his panel, criminals are using AI to find flaws and misconfigurations in your organization, whether or not you are proactively doing this.

"You are going to be red-teamed whether you pay for it or not," Joyce said during a Monday panel. "The only difference is, you know who gets the results delivered to them."

And yes, humans are still very much in the loop. The AI performs the more mundane, data-intensive tasks like vulnerability identification and analysis, and then hands off the decision-making to a human, he explained.

"An example being that if a pentesting AI is pentesting an application, and it finds a vulnerability that will provide further access, you want the AI to ask a human whether it exploits that access," Moses said. "AI is very good at doing things, especially when you have large amounts of data and need that big view. But from a decision-making capability, it isn't something that we're ready to rely on."

According to Moses and his fellow chief information security officers and security firm CEOs, AI is about equal to a 7-year-old in its decision-making skills. "So if you're willing to let your 7-year-old make a decision as to whether they should jump to the next level of pentesting in your company, OK. But you may not want the AI doing that without someone much more experienced and older."
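The human-in-the-loop gate Moses describes — the AI surfaces a finding, but a person decides whether to pivot to deeper access — can be sketched as a simple default-deny control flow. This is an illustrative sketch only, not Amazon's implementation; every name here (`Finding`, `request_human_approval`, and so on) is invented for the example:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A vulnerability surfaced by an automated pentest run (illustrative)."""
    target: str
    description: str
    grants_further_access: bool


def request_human_approval(finding: Finding) -> bool:
    """Stand-in for the alert/handoff step where an operator decides.

    Default-deny: the AI never exploits further access without sign-off.
    """
    print(f"ALERT: {finding.description} on {finding.target} - awaiting operator")
    return False


def handle_finding(finding: Finding) -> str:
    """The AI identifies and analyzes; a human decides whether to go deeper."""
    if finding.grants_further_access and request_human_approval(finding):
        return "exploit"       # operator approved pivoting to next-level access
    return "report-only"       # logged and alerted, but not exploited


result = handle_finding(
    Finding("billing-api", "SSRF reaching internal metadata", True)
)
```

The point of the pattern is that the automated part only ever escalates to an alert; the branch that actually uses the new access sits behind an explicit human decision.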

How to train your AI

CISO discussions during and around the annual cybersecurity conference often have to do with pain points facing company and government security leads, and this year, "it's an AI discussion," Moses said. More specifically, it's about how to secure AI systems and agents, he noted. 

"If you're used to securing humans, you're better able to secure AI," Moses said. "What are the two non-deterministic things that we must secure these days? Humans and AI. Look at your AI the way that you look at securing your humans. How do you secure humans? Training." 

Both need to be trained on what to do - or not to do - when someone calls the IT support desk claiming to be an employee locked out of a SaaS account. However, just like humans, AIs don't always behave consistently despite security training, which is why identity and access controls become vital.

Just as a human employee's key card and credentials should only allow them to access the physical spaces and IT environments needed to do their job, AI agents and systems should also be limited in what they can do. This means creating and managing agentic identities, training the underlying models with the right data to be able to complete the given tasks, and also restricting access to only the systems and data needed to perform specific tasks.
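The key-card analogy amounts to default-deny, allow-list access for each agent identity. A minimal sketch, assuming a toy in-memory policy store — the agent names, actions, and `is_allowed` helper are all hypothetical, not any real IAM API:

```python
# Illustrative only: scoping agent identities to explicit allow-lists,
# like a key card that opens only the rooms the job requires.
AGENT_POLICIES = {
    "pentest-agent": {"scan:staging-web", "read:vuln-db"},
    "triage-agent": {"read:alert-queue", "write:ticket"},
}


def is_allowed(agent_id: str, action: str) -> bool:
    """Default-deny: an action is permitted only if explicitly granted."""
    return action in AGENT_POLICIES.get(agent_id, set())


# The pentest agent may scan staging, but cannot touch customer data,
# and an unknown agent identity gets nothing at all.
assert is_allowed("pentest-agent", "scan:staging-web")
assert not is_allowed("pentest-agent", "read:customer-db")
assert not is_allowed("unknown-agent", "read:vuln-db")
```

Restricting each agent to the systems and data its task needs is the same least-privilege discipline applied to human credentials, just enforced on a non-human identity.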

"You tell them what you want them to know, not anything more," Moses said. "If you tell them something that they don't need to know, they will act on it, they will use it, they will share it with their friends - and AI has friends." ®
