OpenAI locks GPT-5.5-Cyber behind velvet rope despite slamming Anthropic for doing exactly that

The Register / 1 May 2026

Key Points

  • OpenAI has restricted access to its GPT-5.5-Cyber model, keeping it behind a “velvet rope” rather than making it widely available.
  • The move contrasts with OpenAI’s earlier criticism of Anthropic for adopting a similar access-gating approach, which the article frames as hypocrisy.
  • The article characterizes the situation as Altman’s team repeating the same kind of gatekeeping it previously mocked.
  • By limiting availability of a security-focused cyber model, OpenAI is signaling tighter control over higher-risk AI capabilities and over who can access them.
  • The coverage highlights ongoing tension in the AI industry between open availability and controlled distribution of powerful or sensitive models.

Altman's crew now doing the same gatekeeping it recently mocked

Fri 1 May 2026 // 11:42 UTC

OpenAI is lining up a limited release of its new GPT-5.5-Cyber model to a handpicked circle of "cyber defenders," just weeks after taking a swipe at Anthropic for doing almost exactly the same thing.

CEO Sam Altman said in a post on X that the rollout will begin "in the next few days," with access restricted to a group he described as trusted defenders working to secure critical systems. 

"We will work with the entire ecosystem and the government to figure out trusted access for cyber," he wrote, adding that the goal is to "rapidly help secure companies and infrastructure."

GPT-5.5-Cyber is built to spot flaws before attackers can abuse them. OpenAI says it can pentest, find bugs, exploit them, and tear apart malware, but, as we have seen before, tools that can break systems rarely stay in the right hands for long.

OpenAI's announcement comes just weeks after Anthropic rolled out its own cyber-focused model, Claude Mythos, to roughly 50 organizations under tight controls, saying it would never be made publicly available – and Altman was not impressed. 

As reported by TechCrunch, during an appearance on the Core Memory podcast he took aim at what he framed as exclusivity dressed up as caution.

"There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways." He went further, likening the approach to selling fear. "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million."

Now OpenAI is, if not building the same shelter, at least checking IDs at the door.

Independent testing suggests the model is not just marketing fluff. The UK's AI Security Institute said this week that GPT-5.5-Cyber is "one of the strongest models we have tested on our cyber tasks," and noted it is only the second system to complete one of its multi-step attack simulations end to end. 

It may be pitched as protection, but when the tools can both break and fix systems, the difference often comes down to who gets there first. ®
