OpenAI locks GPT-5.5-Cyber behind velvet rope despite slamming Anthropic for doing exactly that
Altman's crew now doing the same gatekeeping it recently mocked
by Carly Page · The Register

OpenAI is lining up a limited release of its new GPT-5.5-Cyber model to a handpicked circle of "cyber defenders," just weeks after taking a swipe at Anthropic for doing almost exactly the same thing.
CEO Sam Altman said in a post on X that the rollout will begin "in the next few days," with access restricted to a group he described as trusted defenders working to secure critical systems.
"We will work with the entire ecosystem and the government to figure out trusted access for cyber," he wrote, adding that the goal is to "rapidly help secure companies and infrastructure."
GPT-5.5-Cyber is built to spot flaws before anyone else can abuse them. OpenAI says it can run penetration tests, find bugs, exploit them, and tear apart malware – though, as we have seen before, tools that break systems rarely stay in the right hands for long.
OpenAI's announcement comes just weeks after Anthropic rolled out its own cyber-focused model, Claude Mythos, to roughly 50 organizations under tight controls, saying it would never be made publicly available – and Altman was not impressed.
As reported by TechCrunch, he took aim at what he framed as exclusivity dressed up as caution during an appearance on the Core Memory podcast.
"There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways." He went further, likening the approach to selling fear. "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million."
Now OpenAI is, if not building the same shelter, at least checking IDs at the door.
Independent testing suggests the model is not just marketing fluff. The UK's AI Security Institute said this week that GPT-5.5-Cyber is "one of the strongest models we have tested on our cyber tasks," and noted it is only the second system to complete one of its multi-step attack simulations end to end.
It may be pitched as protection, but when the tools can both break and fix systems, the difference often comes down to who gets there first. ®