Sam Altman slammed Anthropic for restricting Mythos, now OpenAI follows same playbook with GPT Cyber
Sam Altman slammed Anthropic for keeping Mythos out of reach of most people. Now he is doing the same with GPT-5.5 Cyber, OpenAI's rival system, which may be just as powerful, if not more so, and for very similar reasons.
by Divya Bhati · India Today

In Short
- OpenAI is limiting GPT-5.5 Cyber access to vetted professionals
- The model can perform tasks like penetration testing, vulnerability detection, and malware analysis
- OpenAI is trying to restrict the rollout to prevent misuse
Just a few weeks ago, OpenAI CEO Sam Altman took a dig at rival Anthropic for restricting access to its cybersecurity model, Mythos, calling the move “fear-based marketing.” At the time, Altman also argued that by following such tactics, Anthropic was trying to keep powerful AI systems in the hands of a select few.
Cut to the present, and Altman appears to be following a very similar playbook with OpenAI’s GPT-5.5 Cyber, its answer to Mythos, making it available only to select users it calls “critical cyber defenders.”
In a recent post on X, Altman announced that the company is rolling out GPT-5.5-Cyber, a new cybersecurity-focused AI model, to a limited group of users. “We’re starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days,” Altman wrote. He added that OpenAI plans to “work with the entire ecosystem and the government to figure out trusted access for cyber; we want to rapidly help secure companies and infrastructure.”
So, just like Anthropic did with Mythos, OpenAI is keeping its latest 'frontier' cyber model under lock and key. OpenAI is implementing a controlled rollout via its Trusted Access for Cyber (TAC) program, which limits access to vetted cybersecurity professionals and organisations. Prospective users will be required to complete a rigorous application process, providing credentials and specific use cases to ensure the model is used responsibly for defence against cyber-attacks.
What is GPT Cyber?
From what’s been shared so far, GPT-5.5-Cyber is designed as a powerful toolkit for cybersecurity work. It can assist with tasks like penetration testing, identifying and exploiting vulnerabilities, and reverse engineering malware. In practical terms, that means companies can use it to find weaknesses in their systems and fix them before attackers do.
But that same capability is also what makes it risky.
Cybersecurity tools like Cyber and Mythos are inherently dual-use. The same system that helps defenders identify vulnerabilities can also be used by bad actors to exploit them. That’s exactly why companies like Anthropic — and now OpenAI — are opting for restricted releases instead of making such models widely available from day one.
Interestingly, when Anthropic first introduced Mythos with limited access, Altman had been sharply critical of the approach. He even mocked the messaging around it, suggesting it made the model sound more dangerous and exclusive than necessary. “There are people in the world who have wanted to keep AI in the hands of a smaller group of people,” he said during a podcast. “You can justify that in a lot of different ways: ‘We have built a bomb; we will sell you a bomb shelter for $100 million.’”
Yet, the broader reality of deploying powerful AI tools now seems to be nudging even OpenAI in the same direction. Anthropic has partnered with more than 40 companies including Apple, Google and Microsoft for the initial rollout of Mythos under Project Glasswing. Reports suggest it is looking to expand and make Mythos available to 70 more companies, a move that is being closely monitored by the White House. OpenAI hasn’t made any company-specific announcements for Cyber access yet.
The decision by Anthropic — and now OpenAI — also reflects a wider shift across the AI industry. As models become more capable, especially in sensitive areas like cybersecurity, companies are increasingly favouring phased rollouts, external testing, and tighter access controls.