Pentagon tech chief says Anthropic is still blacklisted, but Mythos is a separate issue
by Ashley Capoot · CNBC
Key Points
- Defense Department CTO Emil Michael told CNBC Anthropic is still a supply chain risk, but that its Mythos model is a "separate national security moment."
- The agency announced it has entered into agreements with seven other AI companies that will deploy their technology across the agency's classified networks for "lawful operational use."
- Michael's comments come after a heated clash between the DOD and Anthropic spilled into public view earlier this year.
Department of Defense CTO Emil Michael on Friday said Anthropic is still a supply chain risk, but that Mythos, the company's artificial intelligence model with advanced cyber capabilities, is a "separate national security moment."
"I think the Mythos issue that's being dealt with government-wide, not just at Department War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them," Michael told CNBC's "Squawk Box" on Friday.
Michael's comments come after a heated clash between the DOD and Anthropic spilled into public view earlier this year. The DOD declared Anthropic a supply chain risk, which means its technology purportedly threatens U.S. national security, after the two sides failed to agree on how Anthropic's models could be used by the agency.
Because of the supply chain risk designation, defense contractors have to certify that they do not use Anthropic's Claude models in their work with the military. Anthropic sued the Trump administration in March to try to reverse the Pentagon's blacklisting.
It is not clear how the DOD could use Anthropic's Mythos model without violating the supply chain risk designation.
Michael said Friday the DOD still wants guardrails, and that those "are negotiable based on what they are with all the companies, and they have different views on that."
On Friday, the DOD announced it has entered into agreements with seven AI companies that will deploy their technology across the agency's classified networks for "lawful operational use." Those companies are Google, OpenAI, Nvidia, Microsoft, Amazon Web Services, SpaceX, which merged with Elon Musk's xAI, and Reflection, a startup developing open-weight models.
OpenAI announced it struck a deal with the Pentagon hours after Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in late February. OpenAI CEO Sam Altman later conceded that the timing "looked opportunistic and sloppy," in a post on X.
Michael's comments on Friday show that Mythos has complicated the DOD's efforts to cast Anthropic aside.
Anthropic CEO Dario Amodei met with senior Trump administration officials at the White House earlier this month to discuss the model, a meeting both sides characterized as "productive."
Following the meeting, President Donald Trump told CNBC that "it's possible" there will be a deal between Anthropic and the DOD. Trump said the company is "very smart" and could "be of great use."
Despite the supply chain risk designation, the DOD has been using Anthropic's models to support its military efforts in the war in Iran. The National Security Agency, which is under the DOD, is reportedly using Mythos, according to Axios.
"From a national security standpoint, you always have to look at those things," Michael said Friday. "NSA and Commerce evaluates all frontier models, including Chinese frontier models, to see what the capabilities are at the edge."
Anthropic's lawsuits against the Trump administration in San Francisco and Washington, D.C., are still ongoing.