Trump Orders U.S. Agencies to Stop Using Anthropic AI Tech After Pentagon Standoff
The company had clashed with the military over how officials wanted to use its cutting-edge A.I. model. The order could vastly complicate intelligence analysis and defense work.
By Julian E. Barnes and Sheera Frenkel · The New York Times

President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate government intelligence analysis and defense work.
Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”
Shortly after Mr. Trump’s announcement, and 13 minutes after a Pentagon deadline, Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security.” The label means that no contractor or supplier that works with the military can do business with Anthropic. Later on Friday, OpenAI, the maker of ChatGPT, said that it had reached an agreement with the Defense Department to provide its A.I. technology for classified systems.
The Anthropic designation was all but unheard-of, legal experts said. It stripped an American company of its government work by using a process previously deployed only with foreign companies the United States considered security risks.
Anthropic said in a statement on Friday that it would challenge the move in court.
“Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for U.S. adversaries, never before publicly applied to an American company,” the statement said, adding, “We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
The statement went on to say that the company had tried “in good faith” to reach an agreement with the Pentagon, and that Anthropic supported “all lawful uses of A.I. for national security,” aside from two exceptions.
“We held to our exceptions for two reasons,” the statement read. “First, we do not believe that today’s frontier A.I. models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.”
For days, Anthropic and the Pentagon had been locked in an escalating battle over how cutting-edge artificial intelligence technology would be used, and how it could aid military operations. The Pentagon demanded that Anthropic provide unfettered access to its A.I. system without the safeguards the company wanted.
Negotiations continued throughout Friday, but one person briefed on the talks said that there appeared to be little urgency from the Silicon Valley firm to reach a deal.
Mr. Trump’s statement, which came as the Pentagon and Anthropic were continuing to discuss a compromise, took Anthropic officials by surprise, according to people briefed on the discussions.
Calling the company “Leftwing nut jobs,” Mr. Trump said it had made a mistake trying to strong-arm the Pentagon.
Still, Mr. Trump announced a “Six Month phase out” for the Pentagon and some other agencies, which could allow for more extended negotiations between Anthropic and the Defense Department.
While some current and former American officials had expressed hope of some sort of deal before the Pentagon’s 5:01 p.m. deadline on Friday, Mr. Trump’s comments undoubtedly complicated matters.
Democratic lawmakers quickly rallied to Anthropic’s side. Senator Mark Warner of Virginia, the top Democrat on the Intelligence Committee, said Mr. Trump and Mr. Hegseth were trying to intimidate a leading American company, actions that posed a risk to defense readiness.
“The president’s directive to halt the use of a leading American A.I. company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” Mr. Warner said.
Experts lamented the turn of events.
“This is a dark day in the history of American business,” said Dean Ball, a former White House A.I. adviser for the Trump administration who now works as a senior fellow at the Foundation for American Innovation.
“The message sent by the supply chain risk designation to businesses, investors and global partners could not be worse,” he added. “And on top of that, this is the most aggressive government regulation of A.I. ever taken anywhere in the world.”
Defense Department officials were already criticizing Anthropic’s leader after the company on Thursday rejected their latest offer to settle the dispute. On Thursday evening, Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who earlier in the day had released a statement about why the company would not agree to the Defense Department’s latest terms.
“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
On the surface, the battle between the Pentagon and Anthropic is a contract dispute over technical details of how the artificial intelligence model works, and over the military’s use of it. But as Mr. Trump’s comments showed, it has also ballooned into a political fight.
The Pentagon wants all its contractors to adhere to a single standard — that the military can use what it buys however it wants, as long as it complies with the law. But Pentagon officials have also been happy to beat up on tech companies, particularly ones the Trump administration has branded as “woke.”
For Anthropic, a firm that prioritizes both national security and technological safety, the political stakes are high.
Employees at the company have cheered their chief executive’s firm stance. And in a rare moment of unity across Silicon Valley A.I. companies, employees at OpenAI and Google, both Anthropic competitors, signed letters backing Anthropic’s position.
One letter published Thursday was signed by nearly 50 employees at OpenAI and 175 at Google. It criticized the Pentagon’s negotiating tactics and called on its leaders to “put aside their differences and stand together to continue to refuse the Department of War’s current demands.”
“They’re trying to divide each company with fear that the other will give in,” the letter said.
In its initial compromise offer, the Pentagon said on Thursday that it had no interest in using Anthropic’s model on classified systems for either mass surveillance or fully autonomous weaponry. But in rejecting that offer, Anthropic said the Pentagon’s assurance that it would not use the model, called Claude, for those purposes was undercut by the contract’s legal language.
“In a narrow set of cases, we believe A.I. can undermine, rather than defend, democratic values,” Mr. Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
The Pentagon had weighed invoking the Defense Production Act to compel Anthropic to let it use Claude, by designating the company as critical to national security. But Mr. Trump’s social media post made clear that the government intended to move on from Claude. So the Pentagon instead announced that the company was a supply chain risk.
While Mr. Hegseth also announced a six-month transition period, he ended his post by saying that his decision was final.
But experts said that the Pentagon’s use of a tool meant for foreign companies on American firms posed its own set of complications.
“The problem with using the designation of a supply chain risk is that it waters down that tool,” said Jessica Tillipman, a government contracts and A.I. expert at George Washington University’s law school. The Defense Department would be “transforming what is designed to be a national security tool into a point of leverage for a business use,” she added.
While many of the uses of artificial intelligence to assist military operations on the ground are still in a developmental stage, the models are actively used for intelligence analysis. Forcing Claude off government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts. It could also hamper C.I.A. analysts searching for patterns in intelligence reports.
Former officials have said C.I.A. analysts are eager to find a way to continue using Claude, which has sped up their work and deepened their analysis. But even before Mr. Trump’s comments, officials had warned that an order from the president could force the agency to find other solutions.
The Pentagon is ready to move forward with Grok, produced by Elon Musk’s xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching A.I. software would take time and almost certainly cause disruption.
Tripp Mickle, Kate Conger and Cade Metz contributed reporting from San Francisco.