US court expedites Anthropic's legal battle with Department of Defense
The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk - a label typically reserved for organisations from unfriendly foreign countries.
CNA
WASHINGTON: A US appeals court on Wednesday (Apr 8) denied Anthropic's request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the AI startup's legal battle with the Department of Defense to be put on a fast track.
"On one side is relatively contained risk of financial harm to a single private company," the three-member appellate panel reasoned.
"On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."
The AI startup sought a stay of the action in the appellate court and also sued the Department of Defense in federal court in Northern California.
The appellate panel stated in its ruling that requiring the Department of Defense to prolong its use of Anthropic AI directly or through contractors "strikes us as a substantial judicial imposition on military operations".
However, the appeals court agreed that Anthropic raised "substantial challenges" to the sanctions and ordered that proceedings in the underlying case be expedited.
"We're grateful the court recognised these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told AFP.
"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
In the suit filed in San Francisco, federal Judge Rita Lin temporarily froze the sanctions, reasoning that President Donald Trump's administration likely violated the law in blacklisting the AI powerhouse for expressing unease about the Pentagon's use of its technology.
In her ruling, she said the government's designation of Anthropic as a supply chain risk was "likely both contrary to law and arbitrary and capricious".
The dispute erupted in February after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
The tech sector has largely supported Anthropic in the wake of the punitive measures.