
Anthropic Sues to Block Pentagon Blacklisting Amid National Security Dispute Over AI Use Restrictions
The AI lab argues the designation is unlawful, risking billions in revenue and raising constitutional questions over government control of technology.
Anthropic filed a lawsuit seeking to prevent the Pentagon from placing it on a national security blacklist, intensifying the artificial intelligence lab’s high-stakes dispute with the US military over restrictions on its technology.
In its filing with a federal court in California, Anthropic argued that the designation was unlawful and infringed its rights to free speech and due process. The company asked the judge to overturn the designation and to block federal agencies from enforcing it.
“These actions are unprecedented and unlawful. The Constitution does not permit the government to use its enormous power to punish a company for protected speech,” Anthropic said.
Last Thursday, the Pentagon issued a formal supply-chain risk designation against Anthropic, restricting use of the start-up’s technology, which had reportedly been employed in military operations in Iran. Defence Secretary Pete Hegseth acted after the company refused to remove safeguards preventing its AI from being used for autonomous weapons or domestic surveillance.
The two parties had engaged in increasingly tense discussions over these restrictions for months. President Trump, in a social media post, instructed the entire government to stop using Anthropic’s AI model, Claude. Axios reported on Monday that the White House is preparing an executive order formally instructing federal agencies to remove Claude from their operations.
The dispute is seen as a test of the administration’s authority over business and whether the government or AI developers ultimately control the technology’s usage.
Notably, Anthropic had actively sought engagement with the US national security apparatus earlier than many other AI companies. CEO Dario Amodei has said he is not opposed to AI-driven weapons but believes current AI technology is not yet accurate enough for them.
Anthropic stressed that the lawsuit does not preclude renewed negotiations with the US government or a potential settlement, emphasising that it does not wish to be in conflict with the authorities. The Pentagon declined to comment on the litigation.
The designation poses a significant threat to Anthropic’s government business and could influence how other AI companies negotiate restrictions on military applications. Amodei clarified, however, that the designation has “a narrow scope,” and businesses may continue using Claude for non-Pentagon projects.
Wedbush analyst Dan Ives warned that some enterprises may pause deployments of Claude until the court case is resolved. Anthropic executives noted in court filings that the blacklisting could reduce the company’s 2026 revenue by billions and harm its reputation.
Finance chief Krishna Rao said the effects of the designation would be “almost impossible to reverse,” while Chief Commercial Officer Paul Smith highlighted losses from a partner switching to a competitor and disrupted negotiations worth roughly $180 million.
Anthropic and its partners maintain that the Pentagon designation applies only to contracts involving the Department of Defence, though Trump’s directive targeted all federal use. A second lawsuit alleges that a broader supply-chain risk designation, pending an interagency review, could lead to blacklisting across civilian agencies as well.
A group of 37 engineers from OpenAI and Google submitted an amicus brief in support of Anthropic, warning that government action could stifle open debate on AI risks and benefits and hinder innovation.
The Pentagon maintains that national defence is governed by US law and insists on the flexibility to use AI for “any lawful purpose,” arguing that Anthropic’s restrictions could endanger lives. Anthropic countered that AI is not yet reliable enough for autonomous weapons and that domestic surveillance would violate fundamental rights.
Following the designation, Anthropic vowed to challenge it in court, calling it legally unsound and a dangerous precedent. Amodei also apologised for an internal memo, previously published by The Information, that criticised Pentagon officials’ views of the company.
The Department of Defence has signed contracts worth up to $200 million with major AI labs, including Anthropic, OpenAI, and Google, over the past year. Shortly after Anthropic’s blacklisting, OpenAI announced a deal to integrate its technology within Defence Department networks, with CEO Sam Altman affirming adherence to human oversight principles and opposition to mass surveillance.