A major AI company entangled in a growing dispute with the US government over its restrictions on military uses of its artificial intelligence has filed lawsuits challenging a Pentagon decision to blacklist it.
Some shit you should know before you dig in: Long story short, the Pentagon and Anthropic (a San Francisco-based AI company that has contracts with the Defense Department) are at odds after the US military used Anthropic’s Claude AI model to assist in the capture of former Venezuelan President Nicolás Maduro. Reports say the model was integrated into classified mission workflows to support the raid. After learning it was used in the operation, Anthropic pressed Palantir (its partner) and the Pentagon on whether the deployment crossed its red lines on mass surveillance or fully autonomous weapons. Defense officials viewed those concerns as resistance to lawful military use, leading the Pentagon to seek a contract adjustment that would allow Claude to be used for “all lawful purposes.” Anthropic took issue with that language, arguing it is too broad and could allow the government to use its AI for mass surveillance or in autonomous weapons. The Pentagon countered that it had no interest in doing either, and ultimately announced it would stop using Claude and put the company on a national security blacklist.
What’s going on now: In a notable development, Anthropic has filed two federal lawsuits challenging the Pentagon’s decision to label the company a “supply chain risk,” a designation that effectively blocks defense contractors from using its AI systems in work for the Department of Defense. The lawsuits, filed in the US District Court for the Northern District of California and the US Court of Appeals for the District of Columbia Circuit, argue that the government’s actions are unlawful and were carried out in retaliation for Anthropic’s stance on the limits of artificial intelligence in warfare and surveillance. The company is asking the courts to overturn the designation and prevent federal agencies from enforcing it.
In its filings, Anthropic claims the government violated its constitutional rights, including its First Amendment right to express views about the safety and appropriate use of its own technology. The company argues that the federal government is improperly using its authority to punish Anthropic for maintaining safeguards on how its AI system, Claude, can be deployed. According to the lawsuit, the Constitution does not allow the government to wield its power to retaliate against a company for its protected speech or for adhering to publicly stated safety principles about emerging technology.
Anthropic’s lawyers also argue that the Pentagon misused the “supply chain risk” designation, which was originally designed to address threats posed by foreign adversaries that might compromise US national security systems. In the complaint, the company says the law is narrow and does not apply to an American firm engaged in a policy dispute with the government.
The Pentagon, however, has defended its actions by arguing that a private technology company cannot dictate how the US military uses tools in national security operations. Defense officials have said the government must retain full flexibility to use AI systems for any lawful purpose and warned that restrictions imposed by a private firm could hinder military operations or endanger American lives.
More to come.