The Trump administration has defended its decision to blacklist AI company Anthropic, arguing that the move was a lawful national security measure rather than a violation of the company’s free speech rights.

Some shit you should know before you dig in: Long story short, the Pentagon and Anthropic (a San Francisco-based AI company that holds contracts with the Defense Department) are at odds after the US military used Anthropic's Claude AI model to assist in the capture of former Venezuelan President Nicolás Maduro. Reports say the model was integrated into classified mission workflows to support the raid. After learning Claude had been used in the operation, Anthropic pressed Palantir (its partner) and the Pentagon on whether the deployment crossed its red lines on mass surveillance or fully autonomous weapons. Defense officials viewed those concerns as resistance to lawful military use, leading the Pentagon to seek a contract adjustment that would allow Claude to be used for "all lawful purposes." Anthropic took issue with that language, arguing it is too broad and could open the door to its AI being used for mass surveillance or in autonomous weapons. The Pentagon countered that it had no interest in doing either, and ultimately announced it would stop using Claude and place the company on a national security blacklist. That prompted Anthropic to file a lawsuit seeking to remove the blacklist, which brings us to where we're at now.

What’s going on now: In a notable development, the Trump administration filed a formal response in federal court defending the Pentagon’s decision and urging the judge to deny Anthropic’s request to block the blacklist. In the filing, government lawyers argue the dispute is rooted in contract negotiations and national security concerns, not free speech, and say Anthropic is unlikely to succeed on its First Amendment claims. They stress that the issue arose only after the company refused to remove restrictions on how its AI could be used.

The government also argues that federal agencies have the authority to choose their contractors and are not obligated to accept terms imposed by private companies. Officials say that Anthropic was attempting to dictate how the military could use its technology, which they say is not permissible, especially in a defense context.

For its part, Anthropic counters that the designation is “unprecedented and unlawful,” arguing in its lawsuit that the government overstepped its authority and violated the company’s free speech and due process rights. Anthropic maintains that it is being punished for its stance on limiting uses like mass surveillance and autonomous weapons.
