Anthropic has filed a lawsuit to prevent the Pentagon from adding the company to a national security blocklist. The suit comes days after the Department of Defense sent Anthropic a letter confirming the company had been labeled a supply chain risk; at the time, CEO Dario Amodei had all but guaranteed Anthropic would fight back with legal action.
The lawsuit claims the designation is unlawful and violated free speech and due process rights. “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said in a statement published by Reuters.
Engadget received the following statement from an Anthropic spokesperson:
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners. We will continue to pursue every path toward resolution, including dialogue with the government.”
The lawsuit characterizes the government’s actions as an “unprecedented and unlawful […] campaign of retaliation,” adding that “no federal statute authorizes the actions taken here.”
Today’s legal action comes after several weeks of back-and-forth between the AI company and the government. In late February, news broke that the Department of Defense and Defense Secretary Pete Hegseth were pressuring Anthropic to remove certain safeguards from its AI systems, but Amodei made it clear the company would refuse to allow its model to be used for mass surveillance or development of autonomous weapons.
On the February 27 deadline, Amodei refused to budge, leading Hegseth to threaten the company with the supply chain risk designation; he also said the US government would cancel its $200 million contract with the company. The same day, President Trump ordered all federal agencies to cease using Anthropic as well. Despite all this, according to the lawsuit, Anthropic had agreed to “collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.”
Anthropic rival OpenAI stepped into this chaos and quickly made a deal with the Department of Defense. At the time, OpenAI CEO Sam Altman said that two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems” — the same issues that got Anthropic in hot water. OpenAI then doubled down on the surveillance issue, writing into its contract that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
Despite this, OpenAI’s head of robotics hardware resigned from the company this weekend in response to the Defense Department deal. Caitlin Kalinowski wrote on X that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

