In Washington’s new AI battlefield, one company got a contract… and another got cut off.
In a dramatic pivot in the US government’s embrace of AI, OpenAI has sealed a new agreement with the Pentagon to deploy its AI models on the Department of Defense’s classified networks under specific safety guardrails, just hours after the administration moved to bar rival Anthropic from federal use.
President Donald Trump directed all federal agencies to stop using Anthropic’s technology, citing national security concerns after months of tense negotiations over military access and ethical safeguards — a standoff that ultimately left Anthropic on the outside while OpenAI secured its position as a key defense partner.
OpenAI steps into the defense spotlight
Under the new arrangement, OpenAI will provide its advanced AI models for use on the Pentagon’s classified systems, allowing defense teams to build secure applications for logistics, intelligence analysis, cybersecurity, and operational planning.
Unlike the breakdown with Anthropic, OpenAI agreed to a framework that permits lawful military use while maintaining defined safety guardrails and usage policies.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” OpenAI CEO Sam Altman wrote on X. “We also will build technical safeguards to ensure our models behave as they should, which the (Department of War) also wanted.”
The deal builds on OpenAI’s earlier $200 million Department of Defense contract awarded in 2025 alongside Anthropic, xAI, and Google. While Anthropic ultimately resisted expanded access tied to domestic surveillance and autonomous weapons concerns, OpenAI negotiated terms that satisfied federal requirements without publicly breaking from its stated safety principles.
With Anthropic sidelined, OpenAI now stands as one of the primary AI providers embedded within US defense infrastructure — a shift that could reshape how frontier AI models are integrated into national security operations.
The bone of contention with Anthropic
In a Feb. 26 statement published by the company, Anthropic CEO Dario Amodei spelled out the two demands the company has refused to yield to:
- The first concerns the use of AI for domestic surveillance. Amodei noted that while Anthropic supports the use of AI for foreign intelligence and counterintelligence purposes, using the same tools for domestic surveillance is incompatible with democratic values.
- The second concerns the reliability of AI in fully autonomous warfare. Amodei argues that AI currently remains too unreliable for safe, fully autonomous deployment, and that Anthropic will not risk American lives by granting autonomous control to systems that still require human involvement. The company's offer to conduct R&D on improving reliability for autonomous warfare was, however, turned down by the Pentagon.
In his Truth Social post, President Trump said that Anthropic had made a “disastrous mistake trying to strong-arm the Department of War.” While announcing the contract cutoff, he stressed that the US government would not do business with Anthropic again.
Confirming President Trump’s announcement, Defense Secretary Pete Hegseth also designated Anthropic a Supply-Chain Risk to National Security via a post on X:
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Hegseth added: “Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
Anthropic condemned its designation as a Supply-Chain Risk to National Security. In a Feb. 27 response published by the company, Anthropic called the designation “legally unsound” and said it would challenge it in court. While the company expressed continued interest in supporting US national security, it has remained resolute in its stance.
For more on Washington’s AI policy shifts and how Big Tech is responding to energy cost concerns tied to AI data centers, check out TechRepublic’s coverage of Trump’s new Ratepayer Protection Pledge.