OpenAI CEO Sam Altman announced Friday that his company has reached an “agreement” with the Department of War to deploy its models within classified government networks, framing the deal as consistent with core safety principles. In a statement, Altman emphasized mutual commitments to prohibiting domestic mass surveillance and to preserving human responsibility for the use of weapons, noting that both parties share these values in policy and in practice. The agreement includes technical safeguards, such as deploying AI guardrails exclusively on cloud networks and embedding “FDEs” to ensure model safety during deployment. Altman added that OpenAI urges all AI companies to adopt similar terms, as part of a broader goal of de-escalating legal tensions and moving toward “reasonable agreements.”
The announcement follows President Trump’s directive that federal agencies, including the Department of War, halt their use of Anthropic technology, escalating a months-long standoff over military AI applications. Trump had previously mandated a six-month transition period away from Anthropic systems, warning that the company would face “major civil and criminal consequences” if it failed to comply during the phaseout. Secretary of War Pete Hegseth subsequently labeled Anthropic a “supply-chain risk to National Security” and instructed the Department of War to cease all commercial activity with the company effective immediately, while allowing a six-month window for transition services.
Anthropic CEO Dario Amodei had previously rejected DoW demands that the company permit AI use for “all lawful purposes,” citing the risks of mass domestic surveillance and autonomous weapons systems. In his statement, Amodei stressed that Anthropic has never opposed specific military operations, but argued that two narrow use cases, neither tied to current technology capabilities, should remain excluded from contracts. OpenAI’s national security team later confirmed that the relationship with the government had deteriorated because of Amodei’s public posts, which reportedly “got the department upset.”
Anthropic, founded by former OpenAI members who left over safety concerns, was the only major commercial AI firm approved for Pentagon deployment, through a partnership with Palantir. The dispute centers on the very limitations Anthropic proposed for DoW use, a stance Altman described as more robust than prior classified AI agreements.