Sam Altman, chief executive officer of OpenAI Inc., at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman said late Friday that his company has agreed to terms with the Department of Defense on the use of its artificial intelligence models, shortly after President Donald Trump said the government won’t work with AI rival Anthropic.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote in a post on X. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
Altman’s post capped a dramatic day for the AI industry, which has found itself at the center of a political debate over how its models can be used. Earlier in the day, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” after weeks of tense negotiations. The label is typically reserved for foreign adversaries, and it would force DoD vendors and contractors to certify that they don’t use Anthropic’s models.
President Trump also directed every federal agency in the U.S. to “immediately cease” all use of Anthropic’s technology.
Anthropic was the first lab to deploy its models across the DoD’s classified network, and had been trying to negotiate the ongoing terms of its contract with the agency before talks collapsed. The company wanted assurance that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the DoD wanted Anthropic to agree to let the military use the models across all lawful use cases.

Altman told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic. He said in his post Friday that the DoD agreed to OpenAI’s restrictions.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
It’s not immediately clear why the DoD agreed to accommodate OpenAI but not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety.
Altman said OpenAI will build “technical safeguards to ensure its models behave as they should,” and that the company will deploy personnel to “help with our models and to ensure their safety.”
“We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman wrote. “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”
Anthropic said in a statement Friday that it was “deeply saddened” by the Pentagon’s decision to label the company a supply-chain risk. It said it intends to challenge the designation in court.