Pentagon Designates Anthropic a Supply Chain Risk Over AI Military Dispute

Anthropic on Friday responded after US Defense Secretary Pete Hegseth ordered the Pentagon to designate the Artificial Intelligence (AI) company as a “supply chain” risk.
“This action follows months of discussions that have reached an impasse over two limits we have requested on the use of our AI model, Claude: mass surveillance of Americans and autonomous weapons,” the company said.
“No amount of intimidation or punishment from the Department of Defense is going to change our stance on mass surveillance at home or autonomous weapons.”
In a statement posted on Truth Social, US President Donald Trump said he is ordering all government agencies to stop using Anthropic technology in the next six months. A subsequent X post from Hegseth mandated that all contractors, suppliers, and partners doing business with the US military cease any “commercial activity with Anthropic” effective immediately.
“In conjunction with the President’s order that the Federal Government stop using Anthropic technology, I am directing the Department of the Army to designate Anthropic a Supply Chain Risk to National Security,” Hegseth wrote.
The designation comes after weeks of talks between the Pentagon and Anthropic about the use of its AI models by the US military. In a post published this week, the company said its contracts should not support mass surveillance at home or the development of autonomous weapons.
“We support the use of AI in foreign intelligence and counterintelligence operations,” notes Anthropic. “But using these systems to conduct mass surveillance at home is inconsistent with democratic principles. AI-driven mass surveillance poses serious risks to our basic liberties.”
The company also criticized the position of the US Department of War (DoW), which has said it will work only with AI companies that allow “any legitimate use” of the technology and remove any potential safeguards, as part of efforts to build an “AI-first” military and strengthen national security.
“Diversity, Equity, Inclusion and social considerations have no place in DoW, so we must not use AI models that include ‘editing’ of ideas that interfere with their ability to provide authentic answers to user requests,” a memorandum released by the Pentagon last month reads.
“The Department must also implement models that do not have policy barriers that may limit legitimate military deployments.”
In response to the announcement, Anthropic described it as “legally absurd” and said it would set a dangerous precedent for any American company negotiating with the government. It also noted that a supply chain risk designation under 10 USC 3252 would extend only to Claude’s use as part of DoW contracts, and would not affect Claude’s use by other customers.
Hundreds of employees at Google and OpenAI signed an open letter urging their companies to side with Anthropic in its conflict with the Pentagon over military applications of AI tools like Claude.
The disagreement between Anthropic and the US government comes as OpenAI CEO Sam Altman said OpenAI has reached an agreement with the US Department of Defense (DoD) to include its models in the department’s classified network. Altman also asked the DoD to extend those terms to all AI companies.
“AI security and the broad distribution of benefits are at the heart of our work. Two of our most important security principles are the prevention of mass surveillance at home and human responsibility for the use of force, including autonomous weapons systems,” Altman said in a post on X. “The DoW agrees with these principles; they are reflected in law and policy, and we have incorporated them into our agreement.”