
A conflict between the U.S. Defense Department and Anthropic has intensified after Defense Secretary Pete Hegseth formally designated the company a “supply chain risk to national security,” a move that could jeopardize key federal partnerships.
The standoff stems from Anthropic’s refusal to participate in certain military AI initiatives proposed by the Pentagon, and it shows little sign of easing: the Trump administration has reportedly directed federal agencies to halt use of Anthropic’s AI models. At the same time, OpenAI announced a partnership with the Department of Defense, sharpening the contrast between the two firms’ approaches to military collaboration.
Why the Dispute Began
At the heart of the disagreement are two major concerns raised by Anthropic:
Mass surveillance of U.S. citizens
Deployment of fully autonomous lethal weapons
Anthropic’s CEO, Dario Amodei, has argued that current AI systems are not yet reliable enough to control autonomous weapons safely. He has also raised alarms about privacy and the absence of clear regulatory guardrails governing large-scale surveillance programs.
The company has maintained that its AI safety principles prevent it from participating in projects that could undermine civil liberties or enable unchecked autonomous warfare.
The “Supply Chain Risk” Designation
Labeling Anthropic a “supply chain risk” is an unusually severe action. Historically, such designations have been used against foreign firms viewed as national security threats — not U.S.-based technology companies.
In a post on X, Hegseth stated that no contractor, supplier, or partner working with the U.S. military may engage in commercial activity with Anthropic. The classification could carry financial consequences and disrupt business relationships tied to defense contracts.
Anthropic responded forcefully, describing the move as “legally unsound” and warning it sets a “dangerous precedent for any American company that negotiates with the government.” The company contends that the Defense Secretary lacks the authority to broadly block contractors from using its AI systems outside of specific military-funded projects.
What It Means for Customers
The designation does not amount to a blanket ban across all sectors. According to Anthropic, the restriction applies only to work directly connected to Defense Department projects.
Individual customers and private companies can continue using Anthropic’s AI models without restriction.
Defense contractors, however, may face limitations when using Anthropic’s Claude models for military-funded work.
Despite mounting pressure, Anthropic has reiterated that it will not compromise its AI safety standards. The clash underscores a broader debate within the tech industry over how artificial intelligence should — or should not — be integrated into national defense systems.