
A U.S. appeals court has refused to halt the Pentagon’s controversial blacklisting of AI company Anthropic, intensifying a high-stakes legal battle that could reshape how artificial intelligence firms work with the military.
The decision by the U.S. Court of Appeals for the District of Columbia Circuit delivers an early win for the Trump administration and exposes a growing split in the judiciary, as a California federal court previously sided with the AI firm.
Court Rejects Emergency Block Request
On Wednesday, a three-judge panel declined Anthropic’s request to temporarily pause its designation as a national security supply-chain risk. The ruling allows the Pentagon’s restrictions to remain in place while the broader lawsuit continues.
Importantly, the decision is not final—it only addresses whether the policy should be blocked during litigation.
What the Blacklisting Means
The designation, issued by Defense Secretary Pete Hegseth, effectively prevents Anthropic from securing Pentagon contracts and could expand into a government-wide ban.
Anthropic—best known for its Claude AI assistant—argues the move is punitive. The company says it refused to remove internal safeguards that restrict its AI from being used in military surveillance or autonomous weapons systems.
Executives warn the fallout could cost billions in lost revenue and severely damage the company’s reputation.
Free Speech vs. National Security
At the heart of the dispute is a broader constitutional fight. Anthropic claims the blacklisting violates its First Amendment rights and Fifth Amendment protections, arguing it is being punished for its stance on AI ethics.
The company also says it was not given a fair opportunity to challenge the designation before it was imposed.
However, the Justice Department disputes those claims. Officials—including Acting Attorney General Todd Blanche—argue the decision is based strictly on contract compliance, not ideology. They maintain that Anthropic’s refusal to loosen AI usage restrictions creates unacceptable risks for military operations.
Conflicting Court Decisions Raise Stakes
The ruling sharply contrasts with a March 26 decision from a California federal court, which temporarily blocked a related Pentagon order. That judge suggested the government’s actions may amount to unlawful retaliation against the company’s views.
The conflicting decisions now set up a legal showdown that could escalate to higher courts.
Why This Case Matters
This marks the first known instance of a U.S.-based company being labeled a supply-chain risk under laws typically used to guard against foreign threats.
The outcome could have far-reaching implications for:
- How AI companies set ethical limits on their technology
- Government authority over private tech firms
- The future of defense contracting in the AI era
With billions at stake and fundamental questions about AI governance on the line, the case is poised to become a landmark battle at the intersection of technology, national security, and constitutional rights.