Anthropic, a San Francisco–based artificial intelligence startup, has filed suit in federal court in California challenging a rare Pentagon decision to designate the company as a “supply-chain risk.” The complaint, lodged this week, calls the designation “unprecedented and unlawful,” alleging that the action has already led to the cancellation of federal contracts, imperiled future commercial deals worth potentially hundreds of millions of dollars, and inflicted reputational harm that cannot be repaired without judicial relief.
The dispute marks a dramatic rupture between the administration now led by Donald Trump and a company that until recently was an early collaborator with the U.S. government. Anthropic’s Claude model had been integrated deeply into Department of Defense systems and was, until the recent move, the only commercial AI model approved for use on classified networks. The company points to a $200 million contract signed in 2025 as proof of its previous standing as a trusted supplier. Under the Pentagon’s new requirement, defense contractors must certify that their systems do not rely on Anthropic models.
At the heart of the confrontation are sharply divergent views on operational control and ethical guardrails. The Defense Department insists it must be able to use deployed AI “for all lawful purposes” without vendor-imposed restrictions, arguing that limits on functionality could endanger service members and hinder commanders’ ability to act. Anthropic counters that it will not allow its models to be used in fully autonomous weapons or for broad domestic surveillance, and that the government’s action has violated its rights and business interests.
The designation and the ensuing lawsuit carry wider significance beyond a single corporate-government spat. The “supply-chain risk” label is historically reserved for companies tied to foreign adversaries, so applying it to a U.S. AI firm sets a new precedent for how national-security tools are defined and policed. It also signals a potential shift toward politicized gatekeeping of suppliers whose technologies straddle commercial and defense uses, raising questions about how allies and private partners will view engagement with U.S. AI vendors.
For the Pentagon, severing or constraining ties with a provider already embedded in classified systems poses immediate operational dilemmas. Forced migration away from Claude or other contested models would be costly, time-consuming and might temporarily degrade capabilities that military planners have come to rely on. For the technology industry, the episode intensifies an existing tension: companies that try to retain ethical limits on their products risk being frozen out of sensitive but lucrative government contracts, while unfettered access invites public and regulatory scrutiny.
Anthropic is seeking a court order rescinding the designation and a temporary injunction to block the Pentagon’s measures while litigation proceeds. The case will test how American courts balance executive-branch national-security prerogatives against due-process and commercial harms, and it could help define the legal contours of private-sector responsibility over dual-use AI systems. Whichever way it is decided, the outcome will reverberate through the defense-tech ecosystem and influence how the United States governs advanced AI at the intersection of security and civil liberties.
