Anthropic's chief executive, Dario Amodei, said on February 26 that the company will not accede to the Pentagon's demand for unrestricted use of its large language model, Claude. The refusal came after what reports in U.S. political outlets described as intense pressure from the Department of Defense, which allegedly threatened to label Anthropic a "supply chain risk" and to invoke the Defense Production Act if the company did not lift its safety constraints.
Amodei framed the company's stance in moral and technical terms, acknowledging that military decision-making appropriately lies with the U.S. Department of Defense while arguing that certain applications of AI are beyond current safety guarantees. He specified two categories that Anthropic will not countenance: mass domestic surveillance within the United States, and fully autonomous weapon systems that select and engage targets without human intervention.
According to Amodei, the Pentagon informed contractors it would sign agreements only with firms willing to permit AI use for "any lawful purpose" and to remove certain safeguards in the scenarios Anthropic rejects. The company says the threats included removing Anthropic from defense procurement channels and applying a label historically used to describe foreign adversaries' suppliers—a step that, if taken, would be unprecedented for a U.S. firm.
The confrontation intensified after media reports that Claude was used by U.S. forces in an operation involving Venezuela. Anthropic sought confirmation from the Pentagon and expressed concern, turning a procedural procurement dispute into a flashpoint over the ethics of military AI use. A meeting between Amodei and Defense Secretary Hegseth on February 24 failed to resolve the impasse.
The episode underlines a broader tension between national security imperatives and the tech industry's growing insistence on ethical constraints. Big AI firms are simultaneously courting government contracts and cultivating public trust; when those aims collide over surveillance or autonomous weapons, companies face hard choices with reputational and commercial consequences.
For policymakers, the standoff presents dilemmas of its own. Removing a domestic supplier from defense supply chains would create political blowback and could push the department toward competitors or open-source models with weaker safeguards, while coercive use of procurement powers could harden resistance within the tech sector and complicate long-term collaboration on safe AI deployment.
