President Donald Trump has directed every U.S. federal agency to immediately stop using technologies supplied by Anthropic, the San Francisco-based artificial intelligence firm, in a move that sharpens political pressure on the domestic AI sector.
The presidential instruction, as relayed in Chinese social media reports of the White House announcement, allowed a six-month phase-out only for agencies described as using Anthropic products for defence-related work. The statement warned that Anthropic must cooperate during the transition or face the full force of presidential powers, including unspecified civil and criminal consequences.
The order signals a dramatic escalation in Washington’s posture toward privately developed AI systems. Anthropic, founded by former OpenAI researchers and best known for its Claude family of large language models, has positioned safety and measured deployment at the centre of its pitch to customers and regulators. Until now, U.S. debate over AI procurement has focused on standards, testing and conditional approvals rather than blanket prohibitions.
Taken at face value, a presidential directive can change what agencies buy and deploy, but it collides with a complex web of procurement law, existing contracts and operational needs. Federal buying rules, inspector-general oversight and judicial review all act as counterweights when the executive branch seeks rapid, across-the-board changes to defence and civilian systems that rely on commercial suppliers.
Practically, the order poses immediate questions for agencies that had trialled or integrated Anthropic products: how to migrate critical systems, whether alternatives — from OpenAI, Google, Microsoft or internal tooling — meet the same requirements, and who pays for the transition. The six-month window offered to "war-department-type" agencies recognises that some deployments are embedded in logistics, intelligence or command-support systems and cannot be ripped out without disruption.
The administration’s rhetoric also carries a political subtext. The threat of criminal or civil penalties, if pursued, would move the dispute from procurement policy into litigation and potentially criminal law — an uncommon escalation against a private technology firm. That prospect increases regulatory uncertainty across the AI market at a time when firms are already navigating export controls, liability concerns and divergent expectations from allies.
For Anthropic, the stakes are immediate. The company has said it will contest the ban and pursue appeals, raising the prospect of protracted legal battles that could force courts to clarify the limits of presidential authority over executive-branch contracting. For other AI vendors, the episode underlines how quickly commercial partnerships with government can become entangled in partisan politics.
Internationally, the order risks further fragmenting the global AI ecosystem. U.S. allies that rely on American models for research or classified projects may face pressure to diversify suppliers or to press for clearer standards that insulate government procurement from political swings. Meanwhile, rival states can cite such actions as evidence that democracies struggle to provide stable, trusted AI services to governments.
Whatever the legal outcome, the directive will reverberate through procurement offices, start-up valuations and defence planning. Agencies will have to scramble to assess their exposure, re-run risk assessments and negotiate new contracts, while Anthropic and its investors weigh the reputational and financial damage of being publicly singled out by the president. The episode marks a new, more confrontational phase in how Washington governs advanced AI tools.
