A wave of breathless commentary has claimed that the recent US strikes on Iran marked the debut of an autonomous, AI-driven “kill chain.” The viral accounts describe an AI that tracked targets, made decisions and executed strikes at machine speed, a vivid image that feeds deep public anxieties about artificial intelligence in warfare.
A careful reading of open reporting and industry disclosures paints a different, more prosaic picture. US forces have been using large language and data models as intelligence assistants: Claude, an Anthropic model adapted for government work and deployed alongside Palantir systems, has reportedly been used to analyse intelligence, identify potential targets and run battle-space simulations. Those uses amount to synthesis and prioritisation of vast streams of data, not independent authority to launch weapons.
The technical logic is straightforward. Modern battlefields generate an avalanche of inputs: satellite imagery, drone video, radar traces, flight telemetry, electronic intercepts and human reporting in many languages and dialects. Historically, analysts manually fused those feeds; today, large models accelerate that synthesis. Feed a government-configured Claude a trove of signals and documents and it can quickly surface patterns, produce candidate assessments and draft operational options, saving human analysts hours or days of labour.
That capability matters, but it is not the same as autonomous targeting. Industry safeguards and company policies have become central flashpoints. Anthropic built a government‑specific model, Claude Gov, with features and controls intended for sensitive data. The company has also publicly set red lines — bans on mass domestic surveillance and the development of fully autonomous lethal systems — that have put it at odds with Pentagon officials who pressed for unfettered operational control.
The clash has been as much political as technical. Bloomberg and other outlets reported tense exchanges between Pentagon staff and Anthropic executives over hypothetical worst-case scenarios; in one, officials asked whether a private company’s safety filters could refuse an emergency launch command. That friction culminated in a White House directive curbing federal agencies’ ties to Anthropic and a Pentagon designation of the firm as a supply-chain risk. Yet even as officials moved to restrict future work, they kept an interim waiver that allowed continued use of Claude during a transition period in which it was still employed in strike planning.
The practical upshot is that AI in this episode functioned as an intelligence multiplier rather than an independent actor. Human commanders still bore responsibility for rules of engagement and for the final decision to strike. The immediate advantage was speed and scale: models can compress tens of thousands of pages, imagery frames and signal traces into digestible assessments, reducing the time between collection and action.
That advantage, however, brings hazards. Models can overfit to biased training data, misinterpret signal noise as intent, or fall prey to adversarial manipulation and data-integrity failures. Dependence on a single vendor creates strategic vulnerabilities, the very reason the Pentagon labelled Anthropic a supply-chain risk. Political interventions, export controls and procurement rules will all shape how reliably and safely militaries can adopt AI tools.
Across the private sector, rivals are pivoting. OpenAI, xAI and other firms are courting defence contracts as Anthropic’s access tightens. That competition will accelerate military demand for bespoke, auditable models, while also catalysing government efforts to set standards for verification, human-in-the-loop guarantees and liability.
For global audiences, the lesson is twofold: first, the spectre of independent robot generals running kill chains is still largely science fiction; second, the transformative effect of AI on warfare is real and already changing how intelligence is produced and consumed. The central questions now are not whether militaries will use AI, but how they will govern it, distribute control between states and private firms, and mitigate the escalation risks that faster decision cycles can bring.
