Outdated Intelligence, Rapid Targeting and AI: How a U.S. Strike Hit an Iranian School

A U.S. strike on a girls' primary school in Minab, Iran, killed more than 170 people, and preliminary investigations suggest the strike relied on outdated Defence Intelligence Agency coordinates. The case exposes flaws in intelligence maintenance and rapid targeting practices, along with the growing use of AI-assisted planning tools, raising questions about verification, command responsibility and the future role of automated systems in warfare.


Key Takeaways

  • A Feb. 28 strike in Minab, Iran, destroyed a girls' primary school and killed more than 170 people, mostly children; U.S. forces are reported to have carried out the strike.
  • Preliminary U.S. inquiries indicate the strike coordinates may have come from outdated Defence Intelligence Agency data used by Central Command.
  • Satellite imagery and online maps had long identified the site as a school, suggesting the mis-hit could have been avoided with updated verification.
  • Palantir's Maven platform — used by U.S. and Israeli forces and partly integrated with Anthropic's Claude AI model — figures centrally in debates over whether AI-augmented tools accelerated or amplified the error.
  • The incident raises strategic, legal and ethical questions about human oversight, data governance and alliance coordination in high-tempo military operations.

Editor's Desk

Strategic Analysis

The Minab strike is symptomatic of a deeper friction at the heart of modern warfare: the appetite for speed and precision colliding with imperfect data and complex bureaucratic processes. AI and platforms like Palantir’s promise to turn scattered feeds into actionable options, but they also institutionalize the risk that stale or mislabelled inputs become lethal outputs. Politically, the incident will constrain U.S. freedom of action by eroding international and regional legitimacy, embolden adversaries’ narratives, and force allies to reckon with how shared intelligence is curated and cleared. Practically, expect immediate demands for stricter human-in-the-loop rules, mandatory data audits and legal reviews of automated targeting pipelines — reforms that will slow some operational ambitions but are necessary if militaries want to retain moral and legal authority in future conflicts.

China Daily Brief Editorial
Strategic Insight

A U.S. strike on February 28 that destroyed a girls' primary school in Minab, southern Iran, has left more than 170 people dead, the vast majority children, and provoked global condemnation. Preliminary U.S. investigations reported by the New York Times and the Washington Post indicate the strike was carried out by U.S. forces and that an erroneous set of coordinates — drawn from older Defence Intelligence Agency records — may have turned a civilian school into a military target.

Satellite imagery and mapping data show the building once formed part of a wider military compound but, after 2017, was separated by a new wall and painted in bright, civilian colours; several online maps and the school’s own website had already identified the site as an educational institution. The apparent mismatch between observable civilian markers and the coordinates used for the strike has intensified questions about how targets were selected and verified in the campaign against Iran.

Reporting indicates that some of the target locations for the U.S.-Israeli operation were supplied by Israeli intelligence and that the Israel Defence Forces spent "thousands upon thousands of hours" compiling a list of potential targets. Israel denies any involvement in the Minab strike. Two Israeli officials told U.S. outlets they had not cross-checked or discussed the specific coordinates that led to the school's destruction, and U.S. commanders reportedly used Defence Intelligence Agency data to set the strike point.

The episode has reopened debate over the role of artificial intelligence and automated tools in modern targeting. U.S. and Israeli forces use Palantir’s Maven intelligence-planning platform, and some parts of the U.S. toolchain have embedded the Claude model, an AI system developed by Anthropic. Journalists and analysts have suggested that the platform’s capacity to ingest large volumes of sensor and intelligence data, generate prioritized lists and propose precise coordinates can speed operations but also risks amplifying stale or incorrect inputs.

U.S. procedural safeguards should, in theory, have prevented such an outcome: strike recommendations, whether assisted by AI or not, typically require multilayered verification and approval by senior officers. Yet officials described to the press a rapid operational tempo and a target list that ballooned by hundreds of entries in the weeks before the strikes, raising the prospect that some long-standing entries in a database were not re-verified against current, ground-level imagery and local knowledge.

Defenders of the tools and processes have stressed the human role in final decisions, arguing that AI is an aid rather than an arbiter. Former military officials who helped integrate AI systems into planning warn that models are only as reliable as the data fed to them and that staffing shortages can leave analysts unable to refresh databases. Palantir and Anthropic declined to comment for the reports, and the Pentagon has referred questions to U.S. Central Command, which says the inquiry is ongoing.

Beyond the technical and procedural explanations, the incident carries heavy strategic costs. The civilian toll and the targeting questions will complicate U.S. efforts to maintain broad international support for the wider campaign against Iran, harden Iranian public opinion and give Tehran and its allies moral and propaganda leverage. Domestically, U.S. political leaders face renewed scrutiny over the decision-making that led to strikes, the oversight of intelligence products and the delegation of lethal force in fast-moving conflicts.

Legal and ethical disputes over autonomous and semi-autonomous systems in warfare are likely to intensify. If AI-augmented platforms become standard for identifying and tasking strikes, states will confront pressure to mandate audit trails, clear human-in-the-loop protocols and rigorous, regularly updated data governance. Absent such reforms, the risk of catastrophic mistakes — and their attendant political fallout — will grow as militaries worldwide embrace tools that accelerate targeting decisions.

For now, the Minab attack remains the focal point of an investigation that will seek to determine who approved the coordinates, whether the database entry predated the campaign and why ground-level indicators of civilian use appear not to have been heeded. The outcome will matter not only for accountability in this case but also for the rules and safeguards that govern how intelligence, software and commanders combine on future battlefields.
