A U.S. strike on February 28 that destroyed a girls' primary school in Minab, southern Iran, killed more than 170 people, most of them children, and has provoked global condemnation. Preliminary U.S. investigations reported by the New York Times and the Washington Post indicate that U.S. forces carried out the strike and that an erroneous set of coordinates, drawn from older Defence Intelligence Agency records, may have turned a civilian school into a military target.
Satellite imagery and mapping data show that the building once formed part of a wider military compound but was separated from it after 2017 by a new wall and painted in bright, civilian colours; several online maps and the school's own website had already identified the site as an educational institution. The apparent mismatch between observable civilian markers and the coordinates used for the strike has intensified questions about how targets were selected and verified in the campaign against Iran.
Reporting indicates that some of the target locations for the U.S.-Israeli operation were supplied by Israeli intelligence and that the Israel Defence Forces spent "thousands upon thousands of hours" compiling a list of potential targets. Israel denies any involvement in the Minab strike. Two Israeli officials told U.S. outlets they had not cross-checked or discussed the specific coordinates that led to the school's destruction, and U.S. commanders reportedly used Defence Intelligence Agency data to set the strike point.
The episode has reopened debate over the role of artificial intelligence and automated tools in modern targeting. U.S. and Israeli forces use Palantir’s Maven intelligence-planning platform, and some parts of the U.S. toolchain have embedded the Claude model, an AI system developed by Anthropic. Journalists and analysts have suggested that the platform’s capacity to ingest large volumes of sensor and intelligence data, generate prioritized lists and propose precise coordinates can speed operations but also risks amplifying stale or incorrect inputs.
U.S. procedural safeguards should, in theory, have prevented such an outcome: strike recommendations, whether assisted by AI or not, typically require multilayered verification and approval by senior officers. Yet officials described to the press a rapid operational tempo and a target list that ballooned by hundreds of entries in the weeks before the strikes, raising the prospect that some long-standing entries in a database were not re-verified against current, ground-level imagery and local knowledge.
Defenders of the tools and processes have stressed the human role in final decisions, arguing that AI is an aid rather than an arbiter. Former military officials who helped integrate AI systems into planning warn that models are only as reliable as the data fed to them and that staffing shortages can leave analysts unable to refresh databases. Palantir and Anthropic declined to comment for the reports, and the Pentagon has referred questions to U.S. Central Command, which says the inquiry is ongoing.
Beyond the technical and procedural explanations, the incident carries heavy strategic costs. The civilian toll and the targeting questions will complicate U.S. efforts to maintain broad international support for the wider campaign against Iran, harden Iranian public opinion and give Tehran and its allies moral and propaganda leverage. Domestically, U.S. political leaders face renewed scrutiny over the decision-making that led to the strikes, the oversight of intelligence products and the delegation of lethal force in fast-moving conflicts.
Legal and ethical disputes over autonomous and semi-autonomous systems in warfare are likely to intensify. If AI-augmented platforms become standard for identifying and tasking strikes, states will confront pressure to mandate audit trails, clear human-in-the-loop protocols and rigorous, regularly updated data governance. Absent such reforms, the risk of catastrophic mistakes — and their attendant political fallout — will grow as militaries worldwide embrace tools that accelerate targeting decisions.
For now, the Minab attack remains the focal point of an investigation that will seek to determine who approved the coordinates, whether the database entry predated the campaign and why ground-level indicators of civilian use appear not to have been heeded. The outcome will matter not only for accountability in this case but also for the rules and safeguards that govern how intelligence, software and commanders combine on future battlefields.
