The Iran School Bombing - It was not an AI problem

A few days ago my good friend Tom Chan shared a Guardian article with me (linked at the end of this post) that explores the deeper systemic problems in military targeting which contribute to false positives and, consequently, increased civilian harm. The article approaches this through the lens of AI, focusing on the Iran school bombing, and highlights the fundamental issue at play - human decision making.

On 28 February 2026, American military forces targeted and struck the Shajareh Tayyebeh primary school in Minab, southern Iran, killing between 165 and 180 people (reported casualty figures vary within this range across media sources), most of them young girls between the ages of 7 and 12 (a fact reported consistently across media sources). When questioned about the attack, American officials claimed Iran was responsible, blaming its “inaccurate munitions” - a claim which has since been refuted, with evidence establishing that the attack was conducted by the U.S. The attempt to shift the blame for this indefensible attack is rooted not only in hubris but also in poor decision making, which, ironically, is the root cause of the attack itself.

The Shajareh Tayyebeh school had previously been classified as a military facility in a Defense Intelligence Agency database. The building had since been separated and converted into a school - a change that was never captured because the database was not updated, checked, or verified.

There has been a lot of noise in the media around the question: is AI to blame? The Guardian article that inspired this blog post, written by Kevin T Baker, points out that this is not the question we should be focusing on. Rather, we should be asking: why was the database not checked or updated?

One of the questions I always ask in my research and in public forums is - is it an AI problem, or is it just a problem? The root causes of many issues are seldom a result of AI; more often they are the result of underlying problems which manifest in ways we are unfamiliar or uncomfortable with when AI is involved.

The attack on the Shajareh Tayyebeh primary school was not an AI problem. It was the result of systemic failures within U.S. military targeting practices, poor human judgement, and lazy human decision making. Even the most accurate and precise AI tool could not compensate for these fundamental shortcomings.

The public echoes from the Shajareh Tayyebeh school attack have spotlighted the use of AI for targeting - an outcome which is, well, complicated. While the use of AI for targeting and for supporting resort-to-force decisions does introduce unique complexities (I wrote about three of these complexities in a recent publication, which you can read here), the focus on AI conveniently overshadows the accountability of human decision making, which is ultimately the root cause of the problem.

I’m currently working on another paper with my colleague Adam Hepworth which explores how the acquisition and use of AI narrows and omits critical military processes and decision-making opportunities, ultimately impacting safety by limiting the opportunities for human intervention. The attack on the Shajareh Tayyebeh primary school is a devastating example of what happens when speed is prioritised in safety-critical domains, which by their nature demand care and consideration in decision making.

Decision making requires time. Military operations are often framed as time sensitive. When you combine two time-dependent elements and attempt to obscure the significance and necessity of time, you ultimately end up with futile outcomes. In their haste to attack Iran, the U.S. unjustly and unnecessarily took the lives of innocent young girls. While this attack is undoubtedly devastating to the people of Iran, it did not impact their military capacity to respond. It does, however, call into question the U.S.’s capacity for strategic oversight and basic military decision making. Whether AI was involved or not does not erase these facts. All AI did in this situation was condense the timeframe for a stupid decision.

There are many challenges and complexities that come with the use of increased autonomy and AI in military operations; however, these concerns should not overshadow existing issues with human decision making in military operations. If we are not willing to address the root cause of a problem, any efforts to address that problem will only ever act as a band-aid.

You can read the Guardian article here: AI got the blame for the Iran school bombing. The truth is far more worrying, by Kevin T Baker.
