Response to ICRC Position on Autonomous Weapon Systems
Arguments against the use of autonomy in weapon systems have garnered more attention over the years, but how accurate are these arguments and what exactly defines a lethal autonomous weapon system?
The International Committee of the Red Cross (ICRC) released its position on lethal autonomous weapon systems in May [1]. The piece outlines concerns around autonomous weapon systems and proposes recommendations to nations for the regulation of these systems.
The ICRC position on autonomous weapon systems is framed around a concept that is not yet clearly scoped or defined. While a definition is provided, it is not a globally agreed standard. The ICRC definition is as follows:
“Autonomous weapon systems select and apply force to targets without human intervention. After initial activation or launch by a person, an autonomous weapon system self-initiates or triggers a strike in response to information from the environment received through sensors and on the basis of a generalized "target profile". This means that the user does not choose, or even know, the specific target(s) and the precise timing and/or location of the resulting application(s) of force.”
The arguments that have been put forward are not necessarily incorrect or irrelevant; rather, they are framed against a lack of clarity in terminology and scope. It would be difficult to identify an existing weapon system that fits the ICRC definition. This does not mean that such a weapon could not exist in the future; rather, it indicates that the arguments put forward may be premature and potentially inaccurate.
Autonomy exists on a spectrum that spans from autonomous elements within systems to systems that are entirely autonomous. An example of an autonomous element within a system is the automatic ground collision avoidance system for military aircraft, which can assume control of the aircraft when an imminent collision is detected. An example of a fully autonomous system would be a weapons platform that can be deployed and subsequently complete a mission without any human intervention.
Arguments against autonomous weapon systems, such as the ICRC position, share a narrative that frames these systems as an imminent development with grave ethical ramifications. Interestingly, autonomy has been incorporated into weapon systems for a long time. Take for example the drip or “pop off” rifles used at Gallipoli. The ANZACs arranged rifles to fire automatically to help convince the Turkish opposition that the front line was still occupied, allowing the ANZACs to retreat safely. [2]
The level of human involvement in systems that incorporate autonomy varies significantly. In some systems, humans play an active decision-making role (e.g. guided munitions); in others, a supervisory role (e.g. air and missile defence systems); and in some cases, humans have little to no influence on the system (e.g. loitering munitions).
The ICRC definition emphasises the negative implications of the absence of human control over weapon systems. What’s interesting is that the concept of human control over the use of weapons has been changing and diminishing for a very long time. A lack of human control in warfare has been present in weapons that encompass neither autonomy nor automation, with examples going as far back as the catapult. Once the missile is launched, the artillerist can’t take it back.
This is also evident in more modern systems. Pilots utilise navigation systems to deploy weapons, in which case the navigation system is responsible for guiding the weapon to its target. Real-time positioning of moving objects at altitude is complex, and while accuracy has improved vastly over time, errors are still present and can and have resulted in weapons being deployed to incorrect locations. Once the missile is launched, the pilot can’t take it back.
Among the arguments against the use of autonomy in weapon systems, those centred on the concept of humanity have garnered particular attention. The ICRC position claims that these systems raise “fundamental ethical concerns for humanity, in effect substituting human decisions about life and death with sensor, software and machine processes.”
Some of these arguments glorify the idea of human control in a way that is consistent neither with the realities of human decision-making nor with the prevalence of autonomy in existing weapon systems. Human beings are flawed. Despite our best intentions, we all have our biases, judgements and perceptions. The idea that human decision-making equates to ethical decision-making is not always true. If we want to debate the ethical concerns of humanity in war, the argument needs to extend beyond autonomous weapon systems, as ties to humanity are questionable for all forms of warfare.
Before moving forward with regulations and bans against autonomous weapon systems, we must first have a globally agreed definition that clearly articulates what an autonomous weapon system is. There should also be a better understanding of how autonomy is used in existing weapon systems, along with their capabilities and limitations. Finally, it’s important to understand what a change or reduction in human control over the use of weapon systems actually entails. The concept of humanity in warfare is complex, and banning or limiting the use of autonomy in weapon systems will not simplify or resolve this ongoing debate.
References
[1] https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
[2] https://www.awm.gov.au/articles/encyclopedia/gallipoli/drip_rifle