Killer Robots - Autonomous Weapons and Their Compliance with IHL
Abstract: The pursuit of weapons that distance the soldier from the battlefield has been ongoing ever since warfare shifted from short blades to the bow and arrow. Today, that ambition is nearing completion with the ever-increasing number of unmanned, remote-controlled vehicles that are rapidly becoming the most common and prominent means of waging war. Political incentives to cut the costs of warfare and spare the lives of soldiers provide the final push towards full autonomy. The emergence of increasingly autonomous weapons (AWs) has already generated a heated debate on the legality of these weapons, and two highly polarized sides can easily be discerned. The purpose of this thesis is to examine and analyze this debate: to consider the arguments put forth regarding the legality or illegality of autonomous weapons and to map out where the respective positions stand. The focus lies on the three fundamental principles of International Humanitarian Law (IHL) – distinction, proportionality and precaution – and I discuss the arguments on both sides. Proponents often claim that AWs will be able to comply with IHL as sensors, algorithms, software and artificial intelligence (AI) develop, allowing a machine to satisfactorily distinguish between civilians and combatants, carry out proportionality assessments and take the required precautions in its actions. Opponents counter that the development of AI has overpromised before, that sensors could never reliably distinguish between civilians and combatants on a contemporary battlefield, and that proportionality and precaution assessments require a contextual understanding of which only humans are capable. The fundamental disagreement seems to lie in the uncertainty surrounding the development of the software and technology, and in whether machines can perform as well as, or better than, humans.
The issue of accountability is also examined: what happens to responsibility for breaches of IHL when the task of targeting and firing – essentially, the life-and-death decision – has been assigned to a machine? Different proposals, such as placing accountability on the commander, the programmer, the manufacturer or even the machine itself, are discussed. Issues relating to the moral and ethical aspects of replacing human agents of war with robots are also examined, along with the possible consequences this might entail – both from a separate moral perspective and as part of the legality assessment, in terms of what would happen to the applicability of IHL if the agents of war were changed. Having examined the debate on the legality of AWs, I draw some concluding remarks on how the debate should proceed in the near future, presenting some of the more prominently discussed ways forward for handling the emergence of these weapons. Finally, I end with my own reflections on the findings of my analysis of the current debate, and on what I believe are the most important aspects to continue discussing in the ongoing debate on the legality of autonomous weapons.