Benjamin Bestgen: LAWs
Benjamin Bestgen takes a look this week at robotic weapons and the law. See last week’s primer here.
Killer robots, or “Lethal Autonomous Weapons” (LAWs), have been in our popular consciousness for decades. Science fiction fans are familiar with Isaac Asimov’s Laws of Robotics and most of us know the Terminator franchise, Star Wars battle droids or the octopus-like Sentinels in The Matrix. Some of us also recall the military robot turned loveable humanoid Johnny 5 in Short Circuit or the Cylon Centurions in Battlestar Galactica. Video games like Fallout or Halo include combat machines of varying degrees of autonomy and complexity.
In real life, militaries across the world invest in increasingly sophisticated machinery that can conduct military missions with little or no human input, whether for purely defensive purposes, for reconnaissance and surveillance, or for broadscale attacks and targeted assassinations.
The best-known LAWs to date are flying drones and unmanned land or water vehicles capable of being navigated either remotely by a human operator or independently by their own programming. As an example of what else may already be possible with today’s technologies, computer scientist Stuart Russell is behind the viral video Slaughterbots, which was partially shot in Edinburgh and also screened at the UN Convention on Certain Conventional Weapons in Geneva (2017).
Autonomy
The keyword in LAWs is “autonomous”. The concept is not easily defined, but a common thread is the capacity for independent decision-making or the exercise of judgement without external control over that decision or judgement.
Autonomy is not usually an absolute term but comes in degrees. A five-year-old child may be able to help itself to chocolate or get dressed without needing assistance or external oversight. But for more complex tasks, like going to the doctor, deciding which school to attend or judging the appropriateness of a movie, the child would likely be overwhelmed by the task or incapable of even conceiving of it.
For LAWs, the UK’s Ministry of Defence provides a working definition of “autonomy” when discussing flying drones: “An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”
Existing LAWs still seem to occupy a space between “very sophisticated automation” and actual “autonomy”. Unmanned drones or “fire-and-forget” missile systems are increasingly able to accomplish their mission, return to base or self-destruct with little human input. But these are not yet “intelligent machines” capable of learning, unpredictable decision-making or understanding their environment, mission or purpose in any meaningful sense of “understanding”. Their autonomy, if they have any, remains limited – so far.
LAWs and the law
International law and the laws of warfare currently don’t have a clear position on the future of “killer robots”. UN Secretary-General António Guterres, the EU Parliament and NGOs like the Campaign to Stop Killer Robots call for a ban. Countries like the US, UK, Russia or Israel (notable for their powerful weapons and technology industries) resist bans, claiming that current regulations suffice for the time being.
Some argue LAWs could make warfare safer, cheaper and cleaner (at least for countries that can afford to develop or acquire LAWs): machines don’t rape, loot, torture, take drugs, panic or hate, which reduces the risk of war crimes. They also don’t sleep, eat, require entertainment, suffer PTSD or have to be psychologically conditioned to kill by extensive military training.
Critics counter that LAWs place too much distance between human decision-makers and armed conflict, potentially lowering the psychological, political and ethical hurdles to engaging in warfare. Taking up arms should be difficult, unpopular and traumatic, so that we treat it as a measure of absolute necessity and last resort, not something we contemplate lightly because “they’re just machines”.
LAWs can also fall into the hands of criminals, terrorists or corporations, who may use them covertly, without regulation and for nefarious purposes. But then, so can most other weapons we currently manufacture.
Further questions are worth asking, ideally before a genuinely autonomous weapon like the Terminator appears on the market:
- Who is liable for the decisions and actions of a LAW? Imagine a LAW classifies a peaceful civilian shepherd holding a rifle as an enemy combatant and kills him. A human soldier with no relevant excuse or explanation for this would face court-martial, possibly with additional repercussions for the commanding officer. But the LAW? Does liability fall on the developer or programmer; the person, state or organisation owning the LAW; or the person(s) authorising its use, activation and mission?
- Should LAWs only be used by the military, or also by state-contracted mercenaries or civil law enforcement, such as police? Consider surveillance or crowd-control drones with the ability to deploy teargas, rubber bullets or electric shocks. Could robots make lawful arrests?
- Can LAWs reliably be made to abide by international humanitarian law, such as not firing on medics or on parachutists exiting downed aircraft, not bombing schools or hospitals, and more generally distinguishing between combatants and non-combatants?
Article 36 of Additional Protocol I to the Geneva Conventions obliges states to assess whether any new weaponry complies with the laws of warfare and international law more broadly. It’ll be interesting to see how LAWs fare in such assessments – and where and why opinions differ.
Benjamin Bestgen is a solicitor and notary public (qualified in Scotland). He also holds a Master of Arts degree in philosophy and tutored in practical philosophy and jurisprudence at the Goethe Universität Frankfurt am Main and the University of Edinburgh.