At the start of March, the US National Security Commission on AI (NSCAI), chaired by Eric Schmidt, former CEO of Google, and Robert Work, former Deputy Secretary of Defense, issued its 756-page final report. It argues that the US is in danger of losing its technological competitive advantage to China if it does not massively increase its investment in AI. It claims that

For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change.

At the same time, it highlights the immediate danger posed to US national security by both China’s and Russia’s more enthusiastic use of (and investment in) AI, noting for instance the use of AI and AI-enabled systems in cyberattacks and disinformation campaigns.

In this post, I want to focus on one particular part of the report – its discussion of Lethal Autonomous Weapons Systems (LAWS) – which already received some alarmed headlines before the report was even published. Whilst one of the biggest challenges posed by AI from a national security perspective is its “dual use” nature – many applications have both civilian and military uses – the development of LAWS has over the past decade or so been at the forefront of many people’s worries about AI, thanks to the work of the Campaign to Stop Killer Robots and similar groups.

The problem with LAWS

Why do people argue that LAWS need to be banned before they’ve even been developed? A 2017 open letter, signed by Elon Musk and over 100 other AI researchers and entrepreneurs, claims that

Once developed, [LAWS] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We are still a little way off Terminator-style “killer robots”. However, ever since the Second World War, increasing automation – and ultimately autonomy – has characterised the development of new weapons systems. This is a trend that looks set to continue, with a real risk of turning into an arms race.

One of the main reasons people worry about LAWS is their ability to select, engage, and potentially kill humans without human input. There is something deeply unnerving about relinquishing the decision to kill to a robot, whether it’s a “killer robot” or an AI-enabled sentry gun. Some of this may be due to the lingering influence of sci-fi, but there are nevertheless important practical, legal and, crucially, moral issues raised by LAWS.

One important concern is to do with the so-called “responsibility gap” – if a killer robot kills a civilian by mistake, who can be held responsible? Who can be held legally liable? One potential issue with LAWS is that they might end up being too autonomous for the person who decided to deploy them to be held liable, but not autonomous enough – crucially, lacking consciousness – to be held liable themselves. This means that, potentially, war crimes may go unpunished, and victims (or victims’ families) may miss out on compensation or redress.

Another issue is to do, as the open letter suggested, with the speed at which LAWS can make decisions, to the point that meaningful human control may become impossible. There have already been well-publicised accidents with automated systems which should serve as a warning. For instance, in 2003 an RAF fighter jet was shot down by a Patriot missile battery in Iraq:

[T]he Patriot system is nearly autonomous, with only the final launch decision requiring human interaction. [This demonstrates] the extent to which even a relatively simple weapons system – comprising of elements such as radar and a number of automated functions meant to assist human operators – deeply compromises an understanding of MHC (meaningful human control) where a human operator has all required information to make an independent, informed decision that might contradict technologically generated data.

In other words, because of the speed and complexity of AI systems’ decision making, even when humans are ultimately “in” or “on” the loop, it may not always be possible to grasp all the information or to act quickly enough if a mistake is detected.

Finally, there is a concern that LAWS would make the decision to go to war too easy, because states would not need to worry about risking their own citizens’ lives – they can send robots instead of soldiers. LAWS may not only become “weapons of first resort”; war fought with LAWS might itself become the first resort of conflict resolution. Or – perhaps even more worrying – LAWS may become a weapon of choice in non-military contexts as well: for instance, if LAWS are used by the US armed forces, it is only a matter of time before they end up in the hands of US law enforcement.


What does the NSCAI report say?

The report argues that the US has an obligation to develop LAWS, as both China and Russia appear to be seriously pursuing them already. It dismisses attempts to ban LAWS outright, though it is sympathetic to efforts to limit proliferation and explicitly requires that “human judgment must be involved in decisions to take human life in armed conflict”.

The key judgments regarding the use of LAWS reached by the NSCAI

The NSCAI report specifically addresses some of the most prominent worries about LAWS. In particular, it “endorses [the US Department of Defense’s] body of policy that states that human judgment must be involved in decisions to take human life in armed conflict”. This takes some of the sting out of the responsibility-gap worry. That said, the degree of human involvement will naturally differ depending on context – in urban areas, where the situation changes often and rapidly, human authorisation and oversight may be needed much more often than in less populated, more predictable environments (the report suggests space or underwater). This means that it needs to be established, preferably in advance, which kinds of battlefield warrant which level of human oversight.

Either way, the report recognises the importance of meaningful human control and states that “[i]t is incumbent upon states to establish processes which ensure that appropriate levels of human judgment are relied upon in the use of AI-enabled and autonomous weapon systems and that human operators of such systems remain accountable for the results of their employment”. It also urges the US to lead the way in establishing internationally accepted protocols and rules regarding the use of LAWS and to pursue technical means to verify compliance with any future arms control agreements (e.g. relating to nuclear weapons).

In addition to recognising the importance of human control, the report commends the US military for its existing precautions and its attempts to ensure that autonomous weapons systems undergo sufficient test and evaluation, verification and validation (TEVV) before being deployed. It takes this as further proof that it is incumbent on the US both to continue to conduct research on LAWS and to resist attempts to ban them completely: the difficulty of enforcing such a ban would mean that countries without such rigorous TEVV protocols would probably still end up developing LAWS anyway.

The NSCAI report’s suggestions for mitigating the risks of LAWS

Although the NSCAI takes into account a number of important objections to LAWS and seeks to accommodate them, there are also a few important gaps. Most importantly, it does little to address one of the main worries of groups such as the Campaign to Stop Killer Robots, namely that the proliferation of LAWS and similar technologies would make war too easy. As noted, the report acknowledges the fear that AI tools generally may become the weapons of first resort in future conflicts, but its focus here is more on non-lethal and less-lethal uses of AI (e.g. cyberattacks and disinformation) than on LAWS.

It is true, of course, that if countries such as China and Russia are going to develop LAWS, it might be better, all things considered, if the US developed them too – this is the very logic of arms races – but all in all, there is little in the report to reassure those worried about the degradation of the last resort requirement governing the use of force. If states need to worry less about risking their own soldiers’ lives, they may be much more tempted to resort to force and ignore less lethal solutions. Banning LAWS outright is unlikely to prevent this – as the report highlights, there would be significant problems with monitoring and verifying compliance – but it is not clear whether the recommendations in the report are sufficient to prevent this further watering-down of last resort. Shoring up the last resort requirement would likely require a concerted international and multilateral effort, and it is not clear which organisation, if any, would be in a position to take the lead.


Sara Van Goozen

I am a lecturer in political philosophy at the University of York. My research interests are in global ethics, just war theory and global justice. My book “Distributing the Harm of Just Wars” is out now with Routledge.
I am the editor of Justice Everywhere’s series on pedagogy and the practice of teaching philosophy, Teaching Philosophy in the 21st Century.
