The United Nations is trying to break new ground as it grapples with the persistent lack of clarity about killer robots and their military and ethical implications for warfare and humankind as a whole.
A first-of-its-kind UN conference laid some of the groundwork for in-depth discussions among governments with the participation of companies, academics and non-governmental groups. But its own future is still in jeopardy.
The new UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (often called killer robots) reportedly brought together about 450 participants, including 86 governments.
The early-stage meeting heard views on whether so-called killer robots are a near-term threat to humankind and deserve an outright ban, lesser limitations or no action yet. About 22 governments supported calls for an outright ban.
Clarity on such issues is gaining urgency because giant leaps in artificial intelligence and big data sets are forging new pathways in many human endeavors.
Those endeavors range from industrial robots that displace human jobs to highly intelligent go-it-alone weapons that could autonomously kill enemies while sparing defending forces and avoiding collateral civilian casualties and unnecessary material destruction.
They also raise difficult longer-term questions about intelligent machines becoming masters of their creators rather than the other way round.
Above all, nobody seems precisely clear on what Lethal Autonomous Weapons Systems (LAWS) are, or how they might be characterized and defined. Nor is it clear whether such weapons already exist in secret armories, could reliably exist soon, or can exist at all, given the complex human-like judgements involved in autonomously delivering lethal force.
A core issue is whether any weapon can be trusted to distinguish between friend and foe without human intervention, when it is guided only by self-learning or pre-programmed artificial intelligence capabilities.
Recognizing the importance of such questions, the new group was established in 2016 as an offshoot of the 1980 Convention on Certain Conventional Weapons (CCW), which has 125 contracting governments including the US, Russia, China, France and Britain.
The CCW restricts use of certain conventional weapons which “may be deemed to be excessively injurious or to have indiscriminate effects”.
The CCW's 1995 protocol banning blinding lasers is an example of a weapon preemptively prohibited before it was acquired or used. Opponents of killer robots want a similar preemptive ban.
About 120 artificial intelligence entrepreneurs sounded very loud alarm bells in August. They wrote a letter to the CCW, signed among others by Elon Musk, founder of Tesla, SpaceX and OpenAI (USA); Mustafa Suleyman, co-founder of Google's DeepMind (UK); and Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland).
The letter said, “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act.
“Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties (of CCW) to find a way to protect us all from these dangers.”
Russian President Vladimir Putin further muddied the waters in September when he spoke of artificial intelligence (not lethal autonomous weapons). He warned that "the one who becomes the leader in this sphere will be the ruler of the world."
In the same speech, he seemed to say that future wars could consist of battles between autonomous drones.
Commenting on calls for an outright ban, GGE chair Amandeep Singh Gill of India noted, “It would be very easy to just legislate a ban: whatever it is, let’s just ban it. But I think that we, as responsible actors in the international domain, we have to be clear about what it is that we are legislating on.”
“What is of concern is the distance between the human and the machine, the increasing distance, and whether that has qualitatively changed with the new technologies or not, and whether that qualitative change requires us to have a different approach to military systems with autonomy.”
The International Committee for Robot Arms Control, a founding member of the Campaign to Stop Killer Robots, suggested that the core focus should be on human control of weapons systems rather than on artificial intelligence or different levels of autonomy.
Autonomous weapons systems, it proposed, should be defined simply as weapons systems that, once launched, can select targets and apply violent force without meaningful human control.
The Campaign to Stop Killer Robots called for two more GGE meetings in 2018 leading up to international negotiations on a legally binding protocol by the end of 2019. It would ban the development, production, and use of fully autonomous weapons.
Movement toward agreement is possible because issues related to LAWS were discussed for nearly four years in various official forums before the new group of experts was set up.
But the GGE meeting was already postponed once because of a funding shortage, and the group may have trouble getting off the ground in robust fashion since the US, Russia and China would prefer to avoid talking about a ban.
Many others believe the big three are secretly developing such weapons, and that the moral pressure of an early ban might at least somewhat deter them.