Back on July 28, 2015, the Future of Life Institute released an open letter opposing the development of autonomous weapons. As of this writing, you can still sign it. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in my Call of Cthulhu campaign, I assume this group is sincere in its professed values. While I respect their position, I believe they are mistaken. In what follows, I will assess and reply to the arguments in the letter.
As the letter notes, an autonomous weapon can select and engage targets without human intervention. A science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety.” A real-world example, albeit a stupid one, is the land mine: it is placed by a human but engages automatically.
The first main argument presented in the letter is a proliferation argument. If a major power pushes the development of AI weapons, the other powers will do likewise, creating an arms race. This race will lead to cheap, easy-to-mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators, who will use them for assassinations, destabilization, oppression, and ethnic cleansing. That is, for what these people already use existing weapons to do. This raises the question of whether autonomous weapons would make a significant difference.
The authors of the letter have a reasonable point: as science fiction stories have long pointed out, killer robots tend to obey orders and they can (in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans willing to commit evil. Humans are also quite good at doing evil, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world, especially if they are cheap, mass-produced weapons.
That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future, although small groups and individuals can already do significant damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars. Even if robotic weapons are not manufactured, enterprising terrorists and warlords can build their own. Think, for example, of a self-driving car equipped with machine guns or loaded with explosives.
A reasonable reply is that warlords, terrorists, and dictators would have a harder time without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.
The authors then claim that, just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most AI researchers do not want to design AI weapons. They also argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).
The authors do have a reasonable point here. Members of the public can panic over technology in ways that can impede the public good. One example is vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public. People do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.
The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).
One clear problem with the analogy is that biological, chemical, and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without discrimination. Nerve gas, for example, injures or kills everyone exposed to it. A nuclear bomb likewise kills or wounds everyone in its area of effect. While AI weapons could carry nuclear, biological, or chemical payloads, and they could be set to kill everyone, this indiscriminate, WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical, and nuclear weapons. Terrorists, warlords, and dictators who can acquire WMDs have shown little hesitation in using them, and AI weapons would not seem to significantly increase their capabilities.
In my next essay on this subject, I will argue in favor of AI weapons.
