On July 28, 2015, the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in a Call of Cthulhu campaign, I am willing to accept that this group is sincere in its professed values. While I do respect their position on the issue, I believe that they are mistaken. In what follows, I will assess and reply to the arguments in the letter.
As the letter notes, an autonomous weapon is capable of selecting and engaging targets without human intervention. An excellent science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety” (a must-read for anyone interested in the robopocalypse). A real-world example of such a weapon, albeit a stupid one, is the land mine: it is placed and then engages automatically.
The first main argument presented in the letter is essentially a proliferation argument. If a major power pushes AI development, the other powers will follow, creating an arms race. This will lead to the development of cheap, easy-to-mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators, who will use them for assassinations, destabilization, oppression, and ethnic cleansing. That is, for what these evil people already use existing weapons to do quite effectively. This raises the obvious concern about whether autonomous weapons would actually have a significant impact in these areas.
The authors of the letter do have a reasonable point: as science fiction stories have long pointed out, killer robots tend to simply obey orders and they can (at least in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit acts of incredible evil. Humans are also quite good at this sort of thing, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world. This is especially true of the cheap, mass-produced weapons in question.
That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future, although small groups and individuals can already do considerable damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars, so even if robotic weapons are not manufactured, enterprising terrorists and warlords will build their own. Think, for example, of a self-driving car equipped with machine guns or simply loaded with explosives.
A reasonable reply is that warlords, terrorists, and dictators would have a harder time of it without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.
The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most AI researchers do not want to design AI weapons. They also argue that the creation of AI weapons could provoke a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).
The authors do have a reasonable point here: members of the public do often panic over technology in ways that can impede the public good. One example is the anti-vaccination movement. Another is the panic over GMOs, which is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so an AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology rarely seems to create backlash from the public: people do not refuse to fly in planes because the military uses them to kill people, and most people love GPS, which was developed for military use.
The authors note that chemists, biologists, and physicists have supported bans on weapons in their fields. This might be an attempt to establish an analogy between AI researchers and these other researchers, perhaps to show them that it is common practice to favor bans on weapons in one’s area of study. Or, as some have suggested, the letter might be drawing an analogy between autonomous weapons and weapons of mass destruction (biological, chemical, and nuclear weapons).
One clear problem with the analogy is that biological, chemical, and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without discrimination. Nerve gas, for example, injures or kills everyone exposed to it; a nuclear bomb kills or wounds everyone in its area of effect. While AI weapons could carry nuclear, biological, or chemical payloads, and they could be set to simply kill everyone, this indiscriminate, WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be very precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical, and nuclear weapons. Terrorists, warlords, and dictators often have no problem using WMDs already, and AI weapons would not seem to significantly increase their capabilities.
In my next essay on this subject, I will argue in favor of AI weapons.
Your point about skeptics of GMO crops and vaccines seems to be an unnecessary, extraneous, begging-the-question, ad hominem jab at those of us who dare question the dominant paradigm held by the powers-that-be who have officially decreed that any questioning of the aforementioned theories is academic heresy.
Heh…the worm turns.
Not at all. My point is a very modest one. Granting that some vaccines are beneficial and that there is some beneficial GMO development (such as that being conducted by universities to develop crops for third-world farming), the backlash against these does some harm. These serve as examples supporting the letter writers’ claim, namely that misuse of technology can create a harmful panic.
I’ve written a full article on why being anti-vaccine can be rational and one that discusses legitimate concerns about GMOs. However, the evidence seems solid that vaccines are generally very safe (and certainly better than the diseases they protect us against) and that GMOs are also very safe.
Autonomous weapons put to use by nation-states will be one issue, but what about their employment by criminal elements? A robot enters a bank or other business with a lot of cash on hand and demands the money under the threat of blowing the place up. After getting it, the robot goes outside and delivers it to a drone that then disappears. There are lots of other opportunities for extortion, kidnapping, and so on as well. Aren’t drones already being used, although not autonomously, to deliver drugs and weapons to prisons? There’s also the strong possibility that autonomous weapons could be used by rebels against their own government, leading to even more repression by the authorities. It’s hard to be optimistic about the future in light of these developments.
The problem lies not in our weapons but in ourselves. If we continue to look the other way when confronted with violence and threats, we will foster a world of misery. The advancement of weapons technology parallels other technical advancements. Civilization possesses far more advanced technology than at any time in the past, yet one need not go far back in time, nor even to very distant places in the present, to find environments in which life is cruel, brutish, and short. Yet the societies with the more advanced technologies are the safest, healthiest, and wealthiest relative to the less advanced ones. Human civilization is far more complex than humans can perceive, but one thing remains consistent: societies that are productive, polite, and vigilant endure.
True, the use of such technology to commit crimes in the future is a matter of concern: http://blog.talkingphilosophy.com/?p=8501