Back on July 28, 2015, the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. As of this writing, you can still sign it. Although the organization's name sounds like one I would use as cover for an evil, world-ending cult in my Call of Cthulhu campaign, I assume the group is sincere in its professed values. While I do respect their position, I believe they are mistaken. In what follows, I will assess and reply to the arguments in the letter.
As the letter notes, an autonomous weapon can select and engage targets without human intervention. A science fiction example of such a weapon is the claw of Philip K. Dick's classic "Second Variety." A real-world example, albeit a stupid one, is the land mine: mines are placed and then engage automatically.
The first main argument presented in the letter is a proliferation argument. If one major power pushes AI weapon development, the other powers will follow suit, creating an arms race. This will lead to the development of cheap, easily mass-produced AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators, who will use them for assassination, destabilization, oppression, and ethnic cleansing. That is, for what these people already use existing weapons to do. This raises the question of whether autonomous weapons would make a significant difference.
The authors of the letter have a reasonable point: as science fiction stories have long pointed out, killer robots tend to obey orders and they can (in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit evil. Humans are also quite good at doing evil, and although killer robots are awesomely competent in fiction, it remains to be seen whether they will be better than humans in the real world, especially cheap, mass-produced weapons.
That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future, although small groups and individuals can already do significant damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars. Even if robotic weapons are not manufactured, enterprising terrorists and warlords can build their own. Think, for example, of a self-driving car equipped with machine guns or loaded with explosives.
A reasonable reply is that warlords, terrorists, and dictators would have a harder time without cheap, off-the-shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.
The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They also argue that the creation of AI weapons could create a backlash against AI in general, a field that has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).
The authors do have a reasonable point here. Members of the public can panic over technology in ways that impede the public good. One example is vaccines and the anti-vaccination movement. Another is the panic over GMOs, which is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so an AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public. People do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.
The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).
One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.
In my next essay on this subject, I will argue in favor of AI weapons.


Donald gazed down upon the gleaming city of Newer York and the equally gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.
His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.
In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.
An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human-inhabited planets is that it has a strictly regulated population of 20,000 humans, with 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live "with" a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called "seeing") and communication via telepresence ("viewing"). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.
Thanks to improvements in medicine, humans are living longer and can be kept alive beyond the point at which they would naturally die. On the plus side, longer life is generally good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age, which can be a burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.
My friend Ron claims that I do not drive. This is not true. I drive, but I drive as little as possible. Part of it is frugality: I don't want to spend more than I need to on gas and maintenance. But most of it is that I hate driving. Some of this is that driving time is mostly wasted time and I would rather be doing something else. Some of it is that I find driving an awful blend of boredom and stress. The stress comes from the fact that driving creates a risk of harming other people and causing property damage, so I am as hypervigilant when driving as I am when target shooting at the range. If I am distracted or act rashly, I could kill someone by accident. Or they could kill me. As such, I am completely in favor of effective driverless cars. That said, it is certainly worth considering the implications of their widespread adoption. The first version of this essay appeared back in 2015, and certain people have been promising ever since that driverless cars are just around the corner. The corner remains far away.
While Aristotle was writing centuries before wearables, his view of moral education provides a foundation for the theory behind the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.