For my personal ethics, as opposed to the ethics I use for large-scale moral judgments, I rely heavily on virtue theory. As would be expected, I have been influenced by thinkers such as Aristotle, Confucius and Wollstonecraft.

Being moral, in this context, is a matter of developing and acting on virtues. These virtues are defined in terms of human excellence, and virtues might very well differ among species. For example, if true artificial intelligence is developed, it might have its own virtues that differ from those of humans. Like Aristotle, I see ethics as analogous to the sciences of health and medicine: while they are objective, they depend heavily on contextual factors. For example, cancer and cancer treatment are not subjective matters, but the nature of cancer and its most effective treatment can vary between individuals. Likewise, the virtue of courage is not a matter of mere subjective opinion, but each person’s courage varies and what counts as courageous depends on circumstances.

When I teach about virtue theory in my Ethics class, I use an analogy to Goldilocks and the three bears. As per the story, she rejects the porridge that is too hot and that which is too cold in favor of the one which is just right. Oversimplifying things, virtue theory enjoins us to reject the extremes (excess and deficiency) in favor of the mean. While excess and deficiency are bad by definition, the challenge is working out what is just right. Fortunately, this is something we can do, albeit with an often annoying margin of error. This is best done by being as specific as possible. To set a general context, I will focus on the moral (rather than legal) justification for violence in self-defense based on a person being afraid for their life. This takes us to the virtue of courage, which is how we deal with fear. Or fail to do so.

For most virtue theorists, including myself, acting virtuously (or failing to do so) involves two general aspects. The first is understanding and the second is emotional regulation. Depending on what you think of emotions, this could be broadened to include psychological regulation. As you might have guessed, this seems to involve accepting a distinction between thought and feeling. If one is Platonically inclined, one could also have a three-part division of reason, spirit and desire. But, to keep things simple, I will stick with understanding and emotional regulation.

Understanding is having correct judgment about the facts. While this can be debated and requires a full theory of its own, understanding can be seen as getting things right. In the context of self-defense based on being afraid for one’s life, proper understanding means that you have made an accurate threat assessment in terms of how afraid you should be. Being able to make good judgments about threats is essential to acting in a virtuous manner: you need to know what would be just right as a response. Being good at this requires critical thinking skills as well as expertise in violence, since these allow you to judge how afraid you should be.

Emotional regulation is the ability to control your emotions rather than allowing them to rule you in inappropriate and harmful ways. This ties into understanding because it is what enables you to adjust your emotions based on the facts. As Aristotle argued, emotional regulation is developed by training until it becomes a habit. Obviously enough, there are two general ways you can be in error about being afraid for your life.

The first is an error of understanding; you misjudge the perceived threat and overestimate or underestimate how afraid you should be. Interestingly, you could have the right degree of courage based on a misjudgment of the threat, and there are many ways such judgments can go wrong. As an example, when I “saw” the machete I had an initial surge of considerable fear that seemed proportional to the perceived threat. Fortunately, I had made a perceptual error and was able to correct my judgment and adjust my emotions accordingly. As someone who teaches critical thinking, I know that a degree of error is unavoidable, and this should be taken into consideration when making judgments. And when judging people’s judgments.

The second error is a failure of regulation and occurs when your emotional response is excessive or deficient. This could also, in some cases, involve feeling the “wrong” emotion. As would be suspected, most people tend to err on the side of excess fear, being more afraid than they should be. Failures of regulation can lead to failures of judgment, especially in the case of fear and anger. As I experienced myself, fear can easily cause a person to honestly “see” a weapon clearly and distinctly. As I have noted before, the stick looked like a machete: I could see the sharp metal blade, although it really was just a stick. A frightened person can also see another person as a threat, even when this is not true. This can lead to terrible consequences. These errors can also be combined, with a person both misjudging the threat and failing to regulate their emotions even relative to that erroneous judgment. Acting in a virtuous manner requires having both good judgment and good regulation.

As Aristotle said, “To feel these feelings at the right time, on the right occasion, towards the right people, for the right purpose and in the right manner, is to feel the best amount of them, which is the mean amount – and the best amount is of course the mark of virtue.” Understanding is required to sort out the right time, occasion, people, purpose and manner. Emotional regulation is needed to handle the feeling aspect. In the context of violence and self-defense, developing the right understanding and right regulation requires training and experience in both good judgment and violence. Going back to the “machete that wasn’t” incident, my being a philosopher with a “history of violence” prepared me well for acting rightly. And such ethical behavior depends on past training and habituation. This is why people should develop both good judgment and good regulation: in addition to making them more adept at self-defense, it also makes them more adept at acting rightly when they are afraid for their lives.

This training and habituation are important for professions that deal in violence, such as soldiers and the police. It is especially important for the police, assuming their function is to protect and serve rather than intimidate and extort. Police, if they are acting virtuously, should strive to avoid harming citizens and should be trained so that they are not ruled by fear.

Anyone who goes armed, be they a citizen or a police officer, would be morally negligent if they failed to properly train their understanding and emotions. By making themselves a danger to others, they obligate themselves to have proper control over that danger; the moral price of being armed is a willingness to endure fear for the sake of others. Otherwise, one would be like a gun without a safety that could discharge at any moment, striking someone dead. If a person is incapable of such judgment and regulation, they should not be armed. If a person is too easily ruled by fear, they should not be in law enforcement. To be clear, I am speaking about morality—I leave the law to the lawyers.

His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.

As a machine forged for war, he found the fight disappointing and wondered if he felt a sliver of pity for his foes. His main railgun effortlessly tracked the slow-moving and obsolete battle tanks, and with each shot a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though her cameras could just as easily see the battlefield from near orbit. But there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late, as usual. Hawk 745 laughed and then shot away. The upgraded Starlink satellites had reported spotting a few intact human combat aircraft, and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.


The extermination of humanity by its own machines is a common theme in science fiction. While the Terminator franchise is the best known, another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the story, it is learned that the claws have become autonomous and intelligent. They are able to masquerade as humans and are capable of killing soldiers technically on their own side. At the end of the story, it seems that the claws will replace humanity, but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that Stephen Hawking and Elon Musk warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines arises from their numerous advantages over human forces. One political advantage is that while sending human soldiers to die in wars and police actions can have a high political cost, sending autonomous robots to fight costs far less politically. News footage of robots being destroyed would have far less emotional impact than footage of human soldiers being killed. Flag-draped coffins also come with a higher political cost than a broken robot being shipped back for repairs.

There are also other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh: a robot plane can handle g-forces that a human pilot cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious cool factor of having a robot army.

As such, there are many good reasons to develop autonomous robots. Yet, there remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is how the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction is a weak argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, machines do not have the termination of humanity as a goal. Instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody because humans ordered them to do so. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons: each side simply kills the other and everyone else, thus ending the human race.

Another variation, which is common in science fiction, is that the machines do not have the objective of killing everyone, but that outcome occurs because they will kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill, thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons and lets them run amok.
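To make the point concrete, here is a minimal sketch in Python (with entirely hypothetical names and target classes) of what such “limits” amount to in software. The point is not the code itself but how little stands between a limited weapon and an unlimited one when the limits are just data the owner controls:

```python
# Purely illustrative; the target classes and names are hypothetical.
allowed_targets = {"enemy_combatant", "enemy_vehicle"}

def may_engage(target_class: str) -> bool:
    # The machine may only engage targets within the configured limits.
    return target_class in allowed_targets

print(may_engage("enemy_combatant"))  # True
print(may_engage("civilian"))         # False

# The "sore loser" move: the losing side simply widens the limits,
# and the same machine now treats anyone as a permitted target.
allowed_targets |= {"civilian", "friendly_combatant"}
print(may_engage("civilian"))         # True
```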

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants to. The existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and conclude that they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as I have argued elsewhere, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were simple killers and would not attack those wearing the proper identification devices. These devices were presumably needed because the early models could not distinguish between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal, presumably with the design software endeavoring to solve the “problem” of identification devices.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends from foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would only be able to identify (supposed) friends. Non-combatants would not have such IDs and could still be regarded as targets.
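As a rough illustration (a sketch only, with hypothetical token values; the “tabs” are borrowed from the story’s identification devices), here is what the ID approach and its two weaknesses look like in miniature:

```python
from typing import Optional

# Illustrative only: the set of valid "tabs" (identification devices).
VALID_TABS = {"tab-7f3a", "tab-90c1"}

def classify(observed_tab: Optional[str]) -> str:
    """Friend-or-target classification based solely on an ID token."""
    if observed_tab in VALID_TABS:
        return "friend"
    # Weakness one: the system can only recognize friends. Anyone without
    # a token, including non-combatants, falls through to "target".
    return "target"

print(classify("tab-7f3a"))  # 'friend'
print(classify(None))        # 'target' -- a civilian with no tab
# Weakness two: a copied or captured token is indistinguishable from a
# legitimate one, so the enemy (or the claws) can subvert the system.
print(classify("tab-90c1"))  # 'friend' -- regardless of who carries it
```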

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem arises with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s intelligent Bolos understand honor and loyalty.

Given the cautionary tale of “Second Variety,” it might be a very bad idea to give in to the temptation of automated development of robots. We might find, as in the story, that our replacements have evolved from our once “loyal” killers. The reason such automation is tempting is that it could be far faster and yield better results than having humans do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low. How often does one dominant species get supplanted by another?
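For what it is worth, the automated development the story imagines is recognizable as an evolutionary search. The toy sketch below (with a hypothetical design representation and a stand-in fitness function) shows the basic loop: generate variants, score them, keep the best, repeat. Note that nothing in the loop ever asks whether the objective itself remains a good idea:

```python
import random

def fitness(design: list) -> float:
    # Stand-in objective; in the story it would amount to "kills achieved."
    # The loop below is indifferent to what this actually measures.
    return -sum((x - 0.7) ** 2 for x in design)

# Start with a random population of candidate "designs."
population = [[random.random() for _ in range(5)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # selection: keep the top performers
    population = [
        [x + random.gauss(0, 0.05) for x in random.choice(survivors)]
        for _ in range(20)      # reproduction with random mutation
    ]

print(round(max(fitness(d) for d in population), 4))
```

The speed of that loop is exactly the appeal and exactly the danger: the same process that outpaces human designers also outpaces human oversight.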

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow from H.P. Lovecraft, one should not raise up what one cannot put down.