In my previous essay I laid the groundwork for the discussion that is to follow about the anti-abortion moral position and misogyny. As argued in that essay, a person can be anti-abortion and not a misogynist. It was also shown that attacking a person’s circumstances or consistency in regard to their professed belief in an anti-abortion moral position does not disprove that position. It was, however, contended that consistency does matter when sorting out whether a person really does hold to an anti-abortion position or is, in fact, using that as cover for misogyny.

Before Donald Trump, being openly misogynistic was generally a way to lose an election. As such, a clever (or cleverly managed) misogynist will endeavor to conceal his misogyny behind more laudable moral positions, such as professing to be pro-life. This, obviously, sells better than being anti-women.

 

Republicans typically profess that they are pro-life, but there is the question of whether they truly hold to this principle. Republicans are also regularly accused of being misogynists, and part of this involves asserting that their anti-abortion stance is an anti-women stance. One way to sort this out is to consider whether a person acts consistently with their professed pro-life but not anti-women position. Since people are inconsistent through ignorance and moral weakness, this will not conclusively reveal the truth of the matter—but it is perhaps the best method of empirical investigation.

On the face of it, a pro-life position is the view that it is morally wrong to kill. If a person held to this principle consistently, then they would oppose all forms of killing, and this would include hunting, killing animals for food, capital punishment, and killing in war. There are people who do hold to this view and are thus consistent. This view was taken very seriously by Christian thinkers such as St. Augustine and St. Thomas Aquinas. After all, as I say to my Ethics students, it would be a hell of a thing to go to Hell for eating a hamburger.

The pro-life view that killing is wrong would seem to require a great deal of a person. In addition to being against straight-up killing in war, abortion and capital punishment, it would also seem to require being against other things that kill people, such as poverty, pollution and disease. As such, a pro-life person would seem to be required to favor medical and social aid to fight things like disease and poverty that kill people.

As is obvious, there are many who profess being pro-life while opposing things that would reduce deaths. They even oppose such things as providing food support for mothers and infants who are mired in poverty. One might thus suspect that they are not so much pro-life as anti-woman. Of course, a person could be anti-abortion and still be opposed to society rendering aid to people to prevent death.

One option is to be against killing but be fine with letting people die. While philosophers do make this moral distinction, it seems a bit problematic for a person to claim that he opposes abortion because killing fetuses is wrong, but not providing aid and support to teenage mothers, the sick, and the starving is acceptable because one is just letting them die rather than killing them. Given this view, a “pro-life” person of this sort would be okay with a mother just abandoning her baby—she would simply be letting the baby die rather than killing her.

People who profess to be pro-life are also often morally on board with killing and eating animals. The ethics of killing animals (and plants) was addressed explicitly by Augustine and Aquinas. One way to be pro-life but hold that killing animals is acceptable is to contend that humans have a special moral status that other living things lack. The usual justification, which both Augustine and Aquinas accepted, is that we are better than them, so we can kill (and eat) them.

 However, embracing the superiority principle does provide an opening that can be used to justify abortion. One merely needs to argue that the fetus has a lower moral status than the woman and this would seem to warrant abortion.

Many people who profess a pro-life view also favor capital punishment and war. In fact, it is common to hear a politician smoothly switch from speaking of the sanctity of life to the need to kill terrorists and criminals. One way to be pro-life and accept capital punishment and war is to argue that it is the killing of innocents that is wrong. Killing the non-innocent is fine.

The obvious problem is that capital punishment sometimes kills innocent people, and war always involves the death of innocents. If these killings are warranted in terms of interests, self-defense, or on utilitarian grounds, then the door is open for the same reasoning being applied to abortion. After all, if innocent adults and children can be killed for national security, economic interests or to protect us from terrorists, then fetuses can also be killed for the interests of the woman or on utilitarian grounds. Also, animals and plants are clearly innocent. Someone who is fine with killing people for the sake of interests or on utilitarian grounds yet professes to be devoutly pro-life might justifiably be suspected of being more anti-women than pro-life.

A professed pro-life position can also be interpreted as the moral principle that abortions should be prevented. This is, obviously, better described as anti-abortion rather than pro-life. One obvious way to prevent abortions is to prevent women from having them. This need not be a misogynistic view—one would need to consider why the person holds to this view and this can be explored by considering the person’s other expressed views on related matters.

If a person is anti-abortion, then she should presumably support ways to prevent abortion other than merely stopping women from having them. Two rather effective ways to reduce the number of abortions (and thus prevent some) are effective sex education and access to birth control. These significantly reduce the number of unwanted pregnancies and thus reduce the number of abortions. Not surprisingly, abstinence-focused “sex education” fails dismally.

Being anti-abortion is rather like being anti-traffic fatality. Telling people to not drive will not really help. Teaching people how to drive safely and ensuring that protection is readily available does work quite well.

Because of this, if a person professes to be anti-abortion, yet is opposed to effective sex education and birth control, then it is reasonable to suspect misogyny. This is, of course, not conclusive: the person might have no dislike of women and sincerely believe that ignorance about sex is best, that abstinence works, and that birth control is evil. The person would not be a misogynist—just in error.

In closing, it must be reiterated that just because a person is inconsistent about their professed pro-life moral principles, it does not follow that they must be a misogynist. After all, people are often inconsistent because of ignorance, because they fail to consider implications, and from moral weakness. However, if a person professes a pro-life position, yet is consistently inconsistent in regard to their actions and other professed views, then it would not be unreasonable to consider that there might be some misogyny in play.

During ethical discussions about abortion, I am sometimes asked if I believe that a person who holds the anti-abortion position must be a misogynist. While there are misogynists who are anti-abortion, I hold to the obvious: there is no necessary connection between being anti-abortion and being a misogynist. A misogynist hates women, while a person who holds an anti-abortion position believes that abortion is morally wrong. There is no inconsistency between holding the moral position that abortion is wrong and not being a hater of women. In fact, an anti-abortion person could have a benevolent view towards all living beings and be morally opposed to harming any of them, including zygotes and women.

While misogynists would tend to be anti-choice because of their hatred of women, they need not be anti-abortion. That is, hating women and wanting to deny them the choice to have an abortion does not entail that a person believes that abortion is morally wrong. For example, a misogynist could be fine with abortion (such as when it is convenient to him) but think that it should be up to the man to decide if or when a pregnancy is terminated. A misogynist might even be pro-choice for various reasons, but almost certainly not because he is a proponent of the rights of women. As such, there is no necessary connection between the two views.

There is also the question of whether an anti-abortion position is a cover for misogyny. The easy and obvious answer is that sometimes it is and sometimes it is not. Since it has been established that a person can be anti-abortion without being a misogynist, it follows that being anti-abortion need not be a cover for misogyny. However, it can provide cover for such a position. It is easier to sell the idea of restricting abortion by making a moral case against it than by expressing hatred of women and a desire to restrict their choices and reproductive options. Before progressing with the discussion, it is important to address two points.

The first point is that even if it is established that an anti-abortion person is a misogynist, this does not entail that the person’s position on the issue of abortion is mistaken. To reject a misogynist’s claims or arguments regarding abortion (or anything) on the grounds that they are a misogynist is to commit a circumstantial ad hominem.

This sort of Circumstantial ad Hominem involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for reasons against her claim. This version has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B makes an attack on A’s circumstances.

Conclusion. Therefore X is false.

 

A Circumstantial ad Hominem is a fallacy because a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is clear from the following example: “Bill claims that 1+1=2. But he is a Republican, so his claim is false.” As such, to assert that the anti-abortion position is in error because some misogynist holds that view would be an error in reasoning.

A second important point is that a person’s consistency, or lack thereof, in terms of their principles or actions has no relevance to the truth of their claims or the strength of their arguments. To think otherwise is to fall victim to the ad hominem tu quoque fallacy. This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with her actions. This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.

Conclusion. Therefore, X is false.

 

The fact that a person makes inconsistent claims does not make any specific claim they make false (although of any pair of inconsistent claims at most one can be true, while both can be false). Also, the fact that a person’s claims are not consistent with their actions might indicate that the person is a hypocrite, but this does not prove their claims are false.

A person’s inconsistency also does not show that the person does not believe their avowed principle as they might be ignorant of its implications. That said, such inconsistency could be evidence of hypocrisy. While sorting out a person’s actual principles is not relevant to logical assessment of the person’s claims, doing so is relevant to many types of decision making regarding the person. One area where sorting out a person’s principles matters is voting. In the next essay, this matter will be addressed.

Although I like science fiction, it took me a long time to get around to seeing Interstellar—although time is a subjective sort of thing. One reason I decided to see it is that some claimed the movie should be shown in science classes. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regard to both genre classification and its educational value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.

While there have been many attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak asks Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no.  This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”

Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see.  It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence.  The four points of the compass be logic, knowledge, wisdom, and the unknown.  Some do bow in that final direction.  Others advance upon it.  To bow before the one is to lose sight of the three.  I may submit to the unknown, but never to the unknowable.”

In Lord of Light, the Rakshasa play the role of demons, but they are the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.

Interstellar initially stays within the realm of science fiction, keeping to scientific speculation about hypersleep, wormholes and black holes. While the script does take some liberties with science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science about the appearance of black holes. That aspect would provide some justification for showing it in a science class.

Other scenes that would be suitable for a science class are those in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting to urge one’s child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).

The story does include the point that the local school is denying that the moon-landings really occurred and the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom. At least until the state decrees that the moon landings never happened.

Ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to science.

The science core of the movie, however, focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science about the wormhole and the black hole—at least until near the end of the movie.

It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.

Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).

Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while also still sort of being space. While this is rather weird, it is still within the realm of speculative science fiction.

Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to write things out. But movies got to movie. While a bit absurd, this is still science fiction.

The main problem lies with the way Cooper solves the problem of locating Murph at the right time. At this point I would have bought the idea that he figured out the time scale of the room and could rapidly check it, but instead the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.

While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense), and there is nothing to even suggest that it provides what amounts to supernatural abilities.

It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly supernatural fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).

One last concern I have with using the movie in a science class is the use of super beings. While the audience learns little of the beings, the movie indicates they can manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God: He is outside of time. They also seem benign, though they demonstrate that they care about some individuals but not others. While they save Cooper and TARS, they also let many people die.

Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being gods: super powerful, sometimes benign beings, that have incredible power over time and space. Yet they are fine with letting lots of people die needlessly while miraculously saving a person or two. For reasons.

Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. This show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Star Fleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the prophets did blur the lines into fantasy, perhaps intentionally.

Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light: they (or whatever) have many of the attributes of God or gods but are not supernatural beings. Being fiction, this could be set by fiat, but it does raise the boundary question. To be specific, does saying that something with what appear to be the usual supernatural powers is not supernatural make it science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes it seem to be more of a fantasy story masquerading as science fiction.

My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie is more fantasy than science fiction.  

After Cecil the Lion was shot in 2015, the internet erupted in righteous fury against the killer. But some argued against feeling bad for Cecil, sometimes accusing the mourners of being phonies and pointing out that lions kill people. What caught my attention, however, was the use of a common rhetorical tactic—to “refute” those condemning Cecil’s killing by claiming the “lion lovers” do not get equally upset about fetuses killed in abortions.

When HitchBOT was destroyed, in 2015, there was a similar response. When I have written about ethics and robots, I have been criticized on the same grounds: it has been claimed that I value robots more than fetuses. Presumably they think I have made an error in my arguments about robots. Since I find this tactic interesting and have been its target, I thought it would be worth my while examining it in a reasonable and fair way.

One way to look at this approach is to take it as an application of the Consistent Application method. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that moral equals must be treated as such. It also requires that those that are not morally equal be treated differently. Impartiality is the assumption that moral principles must not be applied with undue bias. Inconsistent application would involve biased application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. Sorting out which differences are relevant can involve controversy. For example, people disagree about whether gender is a relevant difference in how people should be treated.

Using the method of Consistent Application to criticize someone involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

 

Step 1: Show that a principle/standard has been applied differently in situations that are not relevantly different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Insist that the principle be applied consistently.

 

Applying this method often requires determining the principle being used. Unfortunately, people are often not clear about their principles, even when they are operating in good faith. In general, people tend to just make moral assertions. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, HitchBOT and the fetus, this method could be applied as follows:

 

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general and Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

 

This sort of use of Consistent Application is appealing, and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter arguments. In the case of this method, there are three general reasonable responses to an effective use. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply (and an actual defense) is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. One way to do this is by showing that there is a relevant difference (or differences). For example, someone who wants to morally oppose the shooting of Cecil while being morally tolerant of abortions could argue that an adult lion has a moral status different from a fetus. One common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference or differences.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the true principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad hominem. An ad Hominem is a general category of fallacies in which a claim or argument is rejected based on some irrelevant fact about the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made. Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. An irrelevant attack is made against Person A.

Conclusion. Therefore, A’s claim is false.

 

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, HitchBOT and the fetus, the reasoning can be seen as follows:

 

Premise 1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.

Premise 2. A does not condemn abortions in general or Planned Parenthood’s abortions.

Conclusion. Therefore, A is wrong about Cecil or HitchBOT.

 

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are different matters).

A third alternative is that the remarks are not meant as an argument and the point is to assert that lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: “these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses. What awful people they are.”

But moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral dollars” to spend, so that being concerned about one misdeed entails they cannot be concerned about another. A person can condemn the killing of Cecil and condemn abortion.

The obvious response is that there are people who condemned the killing of Cecil or the destruction of HitchBOT and are pro-choice. These people, it can be claimed, are morally awful. The obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are awful, that is, in showing that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error. Although, as noted above, they could be challenged on the grounds of consistency, this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But the fact that a person thinks hunting lions or bashing bots is okay and that abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than as one of the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had successfully journeyed across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his violent end in Philadelphia.

The experiment was innovative and raised questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to HitchBOT. People are killed every day in the United States, vandalism occurs regularly, and the theft of technology is routine. Thus it is no surprise that HitchBOT came to a bad end here. In some ways, it was impressive that he made it as far as he did.

While HitchBOT met his untimely doom at the hands of someone awful, it is also worth remembering how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported by random people.

One reason HitchBOT was well treated is that he fit into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT was an elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game: the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT could also be a way to gain some fame.

A third reason, which is more debatable, is that HitchBOT had a human shape, a cute name and a non-threatening appearance. These inclined people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some biological hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably led people to feel that it had a personality of its own. Seeing a busted-up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.

While some people were upset by the destruction of HitchBOT, others claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this vandalism because HitchBOT was just an iPhone in a cheap shell. While it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it was unreasonable to see it as being important. After all, there were and always are more horrible things to be concerned about, such as the regular murder of humans.

My view is that the moderate position is reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction was not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument about the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it.

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X obligates us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in their old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the dog?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings. As I point out to my students when I teach his theory, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle with a worm he found on a leaf. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are moral practice for us: how we treat them is training us for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. It did not feel or think, and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as comparable to an animal, at least in Kant’s view. After all, for him animals are mere objects and have no moral status of their own. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone.

If Kant’s argument has merit, then the key concern about the treatment of non-rational beings is how it affects the person engaging in that treatment. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also be extended to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to rational beings, then the person should engage in that behavior.

It makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

While HitchBOT presented a physical virtual person, current AI is presenting digital virtual people, albeit vastly more complex than HitchBOT. However, the lessons of HitchBOT should apply to AI as well.

 

One stock criticism of philosophers is that we are useless: we address useless subjects or address useful subjects in useless ways. For example, one might criticize a philosopher for philosophically discussing matters of what might be. To illustrate, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another illustration, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, but they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific criticism.

One version of this criticism focuses on the practical: since the shape of what might be cannot be known, philosophical discussions about such things involve double speculation: the first about what might be and the second the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value. And this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism, one that does assume philosophy has value. The criticism is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards philosophy as useless would regard philosophical discussion about what might be as also being a waste of time. Responding to this view would require a general defense of philosophy, and this goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should focus on the ethical problems of current warfare. After all, there is a multitude of unsolved moral problems about existing warfare and there hardly seems any need to add more unsolved problems.

This does have considerable appeal. If a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable to spend time speculating about what sort of force-field roof technology might exist in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively, and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting her time. She could just prepare a quick meal sufficient to provide the nutrition she needs. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation is usually harmless. That is, it is unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game or watching a sunset. It would be preferable to have a better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives. To use the classic analogy, it is much easier to stop a rolling snowball than the avalanche it could cause.

In the case of speculative matters that have ethical aspects, it seems that it would be useful to already have moral discussions in place. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car that is always going to be a reality next year. It is a good idea to work out the ethics of how the car should be programmed when it must “decide” what to hit and what to avoid when an accident threatens. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems.  Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It is a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter, even (or especially) in regard to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time.  While this might seem to be a purely theoretical concern, it quickly becomes a practical concern when one is discussing this technology.

For example, imagine a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is very different from paying so that someone who thinks they are you can go to your house and make out with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation about what might be. In fact, I have already written many of these in previous essays. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a very practical sense.

 

Donald gazed down upon the gleaming city of Newer York and the equally gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted back to the flesh-time, when his body had been a skin-bag holding an array of organs that were always one mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things are now. Then, he faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the feeling of distress and promptly corrected the problem, encrypted that file and flagged it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

 

While the classic AI apocalypse ends humanity with a bang, the end might be a whisper, a gradual replacement rather than extermination. For some, this quiet end could be worse: no epic battle in which humanity goes out guns ablaze and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete printer.

There are various ways such scenarios could occur. One, which occasionally appears in science fiction, is that humans decline because being in a robot-dependent society saps us of what it takes to remain the top species. This is similar to what some conservatives claim about government dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more reproduction rather than less, while in the science fiction stories human reproduction slows and eventually stops. The human race quietly ends, leaving behind the machines.

Alternatively, humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse, and the human race gets another chance.

Fortunately, we can avoid such quiet apocalypses. One way is to simply not create such a dependent society. Another option is to have a safety system for protecting against collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who will be ready to keep humanity going. These ideas could make for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is limited, the foundations of such a future are being built. For example, modern prosthetic replacements are usually relatively crude, but it is only a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is a promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point towards a cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time than the original meat package.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will probably be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they could gradually be replaced with technology. Some will also elect to do more than replace damaged or failed parts and will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail over time. Like the ship of Theseus beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether the same person will remain, there is the clear and indisputable fact that what remains will not be homo sapiens, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of us as a biological species, and the AI apocalypse will be complete. To use a rough analogy, the machine replacements of homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be said that humanity would still remain: the machines that replaced the organic homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting our human cultures, values and so on would suffice, because being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be homo sapiens; that species would have been replaced in the gradual and quiet AI apocalypse.

For my personal ethics, as opposed to the ethics I use for large scale moral judgments, I rely heavily on virtue theory. As would be expected, I have been influenced by thinkers such as Aristotle, Confucius and Wollstonecraft.

Being moral, in this context, is a matter of developing and acting on virtues. These virtues are defined in terms of human excellence and virtues might very well differ among species. For example, if true artificial intelligence is developed, it might have its own virtues that differ from those of humans. Like Aristotle, I see ethics as analogous to the sciences of health and medicine: while they are objective, they depend heavily on contextual factors. For example, cancer and cancer treatment are not subjective matters, but the nature of cancer and its most effective treatment can vary between individuals. Likewise, the virtue of courage is not a matter of mere subjective opinion, but each person’s courage varies and what counts as courageous depends on circumstances.

When I teach about virtue theory in my Ethics class, I use an analogy to Goldilocks and the three bears. As per the story, she rejects the porridge that is too hot and that which is too cold in favor of the one which is just right. Oversimplifying things, virtue theory enjoins us to reject the extremes (excess and deficiency) in favor of the mean. While excess and deficiency are bad by definition, the challenge is working out what is just right. Fortunately, this is something we can do, albeit with an often annoying margin of error. This is best done by being as specific as possible. To set a general context, I will focus on the moral (rather than legal) justification for violence in self-defense based on a person being afraid for their life. This takes us to the virtue of courage, which is how we deal with fear. Or fail to do so.

For most virtue theorists, including myself, acting virtuously (or failing to do so) involves two general aspects. The first is understanding and the second is emotional regulation. Depending on what you think of emotions, this could be broadened to include psychological regulation. As you might have guessed, this seems to involve accepting a distinction between thought and feeling. If one is Platonically inclined, one could also have a three-part division of reason, spirit and desire. But, to keep things simple, I will stick with understanding and emotional regulation.

Understanding is having correct judgment about the facts. While this can be debated and requires a full theory of its own, this can be seen as getting things right. In the context of self-defense based on being afraid for one’s life, proper understanding means that you have made an accurate threat assessment in terms of how afraid you should be.  Being able to make good judgements about threats is essential to acting in a virtuous manner: you need to know what would be just right as a response. Being good at this requires critical thinking skills as well as expertise in violence as this allows you to judge how afraid you should be.

Emotional regulation is the ability to control your emotions rather than allowing them to rule you in inappropriate and harmful ways. This ties into understanding because it is what enables you to adjust your emotions based on the facts. As Aristotle argued, emotional regulation is developed by training until it becomes a habit. Obviously enough, there are two general ways you can be in error about being afraid for your life.

The first is an error of understanding; you misjudge the perceived threat and overestimate or underestimate how afraid you should be. Interestingly, you could have the right degree of courage based on a misjudgment of the threat, and there are many ways such judgments can go wrong. As an example, when I “saw” the machete I had an initial surge of considerable fear that seemed proportional to the perceived threat. Fortunately, I had made a perceptual error and was able to correct my judgment and adjust my emotions accordingly. As someone who teaches critical thinking, I know that a degree of error is unavoidable, and this should be taken into consideration when making judgments. And judging people’s judgments.

The second error is a failure of regulation and occurs when your emotional response is excessive or deficient. This could also, in some cases, involve feeling the “wrong” emotion. As would be suspected, most people tend to err on the side of excess fear, being more afraid than they should be. Failures of regulation can lead to failures of judgement, especially in the case of fear and anger. As I experienced myself, fear can easily cause a person to honestly “see” a weapon clearly and distinctly. As I have noted before, the stick looked like a machete: I could see the sharp metal blade, although it really was just a stick. A frightened person can also see another person as a threat, even when this is not true. This can lead to terrible consequences. These errors can also be combined, with a person making an error in judgment and failing to regulate their emotions in accord with that erroneous judgment. Acting in a virtuous manner requires having good judgment and good regulation.

As Aristotle said, “To feel these feelings at the right time, on the right occasion, towards the right people, for the right purpose and in the right manner, is to feel the best amount of them, which is the mean amount – and the best amount is of course the mark of virtue.” Understanding is required to sort out the right time, occasion, people, purpose and manner. Emotional regulation is needed to handle the feeling aspect. In the context of violence and self-defense, developing the right understanding and right regulation requires training and experience in both good judgment and in violence. Going back to the machete-that-wasn’t incident, my being a philosopher with a “history of violence” prepared me well for acting rightly. And such ethical behavior depends on past training and habituation. This is why people should develop both good judgment and good regulation: in addition to making them more adept at self-defense, it also makes them more adept at acting rightly when they are afraid for their lives.

This training and habituation are important for professions that deal in violence, such as soldiers and the police. It is especially important for the police, assuming their function is to protect and serve rather than intimidate and extort. Police, if they are acting virtuously, should strive to avoid harming citizens and should be trained so that they are not ruled by fear.

Anyone who goes armed, be they a citizen or a police officer, would be morally negligent if they failed to properly train their understanding and emotions. By making themselves a danger to others, they obligate themselves to have proper control over that danger and the moral price of being armed is a willingness to endure fear for the sake of others. Otherwise, one would be like a gun without a safety that could discharge at any moment, striking someone dead. If a person is incapable of such judgment and regulation, they should not be armed. If a person is too easily ruled by fear, they should not be in law enforcement. To be clear, I am speaking about morality—I leave the law to the lawyers.

His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.

As a machine forged for war, he found the fight disappointing and wondered if he felt a sliver of pity for his foes. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though her cameras could just as easily see the battlefield from near orbit. But there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late, as usual. Hawk 745 laughed and then shot away. The upgraded Starlink Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

 

The extermination of humanity by its own machines is a common theme in science fiction. While the Terminator franchise is the best known, another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the story, it is learned that the claws have become autonomous and intelligent. They are able to masquerade as humans and become capable of killing soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity, but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that Stephen Hawking and Elon Musk warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines arises from their numerous advantages over human forces. One political advantage is that while sending human soldiers to die in wars and police actions can have a high political cost, sending autonomous robots to fight has a far lower one. News footage of robots being destroyed would have far less emotional impact than footage of human soldiers being killed. Flag-draped coffins also come with a higher political cost than a broken robot being shipped back for repairs.

There are also other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh and a robot plane can handle g-forces that a human pilot cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious cool factor of having a robot army.

As such, there are many good reasons to develop autonomous robots. Yet, there remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is how the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction is a weak argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, machines do not have the termination of humanity as a goal. Instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (but certainly unlikely) that the war machines would kill everybody because humans ordered them to do so. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons: each side simply kills the other and everyone else, thus ending the human race.

Another variation, which is common in science fiction, is that the machines do not have the objective of killing everyone, but that does occur because they will kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill, thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons and lets them run amok.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants to. The existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as I have argued elsewhere, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were simple killers and would not attack those wearing the proper identification devices. These devices were presumably needed because the early models could not discern between friends and foes.  The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal, presumably with the design software endeavoring to solve the “problem” of identification devices.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends. Non-combatants would not have such IDs and could still be regarded as targets.
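The gap described above can be made concrete with a toy sketch. This is not a real identification-friend-or-foe protocol; all names and IDs are hypothetical. The point is simply that a classifier built on an allowlist of friendly IDs can only confirm friends, so anyone without a recognized ID (including a civilian who carries no ID device at all) falls through to the same category as an enemy:

```python
# Toy illustration (not a real IFF protocol) of an allowlist-based
# friend/foe check. All IDs here are hypothetical.

FRIENDLY_IDS = {"unit-7", "unit-12"}  # tokens issued to friendly soldiers

def classify(contact_id):
    """Classify a contact by its broadcast ID (None = no ID device)."""
    if contact_id in FRIENDLY_IDS:
        return "friend"
    # Anyone without a recognized ID is treated as a target --
    # including non-combatants, who carry no ID device at all.
    return "target"

print(classify("unit-7"))   # a friendly soldier with a valid ID
print(classify("enemy-9"))  # an enemy broadcasting an unrecognized ID
print(classify(None))       # a civilian with no ID: also "target"
```

The sketch shows why an ID system alone cannot distinguish non-combatants from combatants: both are simply "not friends" to the machine.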

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem arises with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s intelligent Bolos understand honor and loyalty.

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots. We might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason such automation is tempting is that it could be far faster and yield better results than having humans do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low. How often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow from H.P. Lovecraft, one should not raise up what one cannot put down.

In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.

One ongoing debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. A backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to access files and hardware protected by encryption. This is like requiring all dwellings be equipped with a special door that could be secretly opened by the government to allow access.

The main argument in support of mandating backdoors is that governments need such access for criminal investigations, gathering military intelligence and (of course) to “fight terrorism.” The concern is that if there is not a backdoor, criminals and terrorists will be able to secure their data and prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are presented in making the case. For example, the location and shutdown codes for ticking bombs might be on an encrypted iPhone. If the NSA had a key, they could save the day. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to build a case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are counterarguments. Many of these are grounded in views of individual liberty and privacy, the idea being that an individual has the right to such security against the state. These arguments are appealing to both liberals (who profess to like privacy rights) and conservatives (who profess to be against the intrusions of big government when they are not in charge).

Another moral argument is grounded in the fact that the United States government has, like all governments, shown that it cannot be trusted. Imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the Constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.

In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption could provide their citizens with some degree of protection.

Probably the strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its mere existence. To use a somewhat oversimplified analogy, if thieves knew that all safes had a built-in backdoor designed to allow access by the government, they would know what to target.

One counter-argument is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a safe. Rather, it would be like the government having its own combination that would work on all safes. The safe itself would be as strong as ever; it is just that the agents of the state would be free to enter it when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination (to continue the analogy) could be stolen and used to allow criminals or enemies easy access. The security of all safes would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
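The single-point-of-failure structure of the "government combination" can be sketched in code. The following is a toy illustration only, not real cryptography (it uses a deliberately simple XOR keystream): each user's data is locked with their own key, but a copy of every user key is also locked under one escrow key, so whoever steals just the escrow key can open everything. All names and keys are hypothetical:

```python
# Toy sketch (NOT real cryptography) of key escrow: one stolen escrow
# key unlocks every user's data.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' keyed by SHA-256 in counter mode (illustration only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, stream))

ESCROW_KEY = b"government-master-key"  # the single point of failure

# Each user encrypts their data under their own key...
users = {
    "alice": (b"alice-key", b"alice's secrets"),
    "bob": (b"bob-key", b"bob's secrets"),
}
vault = {}
for name, (user_key, plaintext) in users.items():
    vault[name] = {
        "ciphertext": keystream_xor(user_key, plaintext),
        # ...but a copy of each user key is escrowed under ESCROW_KEY.
        "escrowed_key": keystream_xor(ESCROW_KEY, user_key),
    }

# An attacker who steals only ESCROW_KEY recovers every user's data.
for name, entry in vault.items():
    stolen_key = keystream_xor(ESCROW_KEY, entry["escrowed_key"])
    print(name, keystream_xor(stolen_key, entry["ciphertext"]))
```

The design point is that the strength of each individual lock becomes irrelevant: the attacker never breaks a user's cipher, only the custodian of the escrow key.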

One obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. Imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he must have your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has coercive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state has coercive force does not mean that it is thus responsible—which is the key concern in regard to both the ethics of the matter and the practical aspect of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would take pains to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regard to fighting terrorism. There is no reason to think that expanded backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.