As the Future of Life Institute’s open letter shows, people are concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in fewer human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die on that ship. It is worth noting that the ship’s AI might eventually qualify as a person, in which case there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine, at least if their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it seems morally preferable for them to use autonomous weapons rather than conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower, something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were primitive and inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb sights, and unguided rockets were used. In following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today that have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons could be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. If recognition software continues to improve, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panicked, thus making it far more likely they would maintain target discipline (only engaging what they should engage).

To make what should be an obvious argument obvious, if autonomous vehicles and similar technology are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare. But this does lead to a reasonable concern: driverless cars seem to be the future of transportation in the sense that they will always be in the future. If getting an autonomous car to operate safely on the streets is far beyond current technology, then getting an autonomous weapon system to operate “safely” in the chaos of battle seems all but impossible.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest. These robots would slaughter everyone in the area. Human forces, one might contend, would often show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem is not in the autonomy of the weapons but in the way they are used. The dictator could achieve the same results (mass death) by deploying a fleet of drones loaded with demolition explosives, but this would presumably not be a reason to ban drones or explosives. There is also the fact that dictators, warlords and terrorists can easily find people to carry out their orders, no matter how awful they might be. That said, it could still be argued that autonomous weapons would result in more murders than would the use of human killers.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce, analogous to Kalashnikov rifles. On the downside, as the authors argue, this could result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human-operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons could also create savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to switch that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if the autonomous weapons are as cheap and effective as the letter claims, then it would seem to be a great idea to use them to replace existing weapons.

But there is the reasonable concern that decisions about military spending in some countries are not based on a rational assessment of costs and benefits. Such spending can be aimed at diverting resources from social programs and into the coffers of corporations. In such cases the availability of cheap, effective weapons would not meaningfully change defense spending.

A fourth argument in favor of autonomous weapons is that they could be deployed, at low political cost, on peacekeeping operations. Currently, the UN must send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they face. However, if autonomous weapons were as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost as people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage terrorist groups, such as was the case with ISIS, without having to pay the high political cost of sending in human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

Considering the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.

Back on July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. As of this writing, you can still sign it. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in my Call of Cthulhu campaign, I assume this group is sincere in its professed values. While I do respect their position, I believe they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon can select and engage targets without human intervention. A science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety.” A real-world example, albeit a stupid one, is the land mine: mines are placed and engage automatically.

The first main argument presented in the letter is a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these people already use existing weapons to do. This raises concern about whether autonomous weapons would have a significant impact.

The authors of the letter have a reasonable point: as science fiction stories have long pointed out, killer robots tend to obey orders and they can (in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit evil. Humans are also quite good at doing evil and although killer robots are awesomely competent in fiction, it remains to be seen if they will be better than humans in the real world, especially if they are cheap, mass-produced weapons.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future, although small groups and individuals can already do significant damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars. Even if robotic weapons are not manufactured, enterprising terrorists and warlords can build their own. Think, for example, of a self-driving car equipped with machine guns or loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time without cheap, off the shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They also argue that the creation of AI weapons could provoke a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here. Members of the public can panic over technology in ways that can impede the public good. One example is vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public. People do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads and they could be set to kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often have no problems using WMDs already and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

While there are many virtues (and vices) relevant to my philosophy of violence, the virtue of courage is central. In this context, the virtue of courage is the ability to regulate the emotion of fear so that you feel the right amount of fear at the right time, on the right occasion, towards the right cause, for the right purpose and in the right manner. Sorting out all these rights is challenging.

While cowardice tends to be condemned more than foolhardiness, both are vices. An excess of courage can lead a person to misjudge a situation and choose violence from overconfidence. But I am inclined to think that a deficiency of courage is the more dangerous vice, since fear distorts perception and judgment in ways that can lead to a person acting wrongly. As I found out in the “machete that wasn’t” incident, fear can enhance the misperception of objects, making a stick appear to be a machete or a phone look like a gun. This can cause a person to use what they think is justified violence as they protect themselves from, for example, a machete. Fear can also cause a person to misread other people and situations, such as perceiving innocent movements as dangerous. This can also cause people to use what they think is justified violence as they respond to someone they see as a clear threat. These are two of the many reasons why courage is an important virtue and why training for courage is a worthwhile endeavor.

Thanks to the problem of other minds I, unlike Bill Clinton, cannot feel your pain or your fear. I can only discuss my own internal experiences and infer, by analogy, that you have similar experiences. Based on my experiences, it seems that there are at least two modes of courage. The first is what I experienced in the “machete that wasn’t” incident and the second is the type I experience in the context of heights, such as flying.

When I (wrongly) perceived a person running at me with a machete, I felt a spike of fear. After that triggered a useful burst of adrenalin, the fear vanished and was replaced by cold, calm clarity. I was able to act with courage for the simple reason that I was no longer afraid. While getting a closer view of the “machete” allowed me to see it was just a big stick, the absence of fear no doubt also helped and allowed me to assess the situation more accurately. It also allowed me to act rationally rather than being driven by fear. This enabled me to speak to the person rather than simply attacking or endangering myself by fleeing in fear. I did not feel courageous and can best describe it as feeling utterly normal, as if I was still just running along peacefully or reading a book. Fortunately, I also did not feel fearless in the sense of being foolhardy and ready to engage in savage battle without care or concern. It, to go with my usual porridge metaphor, was just right—just what was needed to take the right action.

While I am a philosopher, I am interested in neuroscience, and I wonder what a brain scan at that moment would have revealed. While I was not aware of any fear (and hence, by definition, not feeling fear), perhaps those fear neurons were firing away as I ignored them. Which leads to the second mode of courage.

I am terrified of heights. While this might be understandable given that I had a ladder fail and suffered a quadriceps tendon tear, I was afraid of heights long before then. Getting on a ladder, being on a mountain, looking out the window of a tall building, and flying cause a welling of fear in my soul. I even feel it in video games. Even when my rationality tells me I am not in danger, I can feel the threat. Yes, I have tried various means of habituating myself to heights, but these have had no effect on my feeling fear. Last May, when I was flying home for my father’s funeral, I told the person (a retired NFL player) sitting beside me about my fear and he sensibly asked me “Are you going to be a problem?” I assured him that I would not, and when he saw me showing no signs of concern during the takeoff, he relaxed his vigilance. That was when, of course, I struck. I am joking; otherwise you would have seen a YouTube video about an NFL player tackling a philosophy professor on a flight to Atlanta. While I appeared calm and acted normally, I was terrified the entire time. Unlike my courage during a potential fight, my courage in the face of heights manifests very differently: the fear is there the entire time. Sometimes it feels as if the fear is like a strong dog pulling on a leash, but I can keep it from running wild. Other times, it is like a wind I can feel, but one that has no power over me. The difference might be because they are different fears, because of a difference in my training, or perhaps I am far more afraid of heights than I am of fighting. Fortunately, the result is the same: I can act rationally as opposed to being driven about by fear. But I do find feeling fear more tiring than the absence of the feeling, although I can endure the fear of heights for hours (and I have yet to find my limit).
I suspect that one difference is that my training increases my confidence in dealing with potential violence, while there is no training I can do to counter the harms of falling from a great height. But I do admit, my fear of heights is excessive, even given the fact that a fall can easily injure or kill me. I am still trying to address this, although without success. But what about when people are trained to be afraid and then sent out on the streets with guns?

In the United States, there is a longstanding trend of training police to be warriors. While there are obvious concerns about seeing the police as warriors rather than those who protect and serve, warrior-style police training teaches officers to feel, and act upon, fear by presenting the world as an extremely dangerous place where any interaction can kill them. Encouraging officers to view citizens as potential threats is likely to make them more afraid, especially if it is not countered by proper training in the virtues.

As discussed above and in the earlier essays, fear shapes perceptions in ways that increase the chances of unjustified and needless violence. An officer habituated to be afraid is more likely to see a phone as a gun and to interpret an innocent movement as a prelude to an attack. While officers should have a realistic view of dangers, their training should focus on habituating them to be masters of their fear rather than ruled by it. Unless, of course, the goal is to send frightened warriors out into the streets with the intention that they will be more likely to engage in violence. What makes matters even worse is the deluge of fear mongering and racism flowing forth from some media outlets and from some politicians. We, and this includes the police, are told that minorities and migrants are a terrible threat, likely to engage in violence because they are members of gangs, physically dangerous and morally wicked. Anyone who is inundated with this is likely to have their fear increased, making it less likely they will act with courage, even if they wish to do so. While this is but one factor among many, it does help explain why some ICE agents and police officers use violence needlessly: they have been trained to be fearful warriors and deluged with a spew of terror towards the people they are interacting with. If the goal is for people to be needlessly and unjustly injured and killed, this all makes “sense.” But if we want protectors who serve the public good, we must change the training and the culture of fear propagated by the wicked.

In my previous essay I laid the groundwork for the discussion that is to follow about the anti-abortion moral position and misogyny. As argued in that essay, a person can be anti-abortion and not a misogynist. It was also shown that attacking a person’s circumstances or consistency in regard to their professed belief in an anti-abortion moral position does not disprove that position. It was, however, contended that consistency does matter when sorting out whether a person really does hold to an anti-abortion position or is, in fact, using that as cover for misogyny.

Before Donald Trump, being openly misogynistic was generally a way to lose an election. As such, a clever (or cleverly managed) misogynist will endeavor to conceal his misogyny behind more laudable moral positions, such as professing to being pro-life. This, obviously, sells better than being anti-women.


Republicans typically profess that they are pro-life, but there is the question of whether they truly hold to this principle. Republicans are also regularly accused of being misogynists, and part of this involves asserting that their anti-abortion stance is an anti-women stance. One way to sort this out is to consider whether a person acts consistently with their professed pro-life but not anti-women position. Since people are inconsistent through ignorance and moral weakness, this will not conclusively reveal the truth of the matter, but it is perhaps the best method of empirical investigation.

On the face of it, a pro-life position is the view that it is morally wrong to kill. If a person held to this principle consistently, then they would oppose all forms of killing and this would include hunting, killing animals for food, capital punishment, and killing in war. There are people who do hold to this view and are thus consistent. This view was taken very seriously by Christian thinkers such as St. Augustine and St. Thomas Aquinas. After all, as I say to my Ethics students, it would be a hell of a thing to go to Hell for eating a hamburger.

The pro-life view that killing is wrong would seem to require a great deal of a person. In addition to being against just straight-up killing in war, abortion and capital punishment, it would also seem to require being against other things that kill people, such as poverty, pollution and disease. As such, a pro-life person would seem to be required to favor medical and social aid to fight things like disease and poverty that kill people.

As is obvious, there are many who profess being pro-life while opposing things that would reduce deaths. They even oppose such things as providing food support for mothers and infants who are mired in poverty. One might thus suspect that they are not so much pro-life as anti-woman. Of course, a person could be anti-abortion and still be opposed to society rendering aid to people to prevent death.

One option is to be against killing but be fine with letting people die. While philosophers do make this moral distinction, it seems a bit problematic for a person to claim that he opposes abortion because killing fetuses is wrong, but that not providing aid and support to teenage mothers, the sick, and the starving is acceptable because one is just letting them die rather than killing them. Given this view, a “pro-life” person of this sort would be okay with a mother just abandoning her baby—she would simply be letting the baby die rather than killing her.

People who profess to be pro-life are also often morally on board with killing and eating animals. The ethics of killing animals (and plants) was also addressed explicitly by Augustine and Aquinas. One way to be pro-life but hold that killing animals is acceptable is to contend that humans have a special moral status that other living things lack. The usual justification is that we are better than them, so we can kill (and eat) them. This view was held by St. Augustine and St. Anselm.

However, embracing the superiority principle does provide an opening that can be used to justify abortion. One merely needs to argue that the fetus has a lower moral status than the woman, and this would seem to warrant abortion.

Many people who profess a pro-life view also favor capital punishment and war. In fact, it is common to hear a politician smoothly switch from speaking of the sanctity of life to the need to kill terrorists and criminals. One way to be pro-life and accept capital punishment and war is to argue that it is the killing of innocents that is wrong. Killing the non-innocent is fine.

The obvious problem is that capital punishment sometimes kills innocent people, and war always involves the death of innocents. If these killings are warranted in terms of interests, self-defense, or on utilitarian grounds, then the door is open for the same reasoning being applied to abortion. After all, if innocent adults and children can be killed for national security, economic interests or to protect us from terrorists, then fetuses can also be killed for the interests of the woman or on utilitarian grounds. Also, animals and plants are clearly innocent. Someone who is fine with killing people for the sake of interests or on utilitarian grounds yet professes to be devoutly pro-life might justifiably be suspected of being more anti-women than pro-life.

A professed pro-life position can also be interpreted as the moral principle that abortions should be prevented. This is, obviously, better described as anti-abortion rather than pro-life. One obvious way to prevent abortions is to prevent women from having them. This need not be a misogynistic view—one would need to consider why the person holds to this view and this can be explored by considering the person’s other expressed views on related matters.

If a person is anti-abortion, then she should presumably support ways to prevent abortion other than merely stopping women from having them. Two rather effective ways to reduce the number of abortions (and thus prevent some) are effective sex education and access to birth control. These significantly reduce the number of unwanted pregnancies and thus reduce the number of abortions. Not surprisingly, abstinence-focused “sex education” fails dismally.

Being anti-abortion is rather like being anti-traffic fatality. Telling people to not drive will not really help. Teaching people how to drive safely and ensuring that protection is readily available does work quite well.

Because of this, if a person professes to be anti-abortion, yet is opposed to effective sex education and birth control, then it is reasonable to suspect misogyny. This is, of course, not conclusive: the person might have no dislike of women and sincerely believe that ignorance about sex is best, that abstinence works, and that birth control is evil. The person would not be a misogynist—just in error.

In closing, it must be reiterated that just because a person is inconsistent about their professed pro-life moral principles, it does not follow that they must be a misogynist. After all, people are often inconsistent because of ignorance, because they fail to consider implications, and from moral weakness. However, if a person professes a pro-life position, yet is consistently inconsistent in regard to their actions and other professed views, then it would not be unreasonable to consider that there might be some misogyny in play.

During ethical discussions about abortion, I am sometimes asked if I believe that a person who holds the anti-abortion position must be a misogynist. While there are misogynists who are anti-abortion, I hold to the obvious: there is no necessary connection between being anti-abortion and being a misogynist. A misogynist hates women, while a person who holds an anti-abortion position believes that abortion is morally wrong. There is no inconsistency between holding the moral position that abortion is wrong and not being a hater of women. In fact, an anti-abortion person could have a benevolent view towards all living beings and be morally opposed to harming any of them, including zygotes and women.

While misogynists would tend to be anti-choice because of their hatred of women, they need not be anti-abortion. That is, hating women and wanting to deny them the choice to have an abortion does not entail that a person believes that abortion is morally wrong. For example, a misogynist could be fine with abortion (such as when it is convenient to him) but think that it should be up to the man to decide if or when a pregnancy is terminated. A misogynist might even be pro-choice for various reasons, but almost certainly not because he is a proponent of the rights of women. As such, there is no necessary connection between the two views.

There is also the question of whether an anti-abortion position is a cover for misogyny. The easy and obvious answer is that sometimes it is and sometimes it is not. Since it has been established that a person can be anti-abortion without being a misogynist, it follows that being anti-abortion need not be a cover for misogyny. However, it can provide cover for such a position. It is easier to sell the idea of restricting abortion by making a moral case against it than by expressing hatred of women and a desire to restrict their choices and reproductive options. Before progressing with the discussion, it is important to address two points.

The first point is that even if it is established that an anti-abortion person is a misogynist, this does not entail that the person’s position on the issue of abortion is mistaken. To reject a misogynist’s claims or arguments regarding abortion (or anything) on the grounds that they are a misogynist is to commit a circumstantial ad hominem.

This sort of Circumstantial ad Hominem involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for reasons against her claim. This version has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B makes an attack on A’s circumstances.

Conclusion. Therefore X is false.

 

A Circumstantial ad Hominem is a fallacy because a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is clear from following example: “Bill claims that 1+1 =2. But he is a Republican, so his claim is false.” As such, to assert that the anti-abortion position is in error because some misogynist holds that view would be an error in reasoning.

A second important point is that a person’s consistency, or lack thereof, in terms of their principles or actions has no relevance to the truth of their claims or the strength of their arguments. To think otherwise is to fall victim to the ad hominem tu quoque fallacy. This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with her actions. This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.

Conclusion. Therefore, X is false.

 

The fact that a person makes inconsistent claims does not make any specific claim they make false (although of any pair of inconsistent claims only one can be true while both can be false). Also, the fact that a person’s claims are not consistent with their actions might indicate that the person is a hypocrite, but this does not prove their claims are false.

A person’s inconsistency also does not show that the person does not believe their avowed principle as they might be ignorant of its implications. That said, such inconsistency could be evidence of hypocrisy. While sorting out a person’s actual principles is not relevant to logical assessment of the person’s claims, doing so is relevant to many types of decision making regarding the person. One area where sorting out a person’s principles matters is voting. In the next essay, this matter will be addressed.

Although I like science fiction, it took me a long time to get around to seeing Interstellar—although time is a subjective sort of thing. One reason I decided to see it is that some claimed the movie should be shown in science classes. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regard to both genre classifications and its educational value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.

While there have been many attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak asks Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no.  This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”

Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see.  It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence.  The four points of the compass be logic, knowledge, wisdom, and the unknown.  Some do bow in that final direction.  Others advance upon it.  To bow before the one is to lose sight of the three.  I may submit to the unknown, but never to the unknowable.”

In Lord of Light, the Rakshasa play the role of demons, but they are the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.

Interstellar initially stays within the realm of science fiction, keeping to the sphere of scientific speculation about hypersleep, wormholes and black holes. While the script does take some liberties with science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science about the appearance of black holes. That aspect would provide some justification for showing it in a science class.

Another part of the movie that would be suitable for a science class is the sequence in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting for a parent to urge his child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).

The story does include the point that the local school is denying that the moon landings really occurred and the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom. At least until the state decrees that the moon landings never happened.

Ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to science.

The science core of the movie focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science about the wormhole and the black hole—at least until near the end of the movie.

It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.

Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).

Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it is also sort of still space. While this is rather weird, it is still within the realm of speculative science fiction.

Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to write things out. But movies got to movie. While a bit absurd, this is still science fiction.

The main problem lies with the way Cooper solves the problem of locating Murph at the right time. While at this point, I would have bought the idea that he figured out the time scale of the room and could rapidly check it, the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.

While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense) and there is nothing to even suggest that it provides what amounts to supernatural abilities.

It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly supernatural fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).

One last concern I have with using the movie in a science class is the use of super beings. While the audience learns little of the beings, the movie indicates they can manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God: He is outside of time. They also seem benign, though they demonstrate that they care about some individuals but not others. While they save Cooper and TARS, they also let many people die.

Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being gods: super powerful, sometimes benign beings, that have incredible power over time and space. Yet they are fine with letting lots of people die needlessly while miraculously saving a person or two. For reasons.

Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. This show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Star Fleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the prophets did blur the lines into fantasy, perhaps intentionally.

Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light: they (or whatever they are) have many of the attributes of God or gods but are not supernatural beings. Being fiction, this could be set by fiat, but this does raise the boundary question. To be specific, does saying that something with what appear to be the usual supernatural powers is not supernatural make it science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes it seem to be more of a fantasy story masquerading as science fiction.

My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie is more fantasy than science fiction.  

After Cecil the Lion was shot in 2015, the internet erupted in righteous fury against the killer. But some argued against feeling bad for Cecil, sometimes accusing the mourners of being phonies and pointing out that lions kill people. What caught my attention, however, was the use of a common rhetorical tactic—to “refute” those condemning Cecil’s killing by claiming the “lion lovers” do not get equally upset about fetuses killed in abortions.

When HitchBOT was destroyed, in 2015, there was a similar response. When I have written about ethics and robots, I have been criticized on the same grounds: it has been claimed that I value robots more than fetuses. Presumably they think I have made an error in my arguments about robots. Since I find this tactic interesting and have been its target, I thought it would be worth my while examining it in a reasonable and fair way.

One way to look at this approach is to take it as an application of the Consistent Application method. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that moral equals must be treated as such. It also requires that those that are not morally equal be treated differently.  Impartiality is the assumption that moral principles must not be applied with undue bias. Inconsistent application would involve biased application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. Sorting out which differences are relevant can involve controversy. For example, people disagree about whether gender is a relevant difference in how people should be treated.

Using the method of Consistent Application to criticize someone involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

 

Step 1: Show that a principle/standard has been applied differently in situations that are not adequately different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Insist that the principle be applied consistently.

 

Applying this method often requires determining the principle being used. Unfortunately, people are not often clear about their principles, even if they are operating in good faith. In general, people tend to just make moral assertions. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, the HitchBOT and the fetus, this method could be applied as follows:

 

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general or by Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

 

This sort of use of Consistent Application is appealing, and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter arguments. In the case of this method, there are three general reasonable responses to an effective use. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply (and an actual defense) is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. One way to do this is by showing that there is a relevant difference (or differences). For example, someone who wants to morally oppose the shooting of Cecil while being morally tolerant of abortions could argue that an adult lion has a moral status different from a fetus. One common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference or differences.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the true principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad Hominem. An ad Hominem is a general category of fallacies in which a claim or argument is rejected based on some irrelevant fact about the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made. Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. An irrelevant attack is made against Person A.

Conclusion. Therefore, A’s claim is false.

 

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, the HitchBOT and the fetus, the reasoning can be seen as follows:

 

Premise 1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.

Premise 2. A does not condemn abortions in general or Planned Parenthood’s abortions.

Conclusion. Therefore, A is wrong about Cecil or HitchBOT.

 

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are different matters).

A third alternative is that the remarks are not meant as an argument and the point is to assert that lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: “these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses. What awful people they are.”

But moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral dollars” to spend, so that being concerned about one misdeed entails they cannot be concerned about another. A person can condemn the killing of Cecil and condemn abortion.

The obvious response is that there are people who condemned the killing of Cecil or the destruction of HitchBOT and are pro-choice. These people, it can be claimed, are morally awful. The obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are awful. That is, it must be shown that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error. Although, as noted above, they could be challenged on the grounds of consistency. But this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But the fact that a person thinks that hunting lions or bashing bots is okay and that abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than as either of the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had already journeyed successfully across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his violent end in Philadelphia.

The experiment was innovative and raised questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to HitchBOT. People are killed every day in the United States, vandalism occurs regularly, and the theft of technology is routine. Thus it is no surprise that HitchBOT came to a bad end in the United States. In some ways, it was impressive that he made it as far as he did.

While HitchBOT met his untimely doom at the hands of someone awful, it is also worth remembering how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported by random people.

One reason HitchBOT was well treated is that it fit into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT was an elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game: the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT could also be a way to gain some fame.

A third reason, which is more debatable, is that HitchBOT had a human shape, a cute name and a non-threatening appearance. These features inclined people to react positively to him. Natural selection has probably favored humans that are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some biological hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably influenced people into feeling that it had a personality of its own. Seeing a busted up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted up remains of a fellow human.

While some people were upset by the destruction of HitchBOT, others claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this vandalism because HitchBOT was just an iPhone in a cheap shell. While it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it was unreasonable to see it as being important. After all, there were and always are more horrible things to be concerned about, such as the regular murder of humans.

My view is that the moderate position is reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction was not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument about the ethics of treating entities that lack moral status of their own. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it.

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X obligates us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the dog?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings. As I point out to my students when I teach his theory, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle with a worm he found on a leaf. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are moral practice for us: how we treat them is training us for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think, and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as comparable to an animal, at least in Kant’s view. After all, for him animals are mere objects and have no moral status of their own. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone.

If Kant’s argument has merit, then the key concern about the treatment of non-rational beings is how it affects the behavior of the person engaging in the behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also be extended to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

It makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

While HitchBOT presented a physical virtual person, current AI is presenting digital virtual people, albeit vastly more complex than HitchBOT. However, the lessons of HitchBOT should apply to AI as well.

 

One stock criticism of philosophers is that we are useless: we address useless subjects or address useful subjects in useless ways. For example, one might criticize a philosopher for philosophically discussing matters of what might be. To illustrate, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another illustration, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, but they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific criticism.

One version of this criticism focuses on the practical: since the shape of what might be cannot be known, philosophical discussions about such things involve double speculation: the first about what might be and the second the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value. And this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism, one that assumes philosophy has value. The criticism is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards philosophy as useless would regard philosophical discussion about what might be as also being a waste of time. Responding to this view would require a general defense of philosophy and this goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should focus on the ethical problems of current warfare. After all, there is a multitude of unsolved moral problems about existing warfare and there hardly seems any need to add more unsolved problems.

This does have considerable appeal. If a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively, and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent, it would need to be applied across the board, so that a person applies all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting her time. She could just prepare a quick meal sufficient to provide the nutrition she needs. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation is usually harmless. That is, it is unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game or watching a sunset. It would be preferable to have a better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives. To use the classic analogy, it is much easier to stop a rolling snowball than the avalanche it could cause.

In the case of speculative matters that have ethical aspects, it seems that it would be useful to already have moral discussions in place. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car that is always going to be a reality next year. It is a good idea to work out the ethics of how the car should be programmed when it must “decide” what to hit and what to avoid when an accident threatens. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems.  Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It is a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have practical implications that matter, even (or especially) in regard to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be that person across time? While this might seem to be a purely theoretical concern, it quickly becomes a practical concern when one is discussing this technology.

For example, imagine a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is very different from paying so that someone who thinks they are you can go to your house and make out with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation about what might be. In fact, I have already written many of them in previous essays. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a very practical sense.

 

Donald gazed down upon the gleaming city of Newer York and the equally gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted back to the flesh-time, when his body had been a skin-bag holding an array of organs that were always one mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he had faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the feeling of distress and promptly corrected the problem, encrypted that file and flagged it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

 

While the classic AI apocalypse ends humanity with a bang, the end might be a whisper, a gradual replacement rather than extermination. For some, this quiet end could be worse: no epic battle in which humanity goes out guns ablaze and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete printer.

There are various ways such scenarios could occur. One, which occasionally appears in science fiction, is that humans decline because being in a robot-dependent society saps us of what it takes to remain the top species. This is similar to what some conservatives claim about government-dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more reproduction, rather than less, while in the science fiction stories human reproduction slows and eventually stops. The human race quietly ends, leaving behind the machines.

Alternatively, humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse, and the human race gets another chance.

Fortunately, we can avoid such quiet apocalypses. One way is simply not to create such a dependent society. Another option is to have a safety system for protecting against collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who would be ready to keep humanity going. These ideas could make for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is limited, the foundations of such a future are being built. For example, modern prosthetic replacements are usually relatively crude, but it is only a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is a promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point towards a cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time than the original meat package.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will probably be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they could gradually be replaced with technology. Some will also elect to do more than replace damaged or failed parts and will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail over time. Like the ship of Theseus beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether the same person will remain, there is the clear and indisputable fact that what remains will not be homo sapiens, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of us as a biological species and the AI apocalypse will be complete. To use a rough analogy, the machine replacements of homo sapiens will be like the fossilization of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be said that humanity would still remain: the machines that replaced the organic homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting our human cultures, values and so on would suffice, because being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be homo sapiens; that species would have been replaced in the gradual and quiet AI apocalypse.