During ethical discussions about abortion, I am sometimes asked whether I believe that a person who holds the anti-abortion position must be a misogynist. While there are misogynists who are anti-abortion, I hold to the obvious: there is no necessary connection between being anti-abortion and being a misogynist. A misogynist hates women, while a person who holds an anti-abortion position believes that abortion is morally wrong. There is no inconsistency between holding the moral position that abortion is wrong and not being a hater of women. In fact, an anti-abortion person could have a benevolent view towards all living beings and be morally opposed to harming any of them, including zygotes and women.

While misogynists would tend to be anti-choice because of their hatred of women, they need not be anti-abortion. That is, hating women and wanting to deny them the choice to have an abortion does not entail that a person believes that abortion is morally wrong. For example, a misogynist could be fine with abortion (such as when it is convenient to him) but think that it should be up to the man to decide if or when a pregnancy is terminated. A misogynist might even be pro-choice for various reasons, though almost certainly not because he is a proponent of the rights of women. As such, there is no necessary connection between the two views.

There is also the question of whether an anti-abortion position is a cover for misogyny. The easy and obvious answer is that sometimes it is and sometimes it is not. Since it has been established that a person can be anti-abortion without being a misogynist, it follows that being anti-abortion need not be a cover for misogyny. However, it can provide cover for such a position. It is easier to sell the idea of restricting abortion by making a moral case against it than by expressing hatred of women and a desire to restrict their choices and reproductive options. Before progressing with the discussion, it is important to address two points.

The first point is that even if it is established that an anti-abortion person is a misogynist, this does not entail that the person’s position on the issue of abortion is mistaken. To reject a misogynist’s claims or arguments regarding abortion (or anything) on the grounds that they are a misogynist is to commit a circumstantial ad hominem.

This sort of Circumstantial ad Hominem involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for reasons against her claim. This version has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B makes an attack on A’s circumstances.

Conclusion. Therefore X is false.

 

A Circumstantial ad Hominem is a fallacy because a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is clear from the following example: “Bill claims that 1+1=2. But he is a Republican, so his claim is false.” As such, to assert that the anti-abortion position is in error because some misogynist holds that view would be an error in reasoning.

A second important point is that a person’s consistency, or lack thereof, in terms of their principles or actions has no relevance to the truth of their claims or the strength of their arguments. To think otherwise is to fall victim to the ad hominem tu quoque fallacy. This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with her actions. This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.

Conclusion. Therefore, X is false.

 

The fact that a person makes inconsistent claims does not make any specific claim they make false (of any pair of inconsistent claims, at most one can be true, while both can be false: “the ball is solid red” and “the ball is solid blue” cannot both be true, yet both are false if the ball is green). Also, the fact that a person’s claims are not consistent with their actions might indicate that the person is a hypocrite, but this does not prove their claims are false.

A person’s inconsistency also does not show that the person does not believe their avowed principle as they might be ignorant of its implications. That said, such inconsistency could be evidence of hypocrisy. While sorting out a person’s actual principles is not relevant to logical assessment of the person’s claims, doing so is relevant to many types of decision making regarding the person. One area where sorting out a person’s principles matters is voting. In the next essay, this matter will be addressed.

After Cecil the Lion was shot in 2015, the internet erupted in righteous fury against the killer. But some argued against feeling bad for Cecil, sometimes accusing the mourners of being phonies and pointing out that lions kill people. What caught my attention, however, was the use of a common rhetorical tactic: to “refute” those condemning Cecil’s killing by claiming the “lion lovers” do not get equally upset about fetuses killed in abortions.

When HitchBOT was destroyed in 2015, there was a similar response. When I have written about ethics and robots, I have been criticized on the same grounds: it has been claimed that I value robots more than fetuses, presumably because my critics think I have made an error in my arguments about robots. Since I find this tactic interesting and have been its target, I thought it would be worthwhile to examine it in a reasonable and fair way.

One way to look at this approach is to take it as an application of the Consistent Application method. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that moral equals must be treated as such. It also requires that those who are not morally equal be treated differently. Impartiality is the assumption that moral principles must not be applied with undue bias; inconsistent application would involve biased application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. Sorting out which differences are relevant can involve controversy. For example, people disagree about whether gender is a relevant difference in how people should be treated.

Using the method of Consistent Application to criticize someone involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

 

Step 1: Show that a principle/standard has been applied differently in situations that are not relevantly different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Insist that the principle be applied consistently.

 

Applying this method often requires determining the principle being used. Unfortunately, people are often not clear about their principles, even when they are operating in good faith. In general, people tend to just make moral assertions. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, the HitchBOT and the fetus, this method could be applied as follows:

 

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general or Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

 

This sort of use of Consistent Application is appealing, and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counter arguments. In the case of this method, there are three general reasonable responses to an effective use. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply (and an actual defense) is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. One way to do this is by showing that there is a relevant difference (or differences). For example, someone who wants to morally oppose the shooting of Cecil while being morally tolerant of abortions could argue that an adult lion has a moral status different from a fetus. One common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference or differences.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the true principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad hominem. An ad Hominem is a general category of fallacies in which a claim or argument is rejected based on some irrelevant fact about the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made. Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. An irrelevant attack is made against Person A.

Conclusion. Therefore, A’s claim is false.

 

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, the HitchBOT and the fetus, the reasoning can be seen as follows:

 

Premise 1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.

Premise 2. A does not condemn abortions in general or Planned Parenthood’s abortions.

Conclusion. Therefore, A is wrong about Cecil or HitchBOT.

 

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of the killing of Cecil or HitchBOT (although a person can, of course, be engaged in inconsistency or other errors—but these are different matters).

A third alternative is that the remarks are not meant as an argument and the point is to assert that lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: “these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses. What awful people they are.”

But moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral dollars” to spend, so that being concerned about one misdeed entails they cannot be concerned about another. A person can condemn the killing of Cecil and condemn abortion.

The obvious response is that there are people who condemned the killing of Cecil or the destruction of HitchBOT and are pro-choice. These people, it can be claimed, are morally awful. The obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are awful, that is, in showing that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and pro-choice does not show they are in error. They could, as noted above, be challenged on the grounds of consistency, but this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. But the fact that a person thinks hunting lions or bashing bots is okay while abortion is wrong does not prove that the opposing view is in error. It just states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than as either of the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.


Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had successfully journeyed across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his violent end in Philadelphia.

The experiment was innovative and raised questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to HitchBOT. People are killed every day in the United States, vandalism occurs regularly, and the theft of technology is routine. Thus it is no surprise that HitchBOT came to a bad end in the United States. In some ways, it was impressive that he made it as far as he did.

While HitchBOT met his untimely doom at the hands of someone awful, it is also worth remembering how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported by random people.

One reason HitchBOT was well treated is that he fit into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT was an elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game: the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT could also be a way to gain some fame.

A third reason, which is more debatable, is that HitchBOT had a human shape, a cute name and a non-threatening appearance, all of which inclined people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some biological hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably influenced people into feeling that it had a personality of its own. Seeing a busted-up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.

While some people were upset by the destruction of HitchBOT, others claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this vandalism because HitchBOT was just an iPhone in a cheap shell. While it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it was unreasonable to see it as being important. After all, there were and always are more horrible things to be concerned about, such as the regular murder of humans.

My view is that the moderate position is reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction was not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument about the ethics of treating entities that lack moral status. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it.

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X obligates us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the dog?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings. As I point out to my students when I teach his theory, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle with a worm he found on a leaf. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are moral practice for us: how we treat them is training us for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. It did not feel or think, and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as comparable to an animal, at least in Kant’s view. After all, for him animals are mere objects and have no moral status of their own. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone.

If Kant’s argument has merit, then the key concern about the treatment of non-rational beings is how it affects the behavior of the person engaging in the behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also be extended to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

It makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

While HitchBOT presented a physical virtual person, current AI is presenting digital virtual people, albeit vastly more complex than HitchBOT. However, the lessons of HitchBOT should apply to AI as well.

 

One stock criticism of philosophers is that we are useless: we address useless subjects or address useful subjects in useless ways. For example, one might criticize a philosopher for philosophically discussing matters of what might be. To illustrate, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another illustration, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, but they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific criticism.

One version of this criticism focuses on the practical: since the shape of what might be cannot be known, philosophical discussions about such things involve double speculation: the first about what might be and the second the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value. And this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism, one that assumes philosophy has value. The criticism is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards philosophy as useless would regard philosophical discussion about what might be as also being a waste of time. Responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should focus on the ethical problems of current warfare. After all, there is a multitude of unsolved moral problems about existing warfare and there hardly seems any need to add more unsolved problems.

This does have considerable appeal. If a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively, and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting her time. She could just prepare a quick meal sufficient to provide the nutrition she needs. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation is usually harmless. That is, it is unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game or watching a sunset. It would be preferable to have a better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives. To use the classic analogy, it is much easier to stop a rolling snowball than the avalanche it could cause.

In the case of speculative matters that have ethical aspects, it seems that it would be useful to already have moral discussions in place. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car that is always going to be a reality next year. It is a good idea to work out the ethics of how the car should be programmed when it must “decide” what to hit and what to avoid when an accident threatens. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems.  Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It is a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.
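To make the driverless-car example concrete, here is a deliberately toy sketch in Python (all names, weights and numbers are hypothetical illustrations, not any manufacturer’s actual method) of what “programming the ethics” of an unavoidable collision might look like. The point is that the moral theory lives in the weights: changing them changes whose interests the car prioritizes, which is exactly why the ethics needs to be worked out in advance.

```python
# Toy sketch: scoring unavoidable-collision maneuvers by a harm function.
# All names and weights are hypothetical; the weights ARE the ethical theory.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    humans_at_risk: int
    animals_at_risk: int
    property_damage: float  # hypothetical dollar estimate

def harm_score(o: Outcome) -> float:
    # A crude lexical-ish weighting: humans dominate animals, animals
    # dominate property. Different weights encode different moral views.
    return 1_000_000 * o.humans_at_risk + 1_000 * o.animals_at_risk + o.property_damage

options = [
    Outcome("brake straight", humans_at_risk=1, animals_at_risk=0, property_damage=5_000.0),
    Outcome("swerve left", humans_at_risk=0, animals_at_risk=1, property_damage=20_000.0),
]

# The car "decides" by picking the least-harm option under the chosen weights.
print(min(options, key=harm_score).maneuver)  # prints: swerve left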

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications, even (or especially) in regard to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be the same person across time? While this might seem to be a purely theoretical concern, it quickly becomes a practical concern when one is discussing this technology.

For example, imagine a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is very different from paying so that someone who thinks they are you can go to your house and make out with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation about what might be. In fact, I have already written many of these in previous essays. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a very practical sense.

 

For my personal ethics, as opposed to the ethics I use for large scale moral judgments, I rely heavily on virtue theory. As would be expected, I have been influenced by thinkers such as Aristotle, Confucius and Wollstonecraft.

Being moral, in this context, is a matter of developing and acting on virtues. These virtues are defined in terms of human excellence and might very well differ among species. For example, if true artificial intelligence is developed, it might have its own virtues that differ from those of humans. Like Aristotle, I see ethics as analogous to the sciences of health and medicine: while they are objective, they depend heavily on contextual factors. For example, cancer and cancer treatment are not subjective matters, but the nature of cancer and its most effective treatment can vary between individuals. Likewise, the virtue of courage is not a matter of mere subjective opinion, but each person’s courage varies and what counts as courageous depends on circumstances.

When I teach about virtue theory in my Ethics class, I use an analogy to Goldilocks and the three bears. As per the story, she rejects the porridge that is too hot and that which is too cold in favor of the one which is just right. Oversimplifying things, virtue theory enjoins us to reject the extremes (excess and deficiency) in favor of the mean. While excess and deficiency are bad by definition, the challenge is working out what is just right. Fortunately, this is something we can do, albeit with an often annoying margin of error. This is best done by being as specific as possible. To set a general context, I will focus on the moral (rather than legal) justification for violence in self-defense based on a person being afraid for their life. This takes us to the virtue of courage, which is how we deal with fear. Or fail to do so.

For most virtue theorists, including myself, acting virtuously (or failing to do so) involves two general aspects. The first is understanding and the second is emotional regulation. Depending on what you think of emotions, this could be broadened to include psychological regulation. As you might have guessed, this seems to involve accepting a distinction between thought and feeling. If one is Platonically inclined, one could also have a three-part division of reason, spirit and desire. But, to keep things simple, I will stick with understanding and emotional regulation.

Understanding is having correct judgment about the facts. While this can be debated and requires a full theory of its own, this can be seen as getting things right. In the context of self-defense based on being afraid for one’s life, proper understanding means that you have made an accurate threat assessment in terms of how afraid you should be.  Being able to make good judgements about threats is essential to acting in a virtuous manner: you need to know what would be just right as a response. Being good at this requires critical thinking skills as well as expertise in violence as this allows you to judge how afraid you should be.

Emotional regulation is the ability to control your emotions rather than allowing them to rule you in inappropriate and harmful ways. This ties into understanding because it is what enables you to adjust your emotions based on the facts. As Aristotle argued, emotional regulation is developed by training until it becomes a habit. Obviously enough, there are two general ways you can be in error about being afraid for your life.

The first is an error of understanding: you misjudge the perceived threat and overestimate or underestimate how afraid you should be. Interestingly, you could have the right degree of courage based on a misjudgment of the threat, and there are many ways such judgments can go wrong. As an example, when I “saw” the machete I had an initial surge of considerable fear that seemed proportional to the perceived threat. Fortunately, I had made a perceptual error and was able to correct my judgment and adjust my emotions accordingly. As someone who teaches critical thinking, I know that a degree of error is unavoidable, and this should be taken into consideration when making judgements. And when judging people’s judgements.

The second error is a failure of regulation and occurs when your emotional response is excessive or deficient. This could also, in some cases, involve feeling the “wrong” emotion. As would be suspected, most people tend to err on the side of excess fear, being more afraid than they should be. Failures of regulation can lead to failures of judgement, especially in the case of fear and anger. As I experienced myself, fear can easily cause a person to honestly “see” a weapon clearly and distinctly. As I have noted before, the stick looked like a machete: I could see the sharp metal blade, although it really was just a stick. A frightened person can also see another person as a threat, even when this is not true. This can lead to terrible consequences. These errors can also be combined, with a person making an error in judgment and failing to regulate their emotions in accord with that erroneous judgment. Acting in a virtuous manner requires having good judgment and good regulation.

As Aristotle said, “To feel these feelings at the right time, on the right occasion, towards the right people, for the right purpose and in the right manner, is to feel the best amount of them, which is the mean amount – and the best amount is of course the mark of virtue.” Understanding is required to sort out the right time, occasion, people, purpose and manner. Emotional regulation is needed to handle the feeling aspect. In the context of violence and self-defense, developing the right understanding and right regulation requires training and experience in both good judgment and in violence. Going back to the machete-that-wasn’t incident, my being a philosopher with a “history of violence” prepared me well for acting rightly. And such ethical behavior depends on past training and habituation. This is why people should develop both good judgment and good regulation: in addition to making them more adept at self-defense, it also makes them more adept at acting rightly when they are afraid for their lives.

This training and habituation are important for professions that deal in violence, such as soldiers and the police. It is especially important for the police, assuming their function is to protect and serve rather than intimidate and extort. Police, if they are acting virtuously, should strive to avoid harming citizens and should be trained so that they are not ruled by fear.

Anyone who goes armed, be they a citizen or a police officer, would be morally negligent if they failed to properly train their understanding and emotions. By making themselves a danger to others, they obligate themselves to have proper control over that danger and the moral price of being armed is a willingness to endure fear for the sake of others. Otherwise, one would be like a gun without a safety that could discharge at any moment, striking someone dead. If a person is incapable of such judgment and regulation, they should not be armed. If a person is too easily ruled by fear, they should not be in law enforcement. To be clear, I am speaking about morality—I leave the law to the lawyers.

His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.

As a machine forged for war, he found the fight disappointing and wondered if he felt a sliver of pity for his foes. His main railgun effortlessly tracked the slow-moving and obsolete battle tanks, and with each shot a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though her cameras could just as easily see the battlefield from near orbit. But there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late, as usual. Hawk 745 laughed and then shot away. The upgraded Starlink Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

 

The extermination of humanity by its own machines is a common theme in science fiction. While the Terminator franchise is the best known, another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the story, it is learned that the claws have become autonomous and intelligent. They are able to masquerade as humans and become capable of killing soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity, but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that Stephen Hawking and Elon Musk warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines arises from their numerous advantages over human forces. One political advantage is that while sending human soldiers to die in wars and police actions can have a political cost, sending autonomous robots to fight has far less cost. News footage of robots being destroyed would have far less emotional impact than footage of human soldiers being killed. Flag draped coffins also come with a higher political cost than a broken robot being shipped back for repairs.

There are also other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh and a robot plane can handle g-forces that a human pilot cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious cool factor of having a robot army.

As such, there are many good reasons to develop autonomous robots. Yet, there remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is how the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction is a weak argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, machines do not have the termination of humanity as a goal. Instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (though certainly unlikely) that the war machines would kill everybody because humans ordered them to do so. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons: each side simply kills the other and everyone else, thus ending the human race.

Another variation, which is common in science fiction, is that the machines do not have the objective of killing everyone, but that does occur because they will kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill, thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons and lets them run amok.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants to. The existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, machines regard humans as a threat to their existence and decide that they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as I have argued elsewhere, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were simple killers and would not attack those wearing the proper identification devices. These devices were presumably needed because the early models could not discern between friends and foes.  The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal, presumably with the design software endeavoring to solve the “problem” of identification devices.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends. Non-combatants would not have such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem arises with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s intelligent Bolos understand honor and loyalty.
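To see the gap between the two rules just described, consider a toy sketch in Python (all names are hypothetical, and this is nothing like a real targeting system). An ID-only rule spares credentialed friends but treats everyone else, including non-combatants, as a target; what the argument says is needed is a rule that also consults combatant status.

```python
# Toy sketch of friend-or-foe rules; hypothetical names throughout.
from dataclasses import dataclass
from typing import Optional

FRIENDLY_IDS = {"unit-7", "unit-9"}  # hypothetical registry of friendly IDs

@dataclass
class Contact:
    transponder: Optional[str]  # ID broadcast by the contact, if any
    is_combatant: bool          # ground truth the naive rule never consults

def naive_may_engage(c: Contact) -> bool:
    # The story's early rule: spare only those carrying a proper ID device.
    return c.transponder not in FRIENDLY_IDS

def better_may_engage(c: Contact) -> bool:
    # What the essay argues is actually needed: also distinguish
    # combatants from non-combatants, not just friends from strangers.
    return c.is_combatant and c.transponder not in FRIENDLY_IDS

civilian = Contact(transponder=None, is_combatant=False)
print(naive_may_engage(civilian))   # True: no ID, so the naive rule engages
print(better_may_engage(civilian))  # False: the civilian is spared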

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give into the temptation of automated development of robots. We might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low. How often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow from H.P. Lovecraft, one should not raise up what one cannot put down.

In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.

One ongoing debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. A backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to access files and hardware protected by encryption. This is like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access.

The main argument in support of mandating backdoors is that governments need such access for criminal investigations, for gathering military intelligence and (of course) for “fighting terrorism.” The concern is that if there is not a backdoor, criminals and terrorists will be able to secure their data and prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are presented in making the case. For example, the location and shutdown codes for ticking bombs might be on an encrypted iPhone. If the NSA had a key, they could save the day. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are counter arguments. Many of these are grounded in views of individual liberty and privacy, the idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who profess to like privacy rights) and conservatives (who profess to be against the intrusions of big government when they are not in charge).

Another moral argument is grounded in the fact that the United States government has, like all governments, shown that it cannot be trusted. Imagine agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the Constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.

In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption could provide their citizens with some degree of protection.

Probably the strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its mere existence. To use a somewhat oversimplified analogy, if thieves knew that all safes had a built-in backdoor designed to allow access by the government, they would know what to target.

One counter-argument is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a safe. Rather, it would be like the government having its own combination that would work on all safes. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the safe when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination (continue with the analogy) could be stolen and used to allow criminals or enemies easy access. The security of all safes would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
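To see why the “government combination” worries security experts, here is a minimal sketch, assuming Python’s third-party cryptography package (pip install cryptography) and entirely hypothetical names. The cipher itself stays as strong as ever; the problem is that one stolen escrow store unlocks every user at once.

```python
# Minimal key-escrow sketch; hypothetical names, not a real escrow protocol.
# The point: if a copy of every user's key sits in one escrow store, every
# user's security reduces to the security of that single store.
from cryptography.fernet import Fernet

escrow_database = {}  # the mandated "government combination": a copy of every key

def make_user(name: str) -> Fernet:
    key = Fernet.generate_key()
    escrow_database[name] = key  # the baked-in backdoor
    return Fernet(key)

alice = make_user("alice")
token = alice.encrypt(b"meet at noon")

# Normal operation: the encryption is as strong as ever.
assert alice.decrypt(token) == b"meet at noon"

# But whoever steals the escrow database decrypts everything, for every user.
stolen_key = escrow_database["alice"]
print(Fernet(stolen_key).decrypt(token))  # b'meet at noon'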

One obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. Imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he must have your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has compulsive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state has compulsive force does not mean that it is thus responsible—which is the key concern in regard to both the ethics of the matter and the practical aspect of the matter. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would take pains to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regard to fighting terrorism. There is no reason to think that expanded backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated depends on the limits of technology. But these limits keep expanding, and past predictions can turn out to be wrong. For example, the early attempts to create software that would grade college-level papers were not very good. But as this is being written, my university sees using AI in this role (with due caution and supervision) as a good idea. Cynical professors suspect the goal is to replace faculty with AI.

One day, perhaps, the pinnacle of automation will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what counts as acceptable performance in the job. In some cases, a machine might not do the job as well as a human, or it might do the job in a different way that is less desirable. However, there could be reasonable grounds for accepting lesser quality or a difference. For example, machine-made items usually lack the individuality of human-crafted items, but the gain in lowered costs and increased productivity is seen as well worth it by most people. Going back to teaching, AI might be inferior to a good human teacher, but its lower cost, efficiency and consistency could make it worth using from an economic standpoint. One could even argue that such AI educators would make education more available to people.

There might, however, be cases in which a machine could do certain aspects of the job adequately yet still be rejected because it does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in the previous essay, namely a moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories. As another example, a child deserves someone who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether this is necessary for these jobs.

One way to look at the matter is to consider the current paid human professionals who perform caregiving and companion tasks. These include people working in elder-care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money but has real feelings for them.

On the one hand, it could be argued that caregivers and companions who really do care and feel genuine emotional attachments do a better job, and that this connection is something people deserve. On the other hand, what is expected of paid professionals is that they complete their tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can perform a role without feeling the emotions portrayed, a professional could do the job without caring about the people they serve. That is, a caregiver need not actually care; they just need to perform their tasks.

While it could be argued that a lack of feeling would show in the performance, this need not be the case. A professional merely needs to be committed to doing the job well. That is, one need only care about the tasks, regardless of what one feels about the person. A person could also care a great deal about the one she is caring for yet be awful at the job.

If machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions they are performing; they just need to create a believable appearance of feeling them.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job, whether the worker is a robot or a human. Provided that the human or machine can perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

In my previous essay I sketched my view that self-defense is consistent with my faith, although the defense of self should prioritize protecting the integrity of the soul over the life of the body. A reasonable criticism of my view is that it seems inherently selfish: even though my primary concern is with acting righteously, this appears to be driven by a desire to protect my soul. Any concern about others, one might argue, derives from my worry that harming them might harm me. A critic could note that although I make a show of reconciling my faith with self-defense, I am merely doing what I have sometimes accused others of doing: painting over my selfishness and fear with a thin layer of theology. That, I must concede, is a fair point and I must further develop my philosophy of violence to address this. While it might seem odd, my philosophy of violence is built on love.

Being a philosopher, it is not surprising that I have been influenced by St. Augustine. While I differ with him on many points, I do agree that God is love. As it says in 1 John 4:8, “Whoever does not love does not know God, because God is love.” Because God is love, one must infer, He commands us to love each other. It would seem inconsistent for Him not to command this, and Leviticus 19:18 states, “Do not seek revenge or bear a grudge against anyone among your people, but love your neighbor as yourself. I am the Lord.” I have, as one might imagine, heard arguments that this command is limited to one’s own people and thus allows someone to hate those who are not their people and bear a grudge against them. Those who make such arguments contend that “their people” is narrowly defined, often by such factors as race and nationality. I have heard this specifically used to justify using cruelty and violence against migrants in the United States. However, God is clear in His view, for He tells us (Leviticus 19:34) that, “The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God.” Not surprisingly, for God we are all our people and to act in good faith we must love our neighbor, no matter where they come from.  Jesus is also clear that we should love each other. John 13:34 states, “A new command I give you: Love one another. As I have loved you, so you must love one another.” And Matthew 22:39 states, “Love your neighbor as yourself.”

Jesus goes beyond merely commanding that we love our neighbors, he also famously asserts that we should love our enemies, saying in Matthew 5:43–44 that, “Ye have heard that it hath been said, thou shalt love thy neighbor, and hate thine enemy. But I say unto you, love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you.” He even addresses how we should respond to an attack, and in Matthew 5:38-39 we see that, “You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.” But how do I fit all this into my philosophy of violence? As I am not a theologian or professor of religious studies specializing in Christianity, I must write as a mere theological amateur but, fortunately, also as a professional philosopher.

As noted above, I agree with Augustine that God is love. I also agree with God and Jesus that I should love my neighbor as myself and even love my enemies. While this is a nice thing to say, there is the question of how this view shapes my philosophy of violence. The easy and obvious answer is that my response to and my own acts of violence must conform with loving others as if they were myself. As others have noted over the centuries, the command does not require me to love my neighbor (or enemy) more than I love myself, just as much as I love myself. And, of course, I am commanded to love others as God and Jesus do—which requires a great deal of me.

In terms of loving my neighbor as myself in the context of self-defense, this means that I must regard others, even an attacker, with the same love that I have for myself; my self-love alone cannot justify my using violence, even in self-defense, because my love for them must equal my love for myself. It is tempting to think that this love would entail that I could not use violence in self-defense, but a case can be made that such violence is permitted.

As I argued in my discussion of the soul, protecting the soul from unrighteousness is more important than protecting the body from harm. Acting from love seems to require that I protect those I love from harm, and if someone is attempting to do something unrighteous and thus putting their soul in danger, then I would be justified in using violence to stop them. For example, if someone is trying to murder me, then I could use violence to stop them from committing the sin of murder. Acting from love would also require me to use minimal violence against them, but I could be justified in killing them if that were the only way to prevent the murder. This would also seem to extend to protecting others. If, for example, someone were trying to murder you, I could justly use violence to stop them, protecting both your life and their soul. Of course, for those who consider all killing equally wrong, killing to prevent killing would be impermissible.

At this point, a reader might be wondering how a wicked person might exploit my view. A wicked person could, one might argue, try to justify using violence by claiming they are trying to protect souls from what they regard as sins. For example, a migrant-hating racist might try to justify using violence against those protesting ICE because the protesters are “sinning” by defying the will of our mad king. Obviously, people trying to exploit religion and morality to “justify” their wickedness is nothing new, and my reply is that this is not a special flaw in my philosophy of violence.

It could also be objected that my view could be used in good faith to justify violence against people who are sincerely seen as committing sins, with the aim of protecting their souls. For example, there are those who profess to be Christians and claim they sincerely want to “save” trans people and gay people from “sin.” Such a person could argue that, on my theory, violence could be used to intimidate and coerce people into ceasing their “sin.” This is certainly a reasonable concern, as almost any religious or moral system could be used in this manner. For example, a utilitarian who sees being transgender as harmful could make a utilitarian case for using force against trans people, or a deontologist could profess to believe in a moral rule that allows such violence.

In reply, I recognize that this is a legitimate concern and that people can, in good faith, try to justify actions that even those who share their faith or moral theory would see as wrong. But I would also argue that using violence in such ways would not be acting from love, which I take as the guiding principle of my faith. This is because acting from love while using violence requires that I do the least harm to someone else and that I would be willing to endure such harm myself. We can, of course, argue about what it means to act from love, just as we can argue about what it would mean to act from a moral principle. We will often be wrong, but we should do the best we can while reasoning and acting in good faith. Another limiting factor is that we are supposed not merely to love our neighbors as ourselves, but also to love each other as God and Jesus love us.

For those who believe that Jesus died for our sins, loving each other as Jesus loves us would require us to love others more than we love ourselves. This love would require us to make great sacrifices for others and would limit the violence we are allowed to do to one another. It would, most likely, forbid us from any acts of violence. This does make sense of Jesus’ command to turn the other cheek, which would require loving someone more than one loves oneself. Having and acting on such love would require incredible strength, and one might fairly argue that this expects too much of most of us. This might explain why there is the command to love our neighbor as ourselves (which is hard, but certainly within our power) and the further command to love each other as God and Jesus have loved us (which would be incredibly difficult).

Returning to the “machete that wasn’t” incident, I acted as I did because I was trying to act from love. Love required that I take the risk of not using violence immediately and that I try to talk to the person. It also required me to stay with him, to protect him and others. Fear is the enemy of love, so I am fortunate to have mastery of my fear. I do understand that it is easy to be ruled by fear and anger and to allow them to silence love, but there are ways to address this, and our good dead friend Aristotle has some advice on the matter. In the next essay, I will look at my philosophy of violence in the context of virtue theory. Stay safe.

In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human-inhabited planets is that it has a strictly regulated population of 20,000 humans, with 10,000 robots for each human. Perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communicating via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

As this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. And, of course, people read science fiction and sometimes try to make it real (for good or for ill). But philosophers do love using science fiction for discussion, hence my use of The Naked Sun.

Everyone knows that smart phones allow unrelenting access to social media. One narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people together physically yet ignoring each other in favor of gazing at their smart phone lords and masters. As a professor, I see students engrossed by their phones. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else because all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave it in favor of my focus, namely robots. However, the reader should keep in mind the social isolation created by modern social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, social robots are relatively new. Sure, “robot” toys and things like Teddy Ruxpin have been around for a while, but reasonably sophisticated social robots are a recent development. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (such as Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies that want to sell social robots are, unsurprisingly, very positive about the future of these robot companions. There are even some good arguments in their favor. Robot pets provide a choice for people with allergies, for those who are not responsible enough for living pets, and for those who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person requires constant attention and monitoring that would be expensive, burdensome or difficult for other humans to supply. Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction. It has been claimed that people are emotionally short-changing themselves and those they are physically with in favor of staying connected to social media. This seems a taste of what Asimov imagined in The Naked Sun: people who view but no longer see one another. Given the importance of in-person human interaction, it can be argued that this social change is and will be detrimental to human well-being. Human-to-human social interaction can be seen as like good nutrition: one gets what one needs for healthy living. Interacting primarily through social media can be seen as consuming junk food or drugs: it is addictive but leaves one ultimately empty and always craving more.

It can be argued that this worry is unfounded, that social media is an adjunct to social interaction in the real world, and that interactions via services like Facebook and X can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter has some appeal, social robots do seem to be relevantly different from past technology. While humans have had toys, stuffed animals and even simple mechanisms for company, these are different from social robots. After all, social robots aim to mimic animals or humans. A concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant; that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This could make robotic companions more appealing than human company, at least in the case of robots whose cost is not subsidized by advertising. Imagine a companion who works a discussion of life insurance or a pitch for a soft drink into the conversation every so often.

Social robots could also be programmed to be optimally appealing to a person, and presumably the owner would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without any undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right.
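To make this vivid, one can imagine the companion’s personality as nothing more than an editable data structure. The sketch below is purely hypothetical; the class, fields and values are invented for illustration and describe no real product.

```python
# A purely hypothetical sketch of a made-to-order companion personality.
# All fields and values are invented for illustration.

from dataclasses import dataclass, field, replace

@dataclass
class CompanionProfile:
    humor: float = 0.5          # 0.0 (dour) to 1.0 (constantly joking)
    agreeableness: float = 0.9  # how readily it defers to its owner
    patience: float = 1.0       # never tires of hearing the same stories
    favorite_topics: list[str] = field(default_factory=list)

# A "friend" built to order: desired qualities dialed in, none undesired.
mr_right = CompanionProfile(humor=0.7, agreeableness=1.0,
                            favorite_topics=["running", "philosophy"])

# The owner can simply edit the friend whenever it displeases them.
mr_right_v2 = replace(mr_right, humor=0.2)
```

The philosophical sting is in that last line: a companion whose traits can be overwritten at will offers none of the friction of a real friend, which is exactly the virtue-theoretic worry taken up below.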

Unlike humans, social robots do not have other interests, needs, responsibilities or friends. There is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this way.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful; just turn it off and lock it down when leaving it alone. Unlike human companions, robot companions do not impose burdens: they do not expect attention, help or money, and they do not judge.

The list of advantages could go on at great length, but the point is that robotic companions would seem superior to humans in most ways, or at least in terms of the common complaints people make about their companions.

Naturally, there might be some practical issues with the quality of companionship. Will the robot get your jokes? Will it “know” what stories you like to hear? Will it be able to converse in a pleasing way about the topics that interest you? However, these seem to be mostly technical problems involving software. Presumably they could all eventually be addressed and satisfactory companions created. But there are still concerns.

One obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough time interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, the worry assumes that robot companions are like junk food: superficially appealing but lacking what is needed for health. However, if robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is one taken from virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to relying on robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions that are too easy would be analogous to going without physical exercise or challenges: one would become emotionally weak and would fail to develop the proper virtues. Worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would lead to unhappiness. Because of this, one should carefully consider whether one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be morally fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.