In philosophy, skepticism is the view that we lack knowledge. There are numerous varieties of skepticism, which are defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims, while a truly rabid skeptic would doubt everything—including their own existence.

While many philosophers have attempted to defeat the dragon of skepticism, all these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The reasons for this have an ancient pedigree and can be distilled down to two simple arguments.

The first goes after the possibility of justifying a belief, attacking the view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised again, to infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.

A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge. Some folks, such as Chisholm, claim it is fair to assume we have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first-place trophy without competing.

Like all sane philosophers, I tend to follow David Hume’s view in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a committee meeting, or at the dentist. However, like a useless friend, it shows up when it is not needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.

John Locke took an interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.

Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view.

When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regard to the pain, it did not matter whether I really had a back or not. Whether I was a pained brain in a vat or a pained brain in a runner on the road, it was the pain that really mattered to me.

As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida), I was soon feeling warm and sticky, and I could eventually feel my thirst and fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car and also what it is like to fall. In terms of navigating through my run, it did not matter whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.

This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that they are in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.

If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.

Those who were critical of Kim Davis, the county clerk who refused to issue marriage licenses to same-sex couples and was jailed for being in contempt of court, often appealed to a rule of law principle. The main principle seems to be that individual belief should not be used to trump the law.

Some of those who supported Davis made the point that some state and local governments have ignored federal laws covering drugs and immigration. To be more specific, it was pointed out that some states legalized (or decriminalized) marijuana even though federal law still defined it as a controlled substance. It was also pointed out that some local governments were ignoring federal immigration law and acting on their own—such as issuing identification to illegal immigrants and providing services.

Some of Davis’ supporters even noted that some who insist that Davis follow the law tolerate or even support state and local governments that ignored the federal drug and immigration laws.

One way to respond to such assertions is to claim that Davis’ defenders were using the red herring tactic. This is when an irrelevant topic is introduced to divert attention from the original issue. The tactic is to try to “win” a dispute by drawing attention away from the original argument onto another issue. If the issue is whether Davis should have followed the law, the failure of some states and local governments to enforce federal law is irrelevant. This is like a speeder who has been pulled over and argues that she should not get a ticket because another officer did not ticket someone else for speeding. What some other officer did or did not do to some other speeder is not relevant. As such, this approach would have failed to defend Davis.

In regard to the people who said Davis should follow the law yet were seemingly fine with the federal drug and immigration laws being ignored, to assert that they were wrong about Davis because of what they think about the other laws would commit the tu quoque ad hominem. This fallacy is committed when it is concluded that a person’s claim is false because it is inconsistent with something else the person has said. Since fallacies are arguments whose premises fail to logically support the conclusion, this tactic would not have logically defended Davis.

Those who wanted to defend Davis could, however, have made an appeal to consistency and fairness: if it is acceptable for the states and local governments to ignore federal laws without punishment, then it would seem acceptable for Kim Davis to have ignored these laws without being punished. Those not interested in defending Davis could also have made the point that consistency does require that if Davis should have been compelled to obey the law about same-sex marriage, then the same principle should have been applied in regards to the drug and immigration laws. As such, the states and local governments that did not enforce these laws should have been compelled to enforce them and any failure to do so should have resulted in legal action against the state officials who failed to do their jobs.

This line of reasoning is plausible but can be countered by attempting to show a relevant difference (or differences) between the laws. In practice most people do not use this approach—rather, they have the “principle” that the laws they like should be enforced and the laws they oppose should not be enforced. This is, obviously enough, not a legitimate legal or moral principle. This applies to those who like same-sex marriage (and think the law should be obeyed) and those who dislike it (and think the law should be ignored). It also applies to those who like marijuana (and think the laws should be ignored) and those who dislike it (and think the laws should be obeyed).

In terms of making the relevant difference argument, there are many possible approaches depending on which difference is taken as relevant. Those who wished to defend Davis might have argued that her resistance to the law was based on her religious views and hence her disobedience could have been justified on the grounds of religious liberty. Of course, there are those who opposed (and still oppose) immigration laws on religious grounds and even some who opposed the laws against drugs on theological grounds. As such, if the religious liberty argument applies in one case, it can also be applied to the others that involve religious belief. But the general approach seems to be that religious liberty is for discrimination.

Those who wanted Davis to follow the law but who opposed the enforcement of certain drug and immigration laws could have argued that Davis violated the constitutional rights of citizens and that this was a sufficient difference to justify a difference in enforcement. The challenge is, obviously enough, working out why this difference justified not enforcing the drug and immigration laws in question.

Another option is to argue that the violation of moral rights suffices to warrant not enforcing a law and that protecting rights warrants enforcing a law. The challenge is showing that the rights of the same-sex couples overrode Davis’ claim to a right to religious liberty, and showing that there are moral rights to use certain drugs and to immigrate even when it is illegal to do so. These things can be done but go beyond the scope of this essay.

My own view is that consistency requires the enforcement of laws. If the laws are such that they should not be enforced, then they need to be repealed. I do, however, recognize the legitimacy of civil disobedience in the face of laws that a person of informed conscience regards as unjust. But, as those who developed the theory of civil disobedience were aware, there are consequences to such disobedience.

While the problem of other minds is an epistemic matter (how does one know that another being has a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regard to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind, and it is interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (except for functionalism) were developed to provide accounts of the minds of biological creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But there is a negative side. Unfortunately for classic identity theory, it has been undermined by the arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best-known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called a “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that a being could pass these tests only if its mind were an immaterial substance. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best-known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view of a physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that mental properties can bring about changes in the physical properties of the body and vice versa. That is, the interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor, merely the assertion that the view should not be dismissed out of mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regard to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body. 

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different but are functionally the same in that they can run Windows and Windows software (and Linux, of course).

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind could be a plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” There will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and the Cartesian test (she uses true language appropriately). But Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people to achieve one’s goals. In the movie, Ava exhibits what might be called Manipulative Intelligence (M.I.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—suggesting she is a psychopath.

While the term “psychopath” gets thrown around casually, I will be more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm they do to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless and insensitive, and they often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not grasp the consequences for others or for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying. 

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it will be important to be able to determine whether a machine is a psychopath, and to do so before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and can pass themselves off as normal people.

While Ava is a fictional android, the movie does present an effective appeal to intuition by creating a plausible android psychopath. She can manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals could be positive. For example, it is easy to imagine a medical or care-giving robot that uses its MI to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MI to please its partners. However, a machine might have negative goals—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath matters is the same reason why being able to determine if a human is a psychopath matters. Roughly put, it is important to know whether someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same functions as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in artificial intelligence and doing so without creating problems as bad as or worse than what they were intended to prevent (that is, a HAL 9000 situation).

In regard to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants did not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would remain the question of what is really going on in that mind.

The movie Ex Machina is what I call “philosophy with a budget.” While philosophy professors like me present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic life. This allows us to jealously reference these films and show clips in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some spoilers.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a limited set of problems, all connected to the mind. Since the film is about AI, this is not surprising. The gist of the movie is that the tech bro Nathan has created an AI named Ava and he wants an employee, Caleb, to test her.

The movie explicitly presents the test proposed by Alan Turing. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, the test is modified: Caleb knows that Ava is a machine and will be interacting with her in person.

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be presented in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

As a test for intelligence, artificial or otherwise, this seems reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language at all. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she can reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than the ability to pass this sort of test and appears to work in, without acknowledging it, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (Homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine whether she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. It is intended to provide a way to determine whether a person is a psychopath. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

Back in 2015 Kim Davis, a county clerk in Kentucky, was the focus of national media because of her refusal to issue marriage licenses to same-sex couples. In 2025 she appeared in the national news again because of her petition to revisit the same-sex marriage ruling. The Supreme Court denied her petition. I wrote about Davis in 2015 and it seems reasonable to revisit the timeless issue of ad hominem attacks.

As should be expected, opponents of same-sex marriage focused on the claim that Davis’ religious liberty was being violated. As should also be expected, her critics sought and found evidence of what seemed to be her hypocrisy: Davis has been divorced three times and is on her fourth marriage. Some bloggers, eager to attack her, claimed that she was guilty of adultery. Such attacks can be relevant to certain issues but irrelevant to others. It is worth sorting between the relevant and the irrelevant.

If the issue at hand is whether Davis was consistent in her professed religious values, then her actions would be relevant. After all, if a person claims to have a set of values and acts in ways that violate those values, then this provides grounds for accusations of hypocrisy or even a lack of belief in the professed values. That said, there can be many reasons why a person acts in violation of her professed values. One obvious reason is moral weakness—most people, me included, fail to live up to their principles due to our flaws and frailties. As none of us is without sin, we should not be hasty in judging the failings of others.  However, it is reasonable to consider a person’s actions when assessing whether she is acting in a manner consistent with her professed values.

If Davis was, in fact, operating on the principle that marriage licenses should not be issued to people who have violated the rules of God (presumably as presented in the bible), then she would seem to have been required to accept that she should not have been issued a marriage license (after all, there is a wealth of scriptural condemnation of adultery and divorce). If she accepted that she should have been issued her license despite her violations of religious rules, then consistency would seem to require that the same treatment be afforded to everyone—including same-sex couples. After all, adultery makes God’s top ten list while homosexuality seems to be only mentioned in a single line (and one that also marks shellfish as an abomination). So, if adulterers can get licenses, it would be difficult to justify denying same-sex couples marriage licenses on the grounds of a Christian faith.

If the issue at hand is whether Davis was right in her professed view and her past refusal to grant licenses to same-sex couples, then references to her divorce and alleged adultery are logically irrelevant. If a person claimed that Davis was wrong in her view or acted wrongly in denying licenses because she has been divorced or has (allegedly) committed adultery, then this would be a personal attack ad hominem. A personal attack is committed when a person substitutes abusive remarks for evidence when attacking another person’s claim or claims. This line of “reasoning” is fallacious because the attack is directed at the person making the claim and not the claim itself. The truth value of a claim is independent of the person making the claim. After all, no matter how repugnant an individual might be, they can still make true claims.

If a critic of Davis asserts that her claim about same-sex marriage was in error because of her own alleged hypocrisy, then the critic would commit an ad hominem tu quoque.  This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else a person has said or 2) what a person says is inconsistent with her actions. The fact that a person makes inconsistent claims does not make any particular claim she makes false (although of any pair of inconsistent claims only one can be true—but both can be false). Also, the fact that a person’s claims are not consistent with her actions might indicate that the person is a hypocrite, but this does not prove her claims are false. As such, Davis’ behavior had no bearing on the truth of her claims or the rightness of her decision to deny marriage licenses to same-sex couples.

Dan Savage and others made the claim that Davis was motivated by her desire to profit from the fame she garnered from her actions. Savage asserted, “But no one is stating the obvious: this isn’t about Kim Davis standing up for her supposed principles—proof of that in a moment—it’s about Kim Davis cashing in.” Given, as Savage notes, the monetary windfall received by the pizza parlor owners who refused to cater a same-sex wedding, this has some plausibility.

If the issue at hand is Davis’ sincerity and the morality of her motivations, then whether or not she is motivated by hopes of profit or sincere belief does matter. If she is opposing same-sex marriage based on her informed conscience or, at the least, on a sincerely held principle, then that is a different matter than being motivated by a desire for fame and profit. A person motivated by principle to take a moral stand is at least attempting to act rightly—whether her principle is actually good or not. Claiming to be acting from principle while being motivated by fame and fortune would be to engage in deceit.

However, if the issue were whether Davis was right about her claim regarding same-sex marriage, then her motivations would not be relevant. To think otherwise would be to fall victim to yet another ad hominem, the circumstantial ad hominem. This is a fallacy in which one attempts to attack a claim by asserting that the person making the claim is making it simply out of self-interest. In some cases, this fallacy involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for evidence. This ad hominem is a fallacy because a person’s interests and circumstances have no bearing on the truth or falsity of the claim being made. While a person’s interests will provide them with motives to support certain claims, the claims stand or fall on their own. It is also the case that a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is made clear by the following example: “Bill claims that 1+1=2. But he is a Christian, so his claim is false.” The same would hold if someone claimed that Dan Savage was wrong simply because of his beliefs.

Thus, Davis’ behavior, beliefs, and motivations were relevant to certain issues. However, they were not relevant to the truth (or falsity) of her claims regarding same-sex marriage.

Years ago, Kim Davis, a county clerk in Kentucky, refused to issue marriage licenses to same-sex couples on the grounds that doing so violated her religious beliefs. When questioned about this, she replied that she acted “under God’s authority.” It was argued that it would violate her religious freedom to be compelled to follow the law and do her job. This past situation raises numerous important issues about obedience and liberty. I am reconsidering this issue because of moral questions about obeying commands from the Trump administration that an official might disagree with. As a philosopher, I endeavor to follow my principles consistently rather than having one principle for when I like what someone is doing and another when I dislike the same sort of action, such as disobeying the state.

When taking a position in situations like this, people generally do not consider the matter in terms of general principles about such things as religious liberty and obedience to the state. Rather, the focus tends to be on whether one agrees or disagrees with the very specific action. In the Davis case, it is not surprising that people who oppose same-sex marriage agreed with her decision to disobey the law and claimed that she had a moral right to do so. It is also not surprising that those who favor same-sex marriage tended to think that she should have obeyed the law and that it was morally wrong for her to disobey the law of the land.

In the case of officials who have resisted the immigration policies designed by Stephen Miller, those who agree with these policies will tend to think that the state should be obeyed and these will surely include people who supported Kim Davis’s view. Those who oppose these policies, which will include people who thought Davis should have obeyed the law, will be inclined to support disobedience to these laws and policies.

The problem with this sort of approach is that it is unprincipled, unless being in favor of disobedience one likes and opposing disobedience one dislikes counts as a reasonable moral position. Moral consistency requires the application of a general principle that applies to all relevantly similar cases, rather than simply going with how one feels about a particular issue.

In regard to the situation involving Davis, many of her defenders tried to present this as a religious liberty issue: Davis was wronged by the law because it compelled her to act in violation of her religious beliefs. Her right to this liberty presumably outweighs the rights of the same-sex couples who expected her to follow the law and do her job.

In the case of officials resisting immigration laws and policies, their reasoning would be that their resistance rests on moral grounds and that morality trumps the law and policy in this case. The matter is also complicated by the fact that the immigration “enforcement” sometimes violates the law.

Having been influenced by Henry David Thoreau’s arguments for civil disobedience and by Thomas Aquinas, I agree that an individual should follow her informed conscience over the dictates of the state. The individual must, of course, expect to face the consequences of this civil disobedience and these consequences might include fines, being fired or even spending time in prison. Like Thoreau, I believe that a government official who finds the law too onerous should endeavor to change it and, failing that, should resign rather than obey a law they regard as unjust. As such, my general principle is that a person has the moral right to refuse to follow a law that their informed conscience regards as immoral.

In the case of Davis, if she acted in accord with her informed conscience, then she had the moral right to refuse to follow the same-sex marriage law. However, having failed to change the law, she needed to either agree to follow this law or resign.

In the case of immigration, the officials should take the same approach. However, when the law is violated by the federal government, then the state and local officials have the right to resist. They would, after all, be the ones following the law.

That said, I understand a person’s informed conscience can be in error—that is, what she thinks is right is not right. It might even be morally wrong. Because of this, I also accept the view that while a person should follow their informed conscience, the actions that follow from this might be morally wrong. If they are wrong, the person has obviously acted wrongly. But, to the degree that they followed their informed conscience, they can be justly excused in regard to their motivations. But the actions (and perhaps the consequences) would remain wrong.

Since I favor liberty in regard to marriage between consenting adults (and have written numerous essays and a book on this subject), I believe that Davis’ view about same-sex marriage was in error. Though I think she is wrong, if she acted in accord with her informed conscience and due consideration of the moral issue, then I respect her moral courage in sticking to her ethics.

In the case of how immigration has been handled under Trump, I see it as consistently immoral and probably often illegal (I’ll leave that to the lawyers). As such, I would generally see resisting the immoral and illegal actions as morally correct.

While subject to the usual range of inconsistencies, I do endeavor to apply my moral principles consistently to all relevantly similar cases. Whenever there is a conflict between an individual’s professed moral views and the law she is supposed to enforce, I ask two questions. The first is “is the person acting in accord with her informed conscience?” The second is “is the person right about the ethics of the matter?” This is rather different from approaching the matter by asking “do I agree with the person on this specific issue?”

As noted above, some of the defenders of Davis cast this as a religious liberty issue. In this case, the implied general principle would be that when an official’s religious views conflict with a law, then the person has the right to refuse to follow the law. After all, if religious liberty is invoked as a justification here, then it should work equally well in all relevantly similar cases. As such, if Davis should be allowed to ignore the law because of her religious belief, then others must be allowed the same liberty.

As might be suspected, folks who oppose same-sex marriage on religious grounds would probably agree with this principle—at least in cases that match their opinions. However, it seems likely that many people would not be in favor of consistently applying this principle. Let us consider immigration.

The bible is reasonably clear about how foreigners should be treated. Leviticus, which is usually cited to condemn same-sex marriage, commands that “The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God.” Exodus says, “Do not mistreat or oppress a foreigner, for you were foreigners in Egypt,” while Deuteronomy adds that “And you are to love those who are foreigners, for you yourselves were foreigners in Egypt.”

Given this biblical support for loving and treating foreigners well, ICE agents and immigration officials have religious support for refusing to enforce immigration laws violating their conception of love and good treatment. For example, a border patrol agent could, on religious grounds, refuse to prevent people from crossing the border. As another example, a judge could refuse to send people back to another country on the grounds of what the bible says about treating the foreigner as a native born. I suspect that if officials started invoking religious freedom to break immigration laws, there would be little or no support for their religious liberty from the folks who support religious liberty when it comes to discrimination.

As another example, consider what the bible says about usury. Exodus says, “If you lend money to any of my people with you who is poor, you shall not be like a moneylender to him, and you shall not exact interest from him.”  Ezekiel even classified charging interest as an abomination: “Lends at interest, and takes profit; shall he then live? He shall not live. He has done all these abominations; he shall surely die; his blood shall be upon himself.” If religious liberty allows an official to break or ignore laws, then judges and law enforcement personnel who accept these parts of the bible would be allowed to, for example, refuse to arrest or sentence people for failing to pay interest on loans.

This can be generalized to all relevantly similar situations involving law-breaking or law-ignoring by officials who appeal to religious liberty. As might be imagined, accepting a principle that religious liberty grants an official an exemption to the law would warrant breaking or ignoring many laws. Given this consequence, accepting the general principle of allowing religious liberty to trump the law would be unwise. However, it is wise to think beyond one’s feelings about one specific case to consider the implications of accepting a general principle.

As the Future of Life Institute’s open letter shows, people are concerned about the development of autonomous weapons. This concern is reasonable, if only because any weapon can be misused to advance evil goals. However, a strong case can be made in favor of autonomous weapons.

As the open letter indicated, a stock argument for autonomous weapons is that their deployment could result in decreased human deaths. If, for example, an autonomous ship is destroyed in battle, then no humans will die on that ship. It is worth noting that the ship’s AI might eventually be a person, thus there could be one death. In contrast, the destruction of a crewed warship could result in hundreds of deaths. On utilitarian grounds, the use of autonomous weapons would seem morally fine, at least if their deployment reduced the number of deaths and injuries.

The open letter expresses, rightly, concerns that warlords and dictators will use autonomous weapons. But this might be an improvement over the current situation. These warlords and dictators often conscript their troops and some, infamously, enslave children to serve as their soldiers. While it would be better for a warlord or dictator to have no army, it seems morally preferable for them to use autonomous weapons rather than conscripts and children.

It can be replied that the warlords and dictators would just use autonomous weapons in addition to their human forces, thus there would be no saving of lives. This is worth considering. But, if the warlords and dictators would just use humans anyway, the autonomous weapons would not seem to make much of a difference, except in terms of giving them more firepower, something they could also accomplish by using the money spent on autonomous weapons to better train and equip their human troops.

At this point, it is only possible to estimate (guess) the impact of autonomous weapons on the number of human casualties and injuries. However, it seems somewhat more likely they would reduce human casualties, assuming that there are no other major changes in warfare.

A second appealing argument in favor of autonomous weapons is that smart weapons are smart. While an autonomous weapon could be designed to be imprecise, the general trend in smart weapons has been towards ever increasing precision. Consider, for example, aircraft bombs and missiles. In the First World War, these bombs were primitive and inaccurate (they were sometimes thrown from planes by hand). WWII saw some improvements in bomb sights and unguided rockets were used. In following wars, bomb and missile technology improved, leading to the smart bombs and missiles of today that have impressive precision. So, instead of squadrons of bombers dropping tons of dumb bombs on cities, a small number of aircraft can engage in relatively precise strikes against specific targets. While innocents still perish in these attacks, the precision of the weapons has made it possible to greatly reduce the number of needless deaths. Autonomous weapons could be even more precise, thus reducing casualties even more. This seems to be desirable.

In addition to precision, autonomous weapons could (and should) have better target identification capacities than humans. If recognition software continues to improve, it is easy to imagine automated weapons that can rapidly distinguish between friends, foes, and civilians. This would reduce deaths from friendly fire and unintentional killings of civilians. Naturally, target identification would not be perfect, but autonomous weapons could be better than humans since they do not suffer from fatigue, emotional factors, and other things that interfere with human judgement. Autonomous weapons would presumably also not get angry or panicked, thus making it far more likely they would maintain target discipline (only engaging what they should engage).

To make what should be an obvious argument obvious, if autonomous vehicles and similar technology are supposed to make the world safer, then it would seem to follow that autonomous weapons could do something similar for warfare. But this does lead to a reasonable concern: driverless cars seem to be the future of transportation in the sense that they will always be in the future. If getting an autonomous car to operate safely on the streets is far beyond current technology, then getting an autonomous weapon system to operate “safely” in the chaos of battle seems all but impossible.

It can be objected that autonomous weapons could be designed to lack precision and to kill without discrimination. For example, a dictator might have massacrebots to deploy in cases of civil unrest. These robots would slaughter everyone in the area. Human forces, one might contend, would often show at least some discrimination or mercy.

The easy and obvious reply to this is that the problem is not in the autonomy of the weapons but the way they are being used. The dictator could achieve the same results (mass death) by deploying a fleet of drones loaded with demolition explosives, but this would presumably not be a reason to ban drones or explosives. There is also the fact that dictators, warlords and terrorists can easily find people to carry out their orders, no matter how awful they might be. That said, it could still be argued that autonomous weapons would result in more murders than would the use of human killers.

A third argument in favor of autonomous weapons rests on the claim advanced in the open letter that autonomous weapons will become cheap to produce, analogous to Kalashnikov rifles. On the downside, as the authors argue, this could result in the proliferation of these weapons. On the plus side, if these highly effective weapons are so cheap to produce, this could enable existing militaries to phase out their incredibly expensive human operated weapons in favor of cheap autonomous weapons. By replacing humans, these weapons could also create savings in terms of the cost of recruitment, training, food, medical treatment, and retirement. This would allow countries to switch that money to more positive areas, such as education, infrastructure, social programs, health care and research. So, if the autonomous weapons are as cheap and effective as the letter claims, then it would seem to be a great idea to use them to replace existing weapons.

But there is the reasonable concern that decisions about military spending in some countries are not based on a rational assessment of costs and benefits. Such spending can be aimed at diverting resources from social programs and into the coffers of corporations. In such cases the availability of cheap, effective weapons would not meaningfully change defense spending.

A fourth argument in favor of autonomous weapons is that they could be deployed, at low political cost, on peacekeeping operations. Currently, the UN must send human troops to dangerous areas. These troops are often outnumbered and ill-equipped relative to the challenges they face. However, if autonomous weapons were as cheap and effective as the letter claims, then they would be ideal for these missions. Assuming they are cheap, the UN could deploy a much larger autonomous weapon force for the same cost as deploying a human force. There would also be far less political cost as people who might balk at sending their fellow citizens to keep peace in some war zone will probably be fine with sending robots.

An extension of this argument is that autonomous weapons could allow the nations of the world to engage terrorist groups, such as was the case with ISIS, without having to pay the high political cost of sending in human forces. The cheap and effective weapons predicted by the letter would seem ideal for this task.

Considering the above arguments, it seems that autonomous weapons should be developed and deployed. However, the concerns of the letter do need to be addressed. As with existing weapons, there should be rules governing the use of autonomous weapons (although much of their use would fall under existing rules and laws of war) and efforts should be made to keep them from proliferating to warlords, terrorists and dictators. As with most weapons, the problem lies with the misuse of the weapons and not with the weapons themselves.

Back on July 28, 2015 the Future of Life Institute released an open letter expressing opposition to the development of autonomous weapons. As of this writing, you can still sign it. Although the name of the organization sounds like one I would use as a cover for an evil, world-ending cult in my Call of Cthulhu campaign, I assume this group is sincere in its professed values. While I do respect their position, I believe they are mistaken. I will assess and reply to the arguments in the letter.

As the letter notes, an autonomous weapon can select and engage targets without human intervention. A science fiction example of such a weapon is the claw of Philip K. Dick’s classic “Second Variety.” A real-world example, albeit a stupid one, is the land mine: it is placed and engages automatically.

The first main argument presented in the letter is a proliferation argument. If a major power pushes AI development, the other powers will also do so, creating an arms race. This will lead to the development of cheap, easy to mass-produce AI weapons. These weapons, it is claimed, will end up being acquired by terrorists, warlords, and dictators. These people will use these weapons for assassinations, destabilization, oppression and ethnic cleansing. That is, for what these people already use existing weapons to do. This raises concern about whether autonomous weapons would have a significant impact.

The authors of the letter have a reasonable point: as science fiction stories have long pointed out, killer robots tend to obey orders and they can (in fiction) be extremely effective. However, history has shown that terrorists, warlords, and dictators rarely have trouble finding humans who are willing to commit evil. Humans are also quite good at doing evil and although killer robots are awesomely competent in fiction, it remains to be seen if they will be better than humans in the real world, especially if they are cheap, mass-produced weapons.

That said, it is reasonable to be concerned that a small group or individual could buy a cheap robot army when they would otherwise not be able to put together a human force. These “Walmart” warlords could be a real threat in the future, although small groups and individuals can already do significant damage with existing technology, such as homemade bombs. They can also easily create weaponized versions of non-combat technology, such as civilian drones and autonomous cars. Even if robotic weapons are not manufactured, enterprising terrorists and warlords can build their own. Think, for example, of a self-driving car equipped with machine guns or loaded with explosives.

A reasonable reply is that the warlords, terrorists and dictators would have a harder time without cheap, off the shelf robotic weapons. This, it could be argued, would make the proposed ban on autonomous weapons worthwhile on utilitarian grounds: it would result in fewer deaths and less oppression.

The authors then claim that just as chemists and biologists are generally not in favor of creating chemical or biological weapons, most researchers in AI do not want to design AI weapons. They do argue that the creation of AI weapons could create a backlash against AI in general, which has the potential to do considerable good (although there are those who are convinced that even non-weapon AIs will wipe out humanity).

The authors do have a reasonable point here. Members of the public can panic over technology in ways that can impede the public good. One example is vaccines and the anti-vaccination movement. Another example is the panic over GMOs that is having some negative impact on the development of improved crops. But, as these two examples show, backlash against technology is not limited to weapons, so the AI backlash could arise from any AI technology and for no rational reason. A movement might arise, for example, against autonomous cars. Interestingly, military use of technology seems to rarely create backlash from the public. People do not refuse to fly in planes because the military uses them to kill people. Most people also love GPS, which was developed for military use.

The authors note that chemists, biologists and physicists have supported bans on weapons in their fields. This might be aimed at attempting to establish an analogy between AI researchers and other researchers, perhaps to try to show these researchers that it is a common practice to be in favor of bans against weapons in one’s area of study. Or, as some have suggested, the letter might be making an analogy between autonomous weapons and weapons of mass destruction (biological, chemical and nuclear weapons).

One clear problem with the analogy is that biological, chemical and nuclear weapons tend to be the opposite of robotic smart weapons: they “target” everyone without any discrimination. Nerve gas, for example, injures or kills everyone exposed to it. A nuclear bomb also kills or wounds everyone in the area of effect. While AI weapons could carry nuclear, biological or chemical payloads, and they could be set to kill everyone, this lack of discrimination and WMD nature is not inherent to autonomous weapons. In contrast, most proposed autonomous weapons seem intended to be precise and discriminating in their killing. After all, if the goal is mass destruction, there is already the well-established arsenal of biological, chemical and nuclear weapons. Terrorists, warlords and dictators often already have no problem using WMDs, and AI weapons would not seem to significantly increase their capabilities.

In my next essay on this subject, I will argue in favor of AI weapons.

While there are many virtues (and vices) relevant to my philosophy of violence, the virtue of courage is central. In this context, the virtue of courage is the ability to regulate the emotion of fear so that you feel the right amount of fear at the right time, on the right occasion, towards the right cause, for the right purpose and in the right manner. Sorting out all these rights is challenging.

While cowardice tends to be condemned more than foolhardiness, both are vices. An excess of courage can lead a person to misjudge a situation and choose violence from overconfidence. But I am inclined to think that a deficiency of courage is the more dangerous vice, since fear distorts perception and judgment in ways that can lead to a person acting wrongly. As I found out in the “machete that wasn’t” incident, fear can enhance the misperception of objects, making a stick appear to be a machete or a phone look like a gun. This can cause a person to use what they think is justified violence as they protect themselves from, for example, a machete. Fear can also cause a person to misread other people and situations, such as perceiving innocent movements as dangerous. This can also cause people to use what they think is justified violence as they respond to someone they see as a clear threat. These are two of the many reasons why courage is an important virtue and why training for courage is a worthwhile endeavor.

Thanks to the problem of other minds I, unlike Bill Clinton, cannot feel your pain or your fear. I can only discuss my own internal experiences and infer, by analogy, that you have similar experiences. Based on my experiences, it seems that there are at least two modes of courage. The first is what I experienced in the “machete that wasn’t” incident and the second is the type I experience in the context of heights, such as flying.

When I (wrongly) perceived a person running at me with a machete, I felt a spike of fear. After that triggered a useful burst of adrenalin, the fear vanished and was replaced by cold, calm clarity. I was able to act with courage for the simple reason that I was no longer afraid. While getting a closer view of the “machete” allowed me to see it was just a big stick, the absence of fear no doubt also helped and allowed me to assess the situation more accurately. It also allowed me to act rationally rather than being driven by fear. This enabled me to speak to the person rather than simply attacking or endangering myself by fleeing in fear. I did not feel courageous and can best describe it as feeling utterly normal, as if I was still just running along peacefully or reading a book. Fortunately, I also did not feel fearless in the sense of being foolhardy and ready to engage in savage battle without care or concern. It, to go with my usual porridge metaphor, was just right—just what was needed to take the right action.

While I am a philosopher, I am interested in neuroscience, and I wonder what a brain scan at that moment would have revealed. While I was not aware of any fear (and hence, by definition, not feeling fear), perhaps those fear neurons were firing away as I ignored them. Which leads to the second mode of courage.

I am terrified of heights. While this might be understandable given that I had a ladder fail and suffered a quadriceps tendon tear, I was afraid of heights long before then. Getting on a ladder, being on a mountain, looking out the window of a tall building, and flying all cause a welling of fear in my soul. I even feel it in video games. Even when my rationality tells me I am not in danger, I can feel the threat. Yes, I have tried various means of habituating myself to heights, but these have had no effect on my feeling fear. Last May, when I was flying home for my father’s funeral, I told the person (a retired NFL player) sitting beside me about my fear and he sensibly asked me “Are you going to be a problem?” I assured him that I would not, and when he saw me showing no signs of concern during the takeoff, he relaxed his vigilance. That was when, of course, I struck. I am joking—otherwise you would have seen a YouTube video about an NFL player tackling a philosophy professor on a flight to Atlanta. While I appeared calm and acted normally, I was terrified the entire time—unlike my courage during a potential fight, my courage in the face of heights manifests very differently. The fear is there the entire time. Sometimes it feels as if the fear is like a strong dog pulling on a leash, but I can keep it from running wild. Other times, it is like a wind I can feel, but one that has no power over me. The difference might be because they are different fears, because of a difference in my training, or perhaps because I am far more afraid of heights than I am of fighting. Fortunately, the result is the same: I can act rationally as opposed to being driven about by fear. But I do find feeling fear more tiring than the absence of the feeling, although I can endure the fear of heights for hours (and I have yet to find my limit).
I suspect that one difference is that my training increases my confidence in dealing with potential violence, while there is no training I can do to counter the harms of falling from a great height. But I do admit, my fear of heights is excessive, even given the fact that a fall can easily injure or kill me. I am still trying to address this, although without success. But what about when people are trained to be afraid and then sent out on the streets with guns?

In the United States, there is a longstanding trend of training police to be warriors. While there are obvious concerns about seeing the police as warriors rather than as those who protect and serve, warrior-style police training teaches officers to feel, and act upon, fear by presenting the world as an extremely dangerous place where any interaction can kill them. Encouraging officers to view citizens as potential threats is likely to make them more afraid, especially if this is not countered by proper training in the virtues.

As discussed above and in the earlier essays, fear shapes perceptions in ways that increase the chances of unjustified and needless violence. An officer habituated to be afraid is more likely to see a phone as a gun and to interpret an innocent movement as a prelude to an attack. While officers should have a realistic view of dangers, their training should focus on habituating them to be masters of their fear rather than ruled by it. Unless, of course, the goal is to send frightened warriors out into the streets with the intention that they will be more likely to engage in violence. What makes matters even worse is the deluge of fear mongering and racism flowing forth from some media outlets and some politicians. We, the police included, are told that minorities and migrants are a terrible threat, likely to engage in violence because they are members of gangs, physically dangerous and morally wicked. Anyone who is inundated with this is likely to have their fear increased, making it less likely that they will act with courage—even if they wish to do so. While this is but one factor among many, it does help explain why some ICE agents and police officers use violence needlessly: they have been trained to be fearful warriors and deluged with a spew of terror directed at the people they are interacting with. If the goal is for people to be needlessly and unjustly injured and killed, this all makes “sense.” But if we want protectors who serve the public good, we must change the training and the culture of fear propagated by the wicked.