As a gamer and horror fan I have an undying fondness for zombies. Years ago, I was intrigued by tales of philosophical zombies—I had momentary hope my fellow philosophers were doing something interesting. But, as is often the case, professional philosophers sucked the life out of the already lifeless. Unlike proper flesh-devouring creations of necromancy or mad science, philosophical zombies are dull creatures.

Philosophical zombies look and act like normal humans but lack consciousness. They are no more inclined to seek the brains of humans than standard humans. Rather than causing the horror proper to zombies, philosophical zombies bring about a feeling of vague disappointment. It is the same sort of disappointment that readers in my age range might recall from childhood trick-or-treating, when someone gave out pennies or an apple rather than candy.

Rather than serving as minions for necromancers or metaphors for vacuous and excessive American consumerism, philosophical zombies serve as victims in philosophical discussions about the mind and consciousness.

The dullness of current philosophical zombies does raise an important question—is it possible to have a philosophical discussion about proper zombies? There is also a second and equally important question—is it possible to have an interesting philosophical discussion about proper zombies? As I will show, the answers are “yes” and “obviously not.”

Since there is, at least in this world, no Bureau of Zombie Standards, there are many varieties of zombies. In my games and fiction, I generally define zombies in terms of beings that are biologically dead yet animated (or re-animated, to be more accurate). Traditionally, zombies are “mindless” or possess a very basic awareness that suffices to move about and seek victims.

In works of fiction, many beings called “zombies” do not have these qualities. The zombies in 28 Days Later are “mindless” but still alive. As such, they are not really zombies—just infected people. The zombies in Return of the Living Dead are dead and re-animated but retain human intelligence. Zombie lords and juju zombies in D&D and Pathfinder are likewise dead and re-animated yet intelligent. In the real world, there are also what some call zombies: organisms taken over and controlled by another organism, such as an ant controlled by a nasty fungus. To keep the discussion focused and narrow, I will stick with what I consider proper zombies: biologically dead, yet animated. While I generally take zombies to be unintelligent, I do not consider that a definitive trait. For folks concerned about how zombies differ from other animated dead, such as vampires and ghouls, the main difference is that stock zombies lack the special powers of more luxurious undead—they have only the basic capabilities of the living creature (mostly moving around, grabbing and biting). In contrast, vampires are usually portrayed as super-powered undead.

One key issue about zombies is whether they are possible. There are various ways to “cheat” to create zombies—for example, a mechanized skeleton could be embedded in dead flesh to move about. This would make a rather impressive horror weapon. Another option is to have a corpse driven about by another organism—wearing the body as a “meat suit.” However, these would not be proper zombies since they are not self-propelling—just being moved about by something else.

In terms of “scientific” zombies, the usual approaches include strange chemicals, viruses, fungi or other such means of animation. Since it is well-established that electrical shocks can cause dead organisms to move, getting a proper zombie would seem to be an engineering challenge—although making one work properly could require “cheating” (for example, having computerized control nodes in the body that coordinate the manipulation of the dead flesh).

A traditional means of animating corpses is via supernatural means. In games like Pathfinder, D&D and Call of Cthulhu, zombies are animated by spells (the classic being animate dead) or by an evil spirit occupying the flesh. In the D&D tradition, zombies (and all undead) are powered by negative energy (while living creatures are powered by positive energy). It is this energy that enables the dead flesh to move about (and violate the usual laws of biology).

While the idea of negative energy is mostly a matter of fantasy games, the notion of unintelligent animating forces is not unprecedented in the history of science and philosophy. For example, Aristotle seems to have held that the soul (or perhaps a “part” of it) serves to animate the body. Past thinkers also considered forces that would animate non-living bodies. As such, it is easy enough to imagine a similar sort of force that could animate a dead body (rather than returning it to life).

The magic “explanation” is the easiest approach but is not really an explanation. It seems reasonable to think that magic zombies are not possible in the actual world—though all the zombie stories and movies show it is easy to imagine possible worlds inhabited by them.

The idea of a truly dead body moving around in the real world the way fictional zombies do seems implausible. After all, it seems essential to biological creatures that they be alive (to some degree) in order to move about under their own power. What would be needed is some sort of force or energy that could move truly dead tissue. While this is conceivable (in the sense that it is easy to imagine), it does not seem possible—at least in this world. Dualists might, of course, be tempted to consider that the immaterial mind could drive the dead shell—after all, this would only be marginally more mysterious than the ghost driving around a living machine. Physicalists, of course, would almost certainly balk at proper zombies—at least until the zombie apocalypse. Then they would be running.

While the problem of other minds is an epistemic matter (how does one know that another being has a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might occur is in regard to machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind, and it is interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (except for functionalism) were developed to provide accounts of the minds of biological creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But there is a negative side. Unfortunately for classic identity theory, it has been undermined by arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain and Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best-known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called a “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, the two could be regarded as equally implausible, in which case there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that the mind must be an immaterial substance and that this is the only means by which a being could pass these tests. It seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best-known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct but are not different ontological substances.

There are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one-way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. Epiphenomenalism also seems to have the defect of making mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view: a purely physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting irreducible mental properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor; it is merely the observation that the view should not be dismissed out of mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regard to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body. 

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different but are functionally the same in that they can run Windows and Windows software (and Linux, of course).

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind could be a plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” There will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and the Cartesian test (she uses true language appropriately). But Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side of understanding what motivates people, namely the ability to manipulate people to achieve one’s goals. In the movie, Ava exhibits what might be called Manipulative Intelligence (M.I.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—suggesting she is a psychopath.

While the term “psychopath” gets thrown around casually, I will be more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying. 

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it will be important to be able to determine whether a machine is a psychopath, and to do so before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and can pass themselves off as normal people.

While Ava is a fictional android, the movie does present an effective appeal to intuition by creating a plausible android psychopath. She can manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals could be positive. For example, it is easy to imagine a medical or care-giving robot that uses its MI to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MI to please its partners. However, a machine might have negative goals—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath matters is the same reason why being able to determine if a human is a psychopath matters. Roughly put, it is important to know whether someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same functions as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in artificial intelligence and doing so without creating problems as bad or worse than what they were intended to prevent (that is, a HAL 9000 situation).

In regard to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants did not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings, a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would remain the question of what is really going on in that mind.

The movie Ex Machina is what I call “philosophy with a budget.” While philosophy professors like me present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic life. This allows us to jealously reference these films and show clips in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some spoilers.

While The Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a limited set of problems, all connected to the mind. Since the film is about AI, this is not surprising. The gist of the movie is that the tech bro Nathan has created an AI named Ava and he wants an employee, Caleb, to test her.

The movie explicitly presents the test proposed by Alan Turing. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, the test is modified: Caleb knows that Ava is a machine and will be interacting with her in person.

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be presented in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


As a test for intelligence, artificial or otherwise, this seems reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language at all. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she can reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. This is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

Although I like science fiction, it took me a long time to get around to seeing Interstellar—although time is a subjective sort of thing. One reason I decided to see it is because some claimed the movie should be shown in science classes. Because of this, I expected to see a science fiction movie. Since I write science fiction, horror and fantasy stuff, it should not be surprising that I get a bit obsessive about genre classifications. Since I am a professor, it should also not be surprising that I have an interest in teaching methods. As such, I will be considering Interstellar in regard to both genre classifications and its education value in the context of science. There will be spoilers—so if you have not seen it, you might wish to hold off reading this essay.

While there have been many attempts to distinguish between science and fantasy, Roger Zelazny presents one of the most brilliant and concise accounts in a dialogue between Yama and Tak in Lord of Light. Tak asks Yama about whether a creature, a Rakshasa, he has seen is a demon or not. Yama responds by saying, “If by ‘demon’ you mean a malefic, supernatural creature, possessed of great powers, life span and the ability to temporarily assume any shape — then the answer is no.  This is the generally accepted definition, but it is untrue in one respect. … It is not a supernatural creature.”

Tak, not surprisingly, does not see the importance of this single untruth in the definition. Yama replies with “Ah, but it makes a great deal of difference, you see.  It is the difference between the unknown and the unknowable, between science and fantasy — it is a matter of essence.  The four points of the compass be logic, knowledge, wisdom, and the unknown.  Some do bow in that final direction.  Others advance upon it.  To bow before the one is to lose sight of the three.  I may submit to the unknown, but never to the unknowable.”

In Lord of Light, the Rakshasa play the role of demons, but they are the original inhabitants of a world conquered by human colonists. As such, they are natural creatures and fall under the domain of science. While I do not completely agree with Zelazny’s distinction, I find it appealing and reasonable enough to use as the foundation for the following discussion of the movie.

Interstellar initially stays within the realm of science fiction, confining itself to scientific speculation about hypersleep, wormholes and black holes. While the script does take some liberties with science, this is fine for the obvious reason that this is science fiction and not a science lecture. Interstellar also has the interesting bonus of having contributed to real science about the appearance of black holes. That aspect would provide some justification for showing it in a science class.

Another part of the movie that would be suitable for a science class is the sequence in which Murph thinks that her room might be haunted by a ghost. Cooper, her father, urges her to apply the scientific method to the phenomenon. Of course, it might be considered bad parenting to urge one's child to study what might be a dangerous phenomenon in her room. Cooper also instantly dismisses the ghost hypothesis—which can be seen as anything from very scientific (since there has been no evidence of ghosts) to not very scientific (since this might be evidence of ghosts).

The story does include the point that the local school is denying that the moon landings really occurred and the official textbooks support this view. Murph is punished at school for arguing that the moon landings did occur and is rewarded by Cooper. This does make a point about science denial and could thus be of use in the classroom. At least until the state decrees that the moon landings never happened.

Ironically, the story presents its own conspiracies and casts two of the main scientists (Brand and Mann) as liars. Brand lies about his failed equation for “good” reasons—to keep people working on a project that has a chance and to keep morale up. Mann lies about the habitability of his world because, despite being built up in the story as the best of the scientists, he cannot take the strain of being alone. As such, the movie sends a mixed message about conspiracies and lying scientists. While learning that some people are liars has value, this does not add to the movie’s value as a science class film. Now, to get back to science.

The science core of the movie, however, focuses on holes: the wormhole and the black hole. As noted above, the movie does stick within the realm of speculative science about the wormhole and the black hole—at least until near the end of the movie.

It turns out that all that is needed to fix Brand’s equation is data from inside a black hole. Conveniently, one is present. Also conveniently, Cooper and the cool robot TARS end up piloting their ships into the black hole as part of the plan to save Brand. It is at this point that the movie moves from science to fantasy.

Cooper and TARS manage to survive being dragged into the black hole, which might be scientifically fine. However, they are then rescued by the mysterious “they” (whoever created the wormhole and sent messages to NASA).

Cooper is transported into a tesseract or something. The way it works in the movie is that Cooper is floating “in” what seems to be a massive structure. In “reality” it is a nifty blend of time and space—he can see and interact with all the temporal slices that occurred in Murph’s room. Crudely put, it allows him to move in time as if it were space, while it is also sort of still space. While this is rather weird, it is still within the realm of speculative science fiction.

Cooper is somehow able to interact with the room using weird movie plot rules—he can knock books off the shelves in a Morse code pattern, he can precisely change local gravity to provide the location of the NASA base in binary, and finally he can manipulate the hand of the watch he gave his daughter to convey the data needed to complete the equation. Weirdly, he cannot just manipulate a pen or pencil to write things out. But movies got to movie. While a bit absurd, this is still science fiction.
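The two encodings Cooper uses are simple enough to sketch. Here is a minimal, hypothetical illustration of Morse and fixed-width binary encoding; the letter table and numbers are my own examples, not taken from the film:

```python
# Hypothetical sketch: Morse code (as with the books spelling "STAY")
# and fixed-width binary (as with the coordinates traced in dust).
# Only the letters needed for the example are included in the table.

MORSE = {
    "S": "...", "T": "-", "A": ".-", "Y": "-.--",
}

def to_morse(message):
    """Translate a message into dot/dash groups, one group per letter."""
    return " ".join(MORSE[ch] for ch in message.upper())

def to_binary(number, width=8):
    """Render a number as a zero-padded binary string of a fixed width."""
    return format(number, f"0{width}b")

print(to_morse("STAY"))   # ... - .- -.--
print(to_binary(42))      # 00101010
```

The point of the fixed width in the binary case is that the receiver can split a long stream of dust lines into numbers without any separator, which is roughly what decoding the coordinates would require.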

The main problem lies with the way Cooper solves the problem of locating Murph at the right time. At this point, I would have bought the idea that he figured out the time scale of the room and could rapidly check it, but the story has Cooper navigate through the vast time room using love as a “force” that can transcend time. While it is possible that Cooper is wrong about what he is really doing, the movie certainly presents it as if this love force is what serves as his temporal positioning system.

While love is a great thing, there are no even remotely scientific theories that provide a foundation for love having the qualities needed to enable such temporal navigation. There is, of course, scientific research into love and other emotions. The best of current love science indicates that love is a “mechanical” phenomenon (in the philosophical sense) and there is nothing to even suggest that it provides what amounts to supernatural abilities.

It would, of course, be fine to have Cooper keep on trying because he loves his children—love does that. But making love into some sort of trans-dimensional force is clearly supernatural fantasy rather than science and certainly not suitable for a science lesson (well, other than to show what is not science).

One last concern I have with using the movie in a science class is the use of super beings. While the audience learns little of the beings, the movie indicates they can manipulate time and space. They create the wormhole, they pull Cooper and TARS from a black hole, they send Cooper back in time and enable him to communicate in stupid ways, and so on. The movie also tells the audience the beings are probably future humans (or what humanity becomes) and that they can “see” all of time. While the movie does not mention this, this is how St. Augustine saw God: He is outside of time. They also seem benign, though they demonstrate that they care about some individuals but not others: while they save Cooper and TARS, they let many people die.

Given these qualities, it is easy to see these beings (or being) as playing the role of God or even being gods: super powerful, sometimes benign beings, that have incredible power over time and space. Yet they are fine with letting lots of people die needlessly while miraculously saving a person or two. For reasons.

Given the wormhole, it is easy to compare this movie to Star Trek: Deep Space Nine. This show had a wormhole populated by powerful beings that existed outside of our normal dimensions. To the people of Bajor, these beings were divine and supernatural Prophets. To Star Fleet, they were the wormhole aliens. While Star Trek is supposed to be science fiction, some episodes involving the prophets did blur the lines into fantasy, perhaps intentionally.

Getting back to Interstellar, it could be argued that the mysterious “they” are like the Rakshasa of Lord of Light: they (or whatever) have many of the attributes of God or gods but are not supernatural beings. Being fiction, this could be set by fiat, but this does raise the boundary question. To be specific, does saying that something with what appear to be the usual supernatural powers is not supernatural make it science fiction rather than fantasy? Answering this requires working out a proper theory of the boundary, which goes beyond the scope of this essay. However, I will note that having the day saved by the intervention of mysterious and almost divinely powerful beings does not seem to make the movie suitable for a science class. Rather, it makes it seem to be more of a fantasy story masquerading as science fiction.

My overall view is that showing parts of Interstellar, specifically the science parts, could be fine for a science class. However, the movie is more fantasy than science fiction.  

 

One stock criticism of philosophers is that we are useless: we address useless subjects or address useful subjects in useless ways. For example, one might criticize a philosopher for philosophically discussing matters of what might be. To illustrate, a philosopher might discuss the ethics of modifying animals to possess human levels of intelligence. As another illustration, a philosopher might present an essay on the problem of personal identity as it relates to cybernetic replacement of the human body. In general terms, these speculative flights can be dismissed as doubly useless: not only do they have the standard uselessness of philosophy, but they also have the uselessness of talking about what is not and might never be. Since I have, at length and elsewhere, addressed the general charge of uselessness against philosophy, I will focus on this specific criticism.

One version of this criticism focuses on the practical: since the shape of what might be cannot be known, philosophical discussions about such things involve double speculation: the first about what might be and the second the usual philosophical speculation. While the exact mathematics of the speculation (is it additive or exponential?) is uncertain, it can be argued that such speculation about speculation has little value. And this assumes that philosophy has value and speculation about the future has value (both of which can be doubted).

This sort of criticism is often used as the foundation for a second sort of criticism, one that assumes philosophy has value. The criticism is that philosophical speculation about what might be uses up resources that could be used to apply philosophy to existing problems. Naturally, someone who regards philosophy as useless would regard philosophical discussion about what might be as also being a waste of time. Responding to this view would require a general defense of philosophy, which goes beyond the scope of this short essay. Now, to return to the matter at hand.

As an example, a discussion of the ethics of using autonomous, intelligent weapon systems in war could be criticized on the grounds that the discussion should focus on the ethical problems of current warfare. After all, there is a multitude of unsolved moral problems about existing warfare and there hardly seems any need to add more unsolved problems.

This does have considerable appeal. If a person has not completed the work in the course she is taking now, it does not make sense for her to spend her time trying to complete the work that might be assigned four semesters from now. To use another analogy, if a person has a hole in her roof, it would not be reasonable for her to spend time speculating about what sort of force-field roof technology she might have in the future. This is, of course, the classic “don’t you have something better to do?” problem.

As might be suspected, this criticism rests on the principle that resources should be spent effectively, and less effective uses of resources are subject to criticism. As the analogies given above show, using resources effectively is reasonable and ineffective use can be justly criticized. However, there is an obvious concern with this principle: to be consistent in its application it would need to be applied across the board so that a person is applying all her resources with proper utility. For example, a person who prepares a fancy meal when she could be working on addressing the problems presented by poverty is wasting her time. She could just prepare a quick meal sufficient to provide the nutrition she needs. As another example, a person who is reading a book for enjoyment should be out addressing the threat posed by terrorist groups. As a third example, someone who is developing yet another likely-to-fail social media company should be spending her time addressing prison reform. And so on. In fact, for almost anything a person might be doing, there will be something better she could be doing.

As others have argued, this sort of maximization would be counterproductive: a person would exhaust herself and her resources, thus (ironically) doing more harm than good. As such, the “don’t you have something better to do?” criticism should be used with due care. That said, it can be fair criticism if a person really does have something better to do and what she is doing instead is detrimental enough to warrant correction.

In the case of philosophical discussions about what might be, it can almost always be argued that while a person could be doing something better (such as addressing current problems), such speculation is usually harmless. That is, it is unlikely that the person would have solved the problem of war, poverty or crime if only she had not been writing about ethics and cyborgs. Of course, this just defends such discussion in the same way one might defend any other harmless amusement, such as playing a game or watching a sunset. It would be preferable to have a better defense of such philosophical discussions of the shape of things (that might be) to come.

A reasonable defense of such discussions can be based on the plausible notion that it is better to address a problem before it occurs than after it arrives. To use the classic analogy, it is much easier to stop a rolling snowball than the avalanche it could cause.

In the case of speculative matters that have ethical aspects, it seems that it would be useful to already have moral discussions in place. This would provide the practical advantage of already having a framework and context in which to discuss the matter when (or if) it becomes a reality. One excellent illustration of this is the driverless car that is always going to be a reality next year. It is a good idea to work out the ethics of how the car should be programmed when it must “decide” what to hit and what to avoid when an accident threatens. Another illustration is developing the moral guidelines for ever more sophisticated automated weapon systems.  Since these are being developed at a rapid pace, what were once theoretical problems will soon be actual moral problems. As a final example, consider the moral concerns governing modifying and augmenting humans using technology and genetic modification. It is a good idea to have some moral guidance going into this brave new world rather than scrambling with the ethics after the fact.

Philosophers also like to discuss what might be in contexts other than ethics. Not surprisingly, the realm of what might be is rich ground for discussions of metaphysics and epistemology. While these fields are often considered the most useless aspects of philosophy, they have rather practical implications that matter, even (or especially) in regard to speculation about what might be.

To illustrate this, consider the research being conducted in repairing, augmenting and preserving the human mind (or brain, if one prefers). One classic problem in metaphysics is the problem of personal identity: what is it to be a person, what is it to be distinct from all other things, and what is it to be the same person across time. While this might seem to be a purely theoretical concern, it quickly becomes a practical concern when one is discussing this technology.

For example, imagine a company that offers a special sort of life insurance: they claim they can back-up a person to a storage system and, upon the death of the original body, restore the back-up to a cloned (or robotic) body. While the question of whether that restored backup would be you or not is clearly a metaphysical question of personal identity, it is also a very practical question. After all, paying to ensure that you survive your bodily death is very different from paying so that someone who thinks they are you can go to your house and make out with your spouse after you are dead.

There are, of course, numerous other examples that can be used to illustrate the value of such speculation about what might be. In fact, I have already written many of these in previous essays. In light of the above discussion, it seems reasonable to accept that philosophical discussions about what might be need not be a waste of time. In fact, such discussions can be useful in a very practical sense.

 

Way back in 2015 the internet exploded over Rachel Dolezal, the former leader of Spokane’s NAACP chapter. Ms. Dolezal had claimed to be African-American, Native American and white. She also claimed that her father is black. Reporters at KXLY-TV, however, looked up her birth certificate and determined that her legal parents are both white. Her parents asserted that she is white.

While the specifics of her case were certainly interesting to many, my concern is with more general issues about race and identity. While this situation was the best-known case of a white person trying to pass as black, passing as another “race” has long been a common practice in the United States, although this has usually been people trying to pass as white. Since being accepted as white enables a person to avoid many disadvantages, it is clear why people would attempt to pass as white. Since being accepted as black generally does not confer advantages in the United States, it is not surprising that Dolezal drew so much attention. These matters raise some interesting questions and issues about race.

Borrowing language from metaphysics, one approach to race could be called race realism. This is not being realistic about race in the common use of the term “realistic.” Rather, it is accepting that race is a real feature of reality. That is, the metaphysics of the world includes categories of race. As such, a person could be objectively black or white (or a mix). Naturally, even if there are real categories of race, people could be wrong about them.

One alternative is race nominalism. This is the idea that racial categories are social constructs and do not line up with an underlying metaphysical and physical reality. This is because there is no underlying metaphysical and physical reality that objectively grounds racial categories. In this case, a person might engage in self-identification in regard to race and this might or might not be accepted by others. A person might also have others place them into a race category, which they might or might not accept.

Throughout history, some people have struggled to find an objective basis for categories of race. Before genetics, people had to use appearance and ancestry. The ancestry was, obviously, needed because people did not always look like the race category that some people wanted them to be in. One example of this is the “one drop” rule once popular in some parts of the United States: one drop of black blood made a person black, regardless of their appearance.

The discovery of genes provided some people with a new foundation for race categories as they believed that there would be a genetic basis for their racism. The idea was that just as a human can be distinguished from a cat by genes, humans could be divided into races by their genetic make-up. While humans show genetic variations that are often linked to the geographical migration and origin of their many ancestors, race genes were not found. That is, humans (not surprisingly) are all humans with some minor genetic variations. The variations are not sufficient to objectively ground race categories.

In general, the people who quested for objective foundations for race categories were (or are) racists. These searches typically involved trying to find evidence of the alleged superiority of one’s race and the inferiority of other races. That said, a person could look for foundations for race without being a racist. They could be engaged in a scientific or philosophical inquiry rather than seeking to justify social practices and behaviors.

Given the failure to find a real foundation for race categories, it makes sense to embrace race nominalism. On this view, the categories of race exist only in the mind; they designate how people think about the world rather than how reality is carved up. Even if it is accepted that race is a social construct, there is still the matter of the rules of construction: how the categories are created and how people are placed in them.

One approach, which is similar to that sometimes taken for gender, is that people can self-identify. That is, a person can declare their race and this is sufficient to be in that category. If race categories are essentially made up, this does have a certain appeal. If race is a fiction, then anyone can be the author of her own fiction.

While there are some who do accept this view, the outrage over Ms. Dolezal showed that most people reject the idea of self-identification, at least when a white person endeavors to self-identify as black. Interestingly, some of those who condemned her did defend the historical passing as white by some black people. The defense is appealing since blacks endeavoring to pass as white were doing so to escape oppression, and this can be justified as a form of self-defense. In the case of Ms. Dolezal, the presumption seemed to be that the self-identification was both insincere and aimed at personal gain. Regardless of her true motivation, insincere self-identification aimed at personal gain seems to be wrong on the grounds that it is a malign deception. Some might, of course, regard all attempts at passing to gain an advantage as being immoral.

Another approach is that of the social consensus. The idea is that a person’s membership in a race category depends on the acceptance of others. This could be a matter of majority acceptance (one is, for example, black if most people accept one as black) or acceptance by a specific group or social authority. The obvious problem is working out what group or authority has the right to decide membership in race categories. On the one hand, this very notion seems linked to racism: one probably thinks of white supremacists and Nazis setting race categories. On the other hand, groups also seem to want to serve as the authority for their race category. Consistency might indicate that this would also be racist.

The group or authority that decides membership in race categories might make use of a race credential system to provide a basis for their decisions. That is, they might make use of appearance and ancestry. So, Ms. Dolezal would not be black because she looks white and has white parents. The concern with this sort of approach is that it was also used by racists, such as the KKK and Nazis, to divide people by race. A more philosophical concern is the basis for using appearance and ancestry as the foundation for race categories: what justifies their use?

This discussion does show an obvious concern with policing race categories as it seems like doing so uses the tools of racism and would thus seem to be at least a bit racist. However, arguments could be advanced as to why the policing of race categories is morally acceptable and not racist.

 

After losing the battle over same-sex marriage, some on the right selected trans rights as their new battleground. A key front in this battle is that of sports, with the arguments centering around professed concerns about fairness. There is also a lot of implied metaphysics going on behind the scenes, so this essay will examine gender nominalism and competition. This will, however, require some metaphysical groundwork.

A classic philosophical problem is the problem of universals. Put roughly, the problem is determining in virtue of what (if anything) a particular a is of the type F. To use a concrete example, the question would be “in virtue of what is Morris a cat?” Philosophers often split into two camps when answering this question. The nominalists, shockingly enough, embrace nominalism. This is the view that what makes a particular a an F is that we name it an F. For example, what makes Morris a cat is that we call (or name) him a cat.

The other camp, the realists, take the view that there is a metaphysical reality underlying a being of the type F. Put another way, it is not just a matter of naming or calling something an F that makes it an F. In terms of what makes a of the type F, different realist philosophers give different answers. Plato famously claimed that it is the Form of F that makes individual F things F. For example, it is the Form of Beauty that makes all the beautiful things beautiful. And, presumably, the Form of Ugliness that makes the ugly things ugly. Others, such as myself, accept tropes (not to be confused with the tropes of film and literature) that serve a similar function.

While realists believe in the reality of some categories, they usually think some categories are not grounded in features of objective reality. As such, most realists agree that nominalists are right about some categories. To use an easy example, being a Democrat (or Republican) is not grounded in metaphysics, but is a social construct. A political party is made up by people and membership is a matter of social convention rather than metaphysical reality. There is presumably no Form of Democrat or Republican.

When it comes to sorting out sex and gender, things are complicated and involve at least four factors. One is anatomy, which might (or might not) correspond to the second, which is genetic makeup (XX, XY, XYY, etc.). The third factor is the person’s own claimed gender identity, which might (or might not) correspond to the fourth, which is the gender identity assigned by other people.

While anatomy and physiology are adjustable (via chemicals or surgery), they are objective features of reality. While a person can choose to alter their anatomy, merely changing how one designates one’s sex does not change the physical features. While a complete genetic conversion (XX to XY or vice versa) is (probably) not yet possible, it is just a matter of time before that can be done. However, even if genetics could be changed, a person’s genetic makeup is still an objective feature of reality and a person cannot change their genes merely by claiming a change in designation. But if genes define a person’s sex, then a genetic change would objectively change their sex.

Gender is, perhaps, another matter. Like most people, I often use the terms “sex” and “gender” interchangeably when speaking informally. Obviously, if gender is taken as the same as sex, then gender would seem to be an objective feature of reality. But if gender and sex are taken as the same, then we would need a new term to take the place of “gender.”

However, gender has been largely or even entirely split from anatomy or genetics, at least by experts in the relevant fields. One version of this view can be called “gender nominalism.” On this view, gender is not an objective feature of reality, like anatomy, but a matter of naming, like being a Republican or Democrat. While some politicians have decreed that there are two genders, the fact that they think they need to do this just proves that they understand gender is a social construct. After all, politicians do not feel the need to decree that water is hydrogen and oxygen or that triangles have three sides.

Some thinkers have cast gender as being constructed by society, while others contend that individuals have lesser or greater power to construct their own gender identities. People can place whatever gender label they wish upon themselves, but there is still the question of the role of others in that gender identity. The question is, then, to what degree can individuals construct their own gender identities? There is also the moral question of whether others should (morally) accept such gender self-identification. These matters are part of the broader challenge of identity in terms of who defines one’s identity (and what aspects of it) and to what degree people are morally obligated to accept these assignments (or declarations of identity).

My own view is to go with the obvious: people are free to self-declare whatever gender they wish, just as they are free to make any other claim of identity that is a social construct (which is a polite term for “made up”). So, a person could declare that he is straight, a Republican, a Rotarian, a fundamentalist Christian, and a man. Another person could declare that she is a lesbian, Republican, Jewish woman who belongs to the Elks. And so on. But, of course, there is the matter of getting others to recognize that identity. For example, if a person identifies as a Republican, yet believes in climate change, argues for abortion rights, endorses same-sex marriage, supports trans rights, favors tax increases, supports education spending, endorses the minimum wage, and is pro-environment, then other Republicans could rightly question the person’s Republican identity and claim that that person is a RINO (Republican in Name Only). As another example, a biological male could declare an identity as a woman, yet still dress like a man, act like a man, date women, and exhibit no behavior that is associated with being a woman. In this case, other women might accuse her of being a WINO (Woman in Name Only).

In cases in which self-identification has no meaningful consequences for other people, it makes sense for people to freely self-identify. In such cases, claiming to be F makes the person F, and what other people believe should have no impact on that person being F. That said, people might still dispute a person’s claim. For example, if someone self-identifies as a Trekkie, yet knows little about Star Trek, others might point out that this self-identification is in error. However, since this has no meaningful consequences, the person has every right to insist on being a Trekkie, though doing so might suggest that he is about as smart as a tribble.

In cases in which self-identification does have meaningful consequences for others, then there would seem to be moral grounds (based on the principle of harm) to allow restrictions on such self-identification. For example, if a relatively fast male runner wanted to self-identify as a woman simply by claiming this identity so “she” could qualify for the Olympics, then it would be reasonable to prevent that from happening. After all, “she” would bump a qualified woman off the team, which would be wrong. Because of the potential for such harm, it would be absurd to accept that everyone is obligated to automatically accept the self-identification of others.

The flip side of this is that others should not have an automatic right to deny the self-identification of others. As a general rule, the principle of harm would apply here as well: others have the right to impose an identity in cases in which there is actual harm, and a person would have the right to refuse an identity forced on her by others when that imposition would inflict wrongful harm. The practical challenge is, clearly enough, working out the ethics of specific cases.

Most people know energy cannot be destroyed. Interestingly, there is also a rule in quantum mechanics that forbids the destruction of information. This principle, called unitarity, is often illustrated by the example of burning a book: though the book is burned, the information remains, though it would be hard to “read” a burned book. This principle ran into some trouble with black holes, which might be able to destroy information. My interest here is not with this dispute, but with the question of whether the indestructibility of information has any implications for immortality.

On the face of it, the indestructibility of information seems similar to the conservation of energy. Long ago, when I was an undergraduate, I first heard the argument that because of the conservation of energy, personal immortality must be real (or at least possible). The reasoning was that a person is energy, energy cannot be destroyed, so a person will exist forever. While this has some appeal, the problem is obvious: while energy is conserved, it need not be preserved in the same form. So even if a person is composed of energy, it does not follow that the energy remains the same person or even a person at all.

David Hume argued that an indestructible or immortal substance (or energy) does not entail the immortality of a person. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures. By analogy, an immaterial substance could successively make up the minds of living creatures. The substance would not be created or destroyed; it would merely change form. However, the person would cease to be.

Prior to Hume, John Locke also noted a similar problem: even if, for example, you had the same soul (or energy) as Nestor, you would not be the same person as Nestor any more than you would be the same person as Nestor if, in an amazing coincidence, your body now contained all the atoms that once composed Nestor at a specific moment.

Hume and Locke seem to be right: the indestructibility of the stuff that makes up a person (be it body or soul) does not entail the immortality of the person. If a person is eaten by a bear, their matter and energy will continue to exist, but the person does not survive being eaten. If there is a soul, the mere continuance of the soul would also not seem to suffice for the person to continue to exist as the same person (although this can be argued). What is needed is the persistence of what makes up the person. This is usually taken to be something other than just stuff, be that stuff matter, energy, or ectoplasm. So, the conservation of energy does not entail personal immortality, but the conservation of information might (or might not).

Put a bit crudely, Locke took this something other to be memory: personal identity extends backwards as far as the memory extends. Since people clearly forget things, Locke did accept the possibility of memory loss. Being consistent in this matter, he accepted that the permanent loss of memory would result in a corresponding failure of identity. Crudely put, if a person truly did not and could never remember doing something, then she was not the person who did it.

While there are many problems with the memory account of personal identity, it suggests a path to immortality through the conservation of information. One approach would be to argue that since information is conserved, the person is conserved even after the death and dissolution of the body. Just like the burned book whose information still exists, the person’s information would still exist.

One obvious reply to this is that a person is an active being and not just a collection of information. To use a rather rough analogy, a person could be seen as being like a computer program: to be a person is for the program to be running. Or, to use a more artistic analogy, a person is like a play: while the script would persist after the final curtain, the play itself is over. As such, while the person’s information would be conserved, the person would cease to be. This sort of “information immortality” is similar to Spinoza’s view. While he denied personal immortality, he claimed that “the human mind cannot be absolutely destroyed with the body, but something of it remains which is eternal.” Spinoza, of course, seemed to believe that this should comfort people. Perhaps some comfort should be taken in the fact that one’s information will be conserved (barring an unfortunate encounter with a black hole).

However, people would probably be more comforted by a reason to believe in an afterlife. Fortunately, the conservation of information does provide at least a shot at an afterlife. If information is conserved and all there is to a person can be conserved as information, then a person could presumably be reconstructed after his death. For example, imagine a person, Laz, who died in an accident and was buried. The remains could, in theory, be dug up and the information about the body could be recovered. The body could, with suitably advanced technology, be reconstructed. The reconstructed brain could, in theory, have all the memories and such recovered and restored as well. This would be a technological resurrection in the flesh, and the person would seem to live again. Assuming that every piece of information was preserved, recovered, and restored in the flesh, it might be the same person, as if a moment had passed rather than, say, a thousand years. While this is sci-fi, the idea seems sound enough.

One potential problem is an old one for philosophers: if a person could be reconstructed from such information, she could also be duplicated from the same information. To use the obvious analogy, this would be like 3D printing from a data file, except what would be printed would be a person. Or, to use another analogy, it would be like reconstructing an old computer and reloading all the software. There would certainly not be any reason to wait until the person died, unless there was some sort of copyright or patent held by the person on herself that expired a certain time after her death. But since personal identity is supposed to be what distinguishes a person from all other things, it is not something that can be duplicated. There are, however, those who disagree with this.
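The software analogy above can be made concrete. The following is a minimal sketch (assuming Python, with a hypothetical `Person` class standing in for a person’s complete information): two beings rebuilt from the same stored data are indistinguishable in content, yet remain numerically distinct — which is exactly the duplication problem.

```python
import pickle

# A hypothetical stand-in for a person's complete information.
class Person:
    def __init__(self, name, memories):
        self.name = name
        self.memories = memories

    def __eq__(self, other):
        # Qualitative identity: same information, same content.
        return (self.name, self.memories) == (other.name, other.memories)

# Serialize Laz's information -- the conserved "data file".
laz = Person("Laz", ["childhood", "the accident"])
data = pickle.dumps(laz)

# Reconstruct twice from the very same information.
laz_a = pickle.loads(data)
laz_b = pickle.loads(data)

print(laz_a == laz_b)   # True: qualitatively identical
print(laz_a is laz_b)   # False: two numerically distinct beings
```

The `==` check captures sameness of information (Locke’s memories included), while `is` captures numerical identity — and only the latter is what personal identity is supposed to secure.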

In closing, I leave you with this: some day in the far future, you might find that you (or someone like you) have just been reprinted. In 3D, of course.

During the Modern era, philosophers such as Descartes and Locke developed the notions of material substance and immaterial substance. Material substance, or matter, was primarily defined as being extended and spatially located. Descartes, and other thinkers, also took the view that material substance could not think. Immaterial substance was taken to lack extension and to not possess a spatial location. Most importantly, immaterial substance was regarded as having thought as its defining attribute.  While these philosophers are long dead, the influence of their concepts lives on in philosophy and science.

In philosophy, people still draw the classic distinction between dualists and materialists. A dualist holds that a living person consists of a material body and an immaterial mind. The materialist denies the existence of the immaterial mind and accepts only matter. There are also idealists who contend that all that exists is mental. Materialism is popular both in contemporary philosophy and science. Dualism is still popular with the general population in that many people believe in a non-material soul that is distinct from the body.

Because of the history of dualism, free will is often linked to the immaterial mind. As such, it is no surprise that people who reject the immaterial mind engage in the following reasoning: an immaterial mind is necessary for free will. There is no immaterial mind. So, there is no free will.

Put positively, materialists tend to regard their materialism as entailing a lack of free will. Thomas Hobbes, a materialist from the Modern era, accepted determinism as part of his materialism. Taking the materialist path, the argument against free will is that if the mind is material, then there is no free will. The mind is material, so there is no free will.

Interestingly enough, those who accepted the immaterial mind tended to believe that only an immaterial substance could think—so they inferred the existence of such a mind on the grounds that they thought. Materialists most often accept the mind but cast it in physical terms. That is, people do think and feel, they just do not do so via the mysterious quivering of immaterial ectoplasm. Some materialists go so far as to reject the mind—perhaps ending up in behaviorism or eliminative materialism.

Julien Offray de La Mettrie was one rather forward-looking materialist.  In 1747 he published his work Man a Machine. In this work he claims that philosophers should be like engineers who analyze the mind. Unlike many of the thinkers of his time, he seemed to understand the implications of mechanism, namely that it seemed to entail determinism and reductionism. A few centuries later, this sort of view is rather popular in the sciences and philosophy: since materialism is true and humans are biological mechanisms, there is no free will, and the mind can be reduced to (explained entirely in terms of) its physical operations (or functions).

One interesting mistake that seems to drive this view is the often-uncritical assumption that materialism entails the impossibility of free will. As noted above, this rests on the notion that free will requires an immaterial mind. This is, perhaps, because such a mind is said to be exempt from the laws that run the material universe.

One part of the mistake is a failure to realize that being incorporeal is not a sufficient condition for free will. One of Hume’s many interesting insights was that if immaterial substance exists, then it would be like material substance. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures.  By analogy, an immaterial substance could successively make up the minds of living creatures; the substance would not be created or destroyed, it would merely change form. If his reasoning holds, it would seem that if material substance is not free, then immaterial substance would also not be free. Leibniz, who believed that reality was entirely mental (composed of monads), accepted a form of determinism. This determinism, though it has some problems, seems entirely consistent with his immaterialism (the view that everything is mental). This should hardly be surprising, since being immaterial does not entail that something has free will: the two are rather distinct attributes.

Another part of the mistake is the uncritical assumption that materialism entails a lack of freedom. Naturally, if matter is defined as being deterministic and lacking in freedom, then materialism would (by begging the question) entail a lack of freedom. Likewise, if matter is defined (as many thinkers did) as being incapable of thought, then it would follow (by begging the question) that no material being could think. Just as it should not be assumed that matter cannot think, it should also not be assumed that a material being must lack free will. Looked at another way, it should not be assumed that being incorporeal is a necessary condition for free will.

What, obviously enough, seems to have driven the error is the conflation of the incorporeal with freedom and the material with determinism (or lack of freedom). Behind this is, also obviously enough, the assumption that the incorporeal is exempt from the laws that impose harsh determinism on matter. But if it is accepted that a purely material being can think (thus denying the assumption that only the immaterial can think) it would seem to be acceptable to consider that such a being could also be free (thus denying the assumption that only the immaterial can be free).