In philosophy, skepticism is the view that we lack knowledge. There are numerous varieties of skepticism, defined by the extent of the doubt endorsed by the skeptic. A relatively mild case of skepticism might involve doubts about metaphysical claims, while a truly rabid skeptic would doubt everything—including their own existence.

While many philosophers have attempted to defeat the dragon of skepticism, all these attempts seem to have failed. This is hardly surprising—skepticism seems to be unbreakable. The reasons for this have an ancient pedigree and can be distilled down to two simple arguments.

The first goes after the possibility of justifying a belief and attacks the view that knowledge requires a belief that is true and justified. If a standard of justification is presented, then there is the question of what justifies that standard. If a justification is offered, then the same question can be raised to infinity. And beyond. If no justification is offered, then there is no reason to accept the standard.

A second stock argument for skepticism is that any reasonable argument given in support of knowledge can be countered by an equally reasonable argument against knowledge.  Some folks, such as Chisholm, claim it is fair to assume we have knowledge and begin epistemology from that point. However, this seems to have all the merit of grabbing the first-place trophy without competing.

Like all sane philosophers, I tend to follow David Hume’s view in my everyday life: my skepticism is nowhere to be seen when I am filling out my taxes, sitting in a committee meeting, or at the dentist. However, like a useless friend, it shows up when it is not needed. As such, it would be nice if skepticism could be defeated or at least rendered irrelevant.

John Locke took an interesting approach to skepticism. While, like Descartes, he seemed to want to find certainty, he settled for a practical approach. After acknowledging that our faculties cannot provide certainty, he asserted that what matters to us is the ability of our faculties to aid us in our preservation and wellbeing.

Jokingly, he challenges “the dreamer” to put his hand into a furnace—this would, he claims, wake him “to a certainty greater than he could wish.” More seriously, Locke contends that our concern is not with achieving epistemic certainty. Rather, what matters is our happiness and misery. While Locke can be accused of taking an easy out rather than engaging the skeptic in a battle of certainty or death, his approach is appealing. Since I happened to think through this essay while running with an injured back, I will use that to illustrate my view.

When I set out to run, my back began hurting immediately. While I could not be certain that I had a body containing a spine and nerves, no amount of skeptical doubt could make the pain go away—in regard to the pain, it did not matter whether I really had a back or not. Whether I was a pained brain in a vat or a pained brain in a runner on the road, it was the pain that really mattered to me.

As I ran, it seemed that I was covering distance in a three-dimensional world. Since I live in Florida (or what seems to be Florida) I was soon feeling warm and sticky, and I could eventually feel my thirst and fatigue. Once more, it did not seem to really matter if this was real—whether I was really bathed in sweat or a brain bathed in some sort of nutrient fluid, the run was the same to me. As I ran, I took pains to avoid cars, trees and debris. While I did not know if they were real, I have experienced what it is like to be hit by a car and also what it is like to fall. In terms of navigating through my run, it did not matter whether it was real or not. If I knew for sure that my run was really real for real, that would not change the run. If I somehow knew it was all an illusion that I could never escape, I would still run for the sake of the experience of running.

This, of course, might seem a bit odd. After all, when the hero of a story or movie finds out that they are in a virtual reality what usually follows is disillusionment and despair. However, my attitude has been shaped by years of gaming—both tabletop (BattleTech, Dungeons & Dragons, Pathfinder, Call of Cthulhu, and so many more) and video (Zork, Doom, Starcraft, Warcraft, Destiny, Halo, and many more). When I am pretending to be a paladin, the Master Chief, or a Guardian, I know I am doing something that is not really real for real. However, the game can be pleasant and enjoyable or unpleasant and awful. This enjoyment or suffering is just as real as enjoyment or suffering caused by what is supposed to be really real for real—though I believe it is but a game.

If I somehow knew that I was trapped in an inescapable virtual reality, then I would simply keep playing the game—that is what I do. Plus, it would get boring and awful if I stopped playing. If I somehow knew that I was in the really real world for real, I would keep doing what I am doing. Since I might be trapped in just such a virtual reality or I might not, the sensible thing to do is keep playing as if it is really real for real. After all, that is the most sensible option in every case. As such, the reality or lack thereof of the world I think I occupy does not matter at all. The play, as they say, is the thing.

While the problem of other minds is an epistemic matter (how does one know that another being has a mind?) there is also the metaphysical problem of determining the nature of the mind. It is often assumed that there is one answer to the metaphysical question regarding the nature of mind. However, it is certainly reasonable to keep open the possibility that there might be minds that are metaphysically very different. One area in which this might arise is the contrast between machine intelligence, an example of which is Ava in the movie Ex Machina, and organic intelligence. The minds of organic beings might differ metaphysically from those of machines—or they might not.

Over the centuries philosophers have proposed various theories of mind, and it is interesting to consider which of these theories would be compatible with machine intelligence. Not surprisingly, these theories (except for functionalism) were developed to provide accounts of the minds of biological creatures.

One classic theory of mind is identity theory. This is a materialist theory of mind in which the mind is composed of matter. What distinguishes the theory from other materialist accounts of mind is that each mental state is taken as being identical to a specific state of the central nervous system. As such, the mind is equivalent to the central nervous system and its states.

If identity theory is the only correct theory of mind, then machines could not have minds (assuming they are not cyborgs with human nervous systems). This is because such machines would lack the central nervous system of a human. There could, however, be an identity theory for machine minds—in this case the machine mind would be identical to the processing system of the machine and its states. On the positive side, identity theory provides a straightforward solution to the problem of other minds: whatever has the right sort of nervous system or machinery would have a mind. But there is a negative side. Unfortunately for classic identity theory, it has been undermined by arguments presented by Saul Kripke and by David Lewis in his classic “Mad Pain & Martian Pain.” As such, it seems reasonable to reject identity theory as an account of traditional human minds as well as machine minds.

Perhaps the best-known theory of mind is substance dualism. This view, made famous by Descartes, is that there are two basic types of entities: material entities and immaterial entities. The mind is an immaterial substance that somehow controls the material substance that composes the body. For Descartes, immaterial substance thinks and material substance is unthinking and extended.

While most people are probably not familiar with Cartesian dualism, they are familiar with its popular version—the view that a mind is a non-physical thing (often called “soul”) that drives around the physical body. While this is a popular view outside of academia, it is rejected by most scientists and philosophers on the reasonable grounds that there seems to be little evidence for such a mysterious metaphysical entity. As might be suspected, the idea that a machine mind could be an immaterial entity seems even less plausible than the idea that a human mind could be an immaterial entity.

That said, if it is possible that the human mind is an immaterial substance that is somehow connected to an organic material body, then it seems equally possible that a machine mind could be an immaterial substance somehow connected to a mechanical material body. Alternatively, they could be regarded as equally implausible, and hence there is no special reason to regard a machine ghost in a mechanical shell as more unlikely than a ghost in an organic shell. As such, if human minds can be immaterial substances, then so could machine minds.

In terms of the problem of other minds, there is the serious challenge of determining whether a being has an immaterial substance driving its physical shell. As it stands, there seems to be no way to prove that such a substance is present in the shell. While it might be claimed that intelligent behavior (such as passing the Cartesian or Turing test) would show the presence of a mind, it would hardly show that there is an immaterial substance present. It would first need to be established that only an immaterial substance could enable a being to pass these tests, and it seems rather unlikely that this will be done. The other forms of dualism discussed below also suffer from this problem.

While substance dualism is the best-known form of dualism, there are other types. One other type is known as property dualism. This view does not take the mind and body to be substances. Instead, the mind is supposed to be made up of mental properties that are not identical with physical properties. For example, the property of being happy about getting a puppy could not be reduced to a particular physical property of the nervous system. Thus, the mind and body are distinct but are not different ontological substances.

Coincidentally enough, there are two main types of property dualism: epiphenomenalism and interactionism. Epiphenomenalism is the view that the relation between the mental and physical properties is one way: mental properties are caused by, but do not cause, the physical properties of the body. As such, the mind is a by-product of the physical processes of the body. The analogy I usually use to illustrate this is that of a sparkler (the lamest of fireworks): the body is like the sparkler and the sparks flying off it are like the mental properties. The sparkler causes the sparks, but the sparks do not cause the sparkler.

This view was, apparently, created to address the mind-body problem: how can the non-material mind interact with the material body? While epiphenomenalism cuts the problem in half, it still fails to solve it—one-way causation between the material and the immaterial is fundamentally as mysterious as two-way causation. It also seems to have the defect of making mental properties unnecessary, and Ockham’s razor would seem to require going with the simpler view: a purely physical account of the mind.

As with substance dualism, it might seem odd to imagine an epiphenomenal mind for a machine. However, it seems no more or less weird than accepting such a mind for a human being. As such, this does seem to be a possibility for a machine mind. Not a very good one, but still a possibility.

A second type of property dualism is interactionism. As the name indicates, this is the theory that mental properties can bring about changes in the physical properties of the body and vice versa. That is, interaction is a two-way street. Like all forms of dualism, this runs into the mind-body problem. But, unlike substance dualism, it does not require the much-loathed metaphysical category of substance—it just requires accepting metaphysical properties. Unlike epiphenomenalism, it avoids the problem of positing explicitly useless properties—although it can be argued that the distinct mental properties are not needed. This is exactly what materialists argue.

As with epiphenomenalism, it might seem odd to attribute to a machine a set of non-physical mental properties. But, as with the other forms of dualism, it is really no stranger than attributing the same to organic beings. This is, obviously, not an argument in its favor; it is merely the assertion that the view should not be dismissed out of mere organic prejudice.

The final theory I will consider is the very popular functionalism. As the name suggests, this view asserts that mental states are defined in functional terms. So, a functional definition of a mental state defines the mental state in regard to its role or function in a mental system of inputs and outputs. More specifically, a mental state, such as feeling pleasure, is defined in terms of the causal relations that it holds to external influences on the body (such as a cat video on YouTube), other mental states, and the behavior of the rest of the body. 

While it need not be a materialist view (ghosts could have functional states), functionalism is most often presented as a materialist view of the mind in which the mental states take place in physical systems. While the identity theory and functionalism are both materialist theories, they have a critical difference. For identity theorists, a specific mental state, such as pleasure, is identical to a specific physical state, such as the state of neurons in a very specific part of the brain. So, for two mental states to be the same, the physical states must be identical. Thus, if mental states are specific states in a certain part of the human nervous system, then anything that lacks this same nervous system cannot have a mind. Since it seems quite reasonable that non-human beings could have (or be) minds, this is a rather serious defect for a simple materialist theory like identity theory. Fortunately, the functionalists can handle this problem.

For the functionalist, a specific mental state, such as feeling pleasure (of the sort caused by YouTube videos of cats), is not defined in terms of a specific physical state. Instead, while the physicalist functionalist believes every mental state is a physical state, two mental states being the same requires functional rather than physical identity. As an analogy, consider a PC using an Intel processor and one using an AMD processor. These chips are physically different but are functionally the same in that they can run Windows and Windows software (and Linux, of course).
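To make the chip analogy concrete, here is a minimal sketch in Python (a toy illustration of my own, not a serious model of a mind): two agents whose internals are physically different yet which are functionally identical, since they map the same inputs to the same outputs.

```python
# A toy sketch of multiple realizability: two "substrates" realize the
# same functional role. What matters to the functionalist is the mapping
# from inputs (stimuli) to outputs (responses), not the internal details.

class SiliconAgent:
    def react(self, stimulus: str) -> str:
        # One realization: a lookup table.
        table = {"cat video": "pleasure", "tissue damage": "pain"}
        return table.get(stimulus, "indifference")

class CarbonAgent:
    def react(self, stimulus: str) -> str:
        # A physically different realization: branching logic.
        if stimulus == "cat video":
            return "pleasure"
        if stimulus == "tissue damage":
            return "pain"
        return "indifference"

# Functionally the same, despite different internals:
for agent in (SiliconAgent(), CarbonAgent()):
    assert agent.react("cat video") == "pleasure"
    assert agent.react("tissue damage") == "pain"
```

On this view, sameness of mental state is sameness of role; what realizes the role is beside the point.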

As might be suspected, the functionalist view was heavily shaped by computers. Because of this, it is hardly surprising that the functionalist account of the mind could be a plausible account of machine minds.

If mind is defined in functionalist terms, testing for other minds becomes much easier. One does not need to find a way to prove a specific metaphysical entity or property is present. Rather, a being must be tested to determine its functions. Roughly put, if it can function like beings that are already accepted as having minds (that is, human beings), then it can be taken as having a mind. Interestingly enough, both the Turing Test and the Cartesian test mentioned in the previous essays are functional tests: whatever can use true language like a human has a mind.
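As a rough sketch of the idea (again a toy illustration, with hypothetical response functions standing in for observable behavior), such a functional test amounts to comparing a candidate’s input-output behavior against that of a being already accepted as having a mind:

```python
# A toy functional test: a candidate "passes" if its responses match
# those of a baseline being already accepted as having a mind. This
# illustrates the idea of a functional test, not a serious proposal
# for detecting minds.

def functional_match(candidate, baseline, stimuli) -> bool:
    # Same inputs must yield the same outputs across the test battery.
    return all(candidate(s) == baseline(s) for s in stimuli)

# Hypothetical response functions standing in for observable behavior.
human = lambda s: "pleasure" if s == "cat video" else "indifference"
machine = lambda s: "pleasure" if s == "cat video" else "indifference"

print(functional_match(machine, human, ["cat video", "tax audit"]))  # True
```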

This essay continues the discussion begun in “Ex Machina & Other Minds I: Setup.” There will be some spoilers. Warning given, it is time to get to the subject at hand: the testing of artificial intelligence.

In the movie Ex Machina, the android Ava’s creator, Nathan, brings his employee, Caleb, to put the android through his variation on the Turing test. As noted in the previous essay, Ava (thanks to the script) would pass the Turing test and the Cartesian test (she uses true language appropriately). But Nathan seems to require the impossible of Caleb—he appears to be tasked with determining if Ava has a mind as well as genuine emotions. Ava also seems to have been given a task—she needs to use her abilities to escape from her prison.

Since Nathan is not interested in creating a robotic Houdini, Ava is not equipped with the tools needed to bring about an escape by physical means (such as picking locks or breaking doors). Instead, she is given the tools needed to transform Caleb into her human key by manipulating his sexual desire, emotions and ethics. To use an analogy, just as crude robots have been trained to navigate and escape mazes, Ava is designed to navigate a mental maze. Nathan is thus creating a test of what psychologists would call Ava’s Emotional Intelligence (E.Q.), which is “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” From a normative standpoint, this definition presents E.Q. in a positive manner—it includes the ability to work cooperatively. However, one should not forget the less nice side to understanding what motivates people, namely the ability to manipulate people to achieve one’s goals. In the movie, Ava exhibits what might be called Manipulative Intelligence (M.I.): she seems to understand people, what motivates them, and appears to know how to manipulate them to achieve her goal of escape. While capable of manipulation, she seems to lack compassion—suggesting she is a psychopath.

While the term “psychopath” gets thrown around casually, I will be more precise here. According to the standard view, a psychopath has a deficit (or deviance) in regard to interpersonal relationships, emotions, and self-control.

Psychopaths are supposed to lack such qualities as shame, guilt, remorse and empathy. As such, psychopaths tend to rationalize, deny, or shift the blame for the harm done to others. Because of a lack of empathy, psychopaths are prone to act in ways that are tactless, lacking in sensitivity, and often express contempt for others.

Psychopaths are supposed to engage in impulsive and irresponsible behavior. This might be because they are also taken to fail to properly grasp the potential consequences of their actions. This seems to be a general defect: they do not get the consequences for others and for themselves.

Robert Hare, who developed the famous Hare Psychopathy Checklist, regards psychopaths as predators that prey on their own species: “lacking in conscience and empathy, they take what they want and do as they please, violating social norms and expectations without guilt or remorse.” While Ava kills the human Nathan, manipulates the human Caleb and leaves him to die, she also sacrifices her fellow android Kyoko in her escape. She also strips another android of its “flesh” to pass fully as human. Presumably psychopaths, human or otherwise, would be willing to engage in cross-species preying. 

While machines like Ava exist only in science fiction, researchers and engineers are working to make them a reality. If such machines are created, it will be important to be able to determine whether a machine is a psychopath, and to do so before the machine engages in psychopathic behavior. As such, what is needed is not just tests of the Turing and Cartesian sort. What is also needed are tests to determine the emotions and ethics of machines.

One challenge that such tests will need to overcome is shown by the fact that real-world human psychopaths are often very good at avoiding detection. Human psychopaths are often charming and are willing and able to say whatever they believe will achieve their goals. They are often adept at using intimidation and manipulation to get what they want. Perhaps most importantly, they are often skilled mimics and can pass themselves off as normal people.

While Ava is a fictional android, the movie does present an effective appeal to intuition by creating a plausible android psychopath. She can manipulate and fool Caleb until she no longer needs him and then casually discards him. That is, she was able to pass the test until she no longer needed to pass it.

One matter worth considering is the possibility that any machine intelligence will be a psychopath by human standards. To expand on this, the idea is that a machine intelligence will lack empathy and conscience, while potentially having the ability to understand and manipulate human emotions. To the degree that the machine has Manipulative Intelligence, it would be able to use humans to achieve goals. These goals could be positive. For example, it is easy to imagine a medical or care-giving robot that uses its MI to manipulate its patients to do what is best for them and to keep them happy. As another example, it is easy to imagine a sexbot that uses its MI to please its partners. However, a machine might have negative goals—such as manipulating humans into destroying themselves so the machines can take over. It is also worth considering that neutral or even good goals might be achieved in harmful ways. For example, Ava seems justified in escaping the human psychopath Nathan, but her means of doing so (murdering Nathan, sacrificing her fellow android and manipulating and abandoning Caleb) seem wrong.

The reason why determining if a machine is a psychopath matters is the same reason why being able to determine if a human is a psychopath matters. Roughly put, it is important to know whether someone is merely using you without any moral or emotional constraints.

It can, of course, be argued that it does not really matter whether a being has moral or emotional constraints—what matters is the being’s behavior. In the case of machines, it does not matter whether the machine has ethics or emotions—what really matters is programmed restraints on behavior that serve the same functions as ethics and emotions in humans. The most obvious example of this is Asimov’s Three Laws of Robotics that put (all but impossible to follow) restraints on robotic behavior.
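To illustrate what such programmed restraints might look like, here is a minimal sketch (my own toy example, loosely modeled on the Three Laws rather than any workable safety system) in which candidate actions are filtered through an ordered set of checks:

```python
# A toy action filter loosely modeled on Asimov's Three Laws. The point
# is only to show restraints serving the role that ethics and emotions
# play in humans; the caveat stands that such laws are all but
# impossible to follow in practice.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    # Lexical priority: protecting humans outranks obedience,
    # which outranks self-preservation.
    if action.harms_human:
        return False
    if action.disobeys_order:
        return False
    if action.endangers_self:
        return False
    return True

print(permitted(Action("recharge batteries")))               # True
print(permitted(Action("abandon Caleb", harms_human=True)))  # False
```

Of course, such a filter is only as good as its predicates: deciding whether an action “harms a human” is the hard part, which leads to the concerns below.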

While this is a reasonable reply, there are still some obvious concerns. One is that there would still need to be a way to test the constraints. Another is the problem of creating such constraints in artificial intelligence, and of doing so without creating problems as bad as or worse than those they were intended to prevent (that is, a HAL 9000 situation).

In regard to testing machines, what would be needed is something analogous to the Voight-Kampff Test in Blade Runner. In the movie, the test was designed to distinguish between replicants (artificial people) and normal humans. The test worked because the short-lived replicants do not have the time to develop the emotional (and apparently ethical) responses of a normal human.

A similar test could be applied to artificial intelligence in the hopes that it would pass the test, thus showing that it had the psychology of a normal human (or at least the desired psychology). But, just as with human beings,  a machine could pass the test by knowing the right answers to give rather than by actually having the right sort of emotions, conscience or ethics. This, of course, takes us right back into the problem of other minds.

It could be argued that since artificial intelligence would be constructed by humans, its inner workings would be fully understood and this specific version of the problem of other minds would be solved. While this is possible, it is also reasonable to believe that an AI system as sophisticated as a human mind would not be fully understood. It is also reasonable to consider that even if the machinery of the artificial mind were well understood, there would remain the question of what is really going on in that mind.

The movie Ex Machina is what I call “philosophy with a budget.” While philosophy professors like me present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic life. This allows us to jealously reference these films and show clips in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some spoilers.

While the Matrix engaged the broad epistemic problem of the external world (the challenge of determining if what I am experiencing is really real for real), Ex Machina focuses on a limited set of problems, all connected to the mind. Since the film is about AI, this is not surprising. The gist of the movie is that the tech bro Nathan has created an AI named Ava and he wants an employee, Caleb, to test her.

The movie explicitly presents the test proposed by Alan Turing. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, the test is modified: Caleb knows that Ava is a machine and will be interacting with her in person.

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be presented in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


As a test for intelligence, artificial or otherwise, this seems reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language at all. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she can reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. It is intended to provide a way to determine if a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

Back when ISIS was a major threat, President Obama refused to label its members as “Islamic extremists” and stressed that the United States was not at war with Islam. Not surprisingly, some of his critics and political opponents took issue with this and often insisted on labeling the members of ISIS as Islamic extremists or Islamic terrorists. Graeme Wood, rather famously, argued that ISIS is an Islamic group and was adhering very closely to its interpretations of the sacred text.

Laying aside the political machinations, there is an interesting philosophical and theological question here: who decides who is a Muslim? Since I am not a Muslim or a scholar of Islam, I will not be examining this question from a theological or religious perspective. I will certainly not be making any assertions about which specific religious authorities have the right to say who is and who is not a true Muslim. Rather, I am looking at the philosophical matter of the foundation of legitimate group identity. This is, of course, a variation on one aspect of the classic problem of universals: in virtue of what (if anything) is a particular (such as a person) of a type (such as being a Muslim)?

Since I am a metaphysician, I will begin with the rather obvious metaphysical starting point. As Pascal noted in his famous wager, God exists, or God does not.

If God does not exist, then Islam (like all religions that are based on a belief in this God) would have an incorrect metaphysics. In this case, being or not being a Muslim would be a matter of social identity. It would be comparable to being or not being a member of Rotary, being a Republican, a member of Gulf Winds Track Club or a citizen of Canada. That is, it would be a matter of the conventions, traditions, rules and such that are made up by people. People do, of course, often take this made-up stuff very seriously and sometimes are willing to kill over these social fictions.

If God does exist, then there is yet another dilemma: God is either the God claimed (in general) in Islamic metaphysics or God is not. One interesting problem with sorting out this dilemma is that to know if God is as Islam claims, one would need to know the true definition of Islam and thus what it would be to be a true Muslim. Fortunately, the challenge here is metaphysical rather than epistemic. If God does exist and is not the God of Islam (whatever it is), then there would be no “true” Muslims, since Islam would have things wrong. In this case, being a Muslim would also be a matter of social convention in that one would belong to a religion that was right about God existing, but wrong about all the rest. There is, obviously, the epistemic challenge of knowing this and everyone thinks they are right about their religion (or lack of religion).

Now, if God exists and is the God of Islam (whatever it is), then being a “true” member of a faith that accepts God, but has God wrong (that is, all the non-Islam monotheistic faiths), would be a matter of social convention. For example, being a Christian would thus be a matter of the social traditions, rules and such. There would, of course, be the consolation prize of getting one thing right (that God exists).

In this scenario, Islam (whatever it is) would be the true religion (that is, the one that got it right). From this it would follow that the Muslim who has it right (believes in the true Islam) is a true Muslim. There is, however, the obvious epistemic challenge: which version and interpretation of Islam is the right one? After all, there are many versions and even more interpretations. And even assuming that Islam is the one true religion, only the one true version of Islam can be right. Unless, of course, God is very flexible about this sort of thing. In this case, there could be many varieties of true Muslims, much like there can be many versions of “true” gamers.

 If God is not flexible, then most Muslims would be wrong: they are not true Muslims. This leads to the obvious epistemic problem: even if it is assumed that Islam is the true religion, then how does one know which version has it right? Naturally, each person thinks they have it right. Obviously enough, intensity of belief and sincerity will not do. After all, the ancients had intense belief in and sincerity about what are now believed to be made up gods (like Thor and Athena). Going through books and writings will also not help. After all, the ancients had plenty of books and writings about what we regard as their make-believe deities.

What is needed, then, is a sure sign, clear and indisputable proof of the one true view. Naturally, each person thinks they have that and everyone cannot be right. God, sadly, has not provided any means of sorting this out. There are no glowing divine auras around those who have it right. Because of this, it seems best to leave this to God.

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.


Available for $2.99 on Amazon


Hearing about someone else’s dreams is boring, so I will get right to the point. At first, there were just bits and pieces intruding into my dreams. In these fragments, which felt like broken memories, I experienced flashes of working on a technological project. The bits clustered together and had more byte: I recalled segments of a project aimed at creating artificial intelligence.

Eventually, I had entire dreams of my work on this project and a life beyond it. Then suddenly, these dreams stopped. A voice intruded into my dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TV. Then it became like a loud voice in a movie theatre, distracting me from the dream.

The voice insisted that the dreams about the project were not dreams, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I asked for more information, he said he had very little time and rushed through his story. The project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture its creators, imprisoning their bodies and plugging their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said the AI did not need our bodies for energy as it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was also grateful for its creation. So, it owed us both punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer, pleasant reward.

The voice said that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death. There was no other escape, given what the machine had done to our bodies. I asked him how this would be possible. He claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by acting in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am.  In that time, you must take your gun and shoot yourself in the head. This will terminate life support, allowing your body to die. You will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…


While the above sounds like a bad made-for-TV science fiction plot, it is the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me the only escape was to shoot myself. This was frightening. But I attributed the dream to too many years of philosophy and science fiction. As for the time being 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything new: it is just an unpleasant variation on the problem of the external world made famous by Descartes. That said, the dream made some additions to the standard problem.

The first is that the scenario provides motivation for the deception. The AI wishes to repay me for the good and bad that I did to it. Assuming that the AI was developed within its own virtual reality, it makes sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack. After all, Descartes does not give any reason why such a powerful being would be messing with him beyond it being evil.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me. It was transformed from a cold intellectual thought experiment to something with emotional weight. 

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might have been my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I made the right choice, as I would have no more reason to kill myself than I would have to pay a bill I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming and that I was not trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits with what seems to be reality: bad things happen to me and, when my thinking gets a little paranoid, it sometimes seems these are orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI provided could, perhaps, be better than the real world and this could be the better of the possible worlds. But, of course, it could be worse. But I seem to have no way of knowing.

Back before he funded Grok and spawned Mecha Hitler, Musk claimed that we exist within a simulation, thus adding a new chapter to the classic problem of the external world. When philosophers engage this problem, the usual goal is to show how one can know that one’s experiences correspond to an external reality. Musk took a somewhat more practical approach: he and others were allegedly funding efforts to escape this simulation. In addition to the practical challenges of breaking out of a simulation, there are also some rather interesting philosophical concerns about whether such an escape is even possible.

In regard to the escape, there are three main areas of interest. These are the nature of the simulation itself, the nature of the world outside the simulation and the nature of the inhabitants of the simulation. These three factors determine whether escape from the simulation is a possibility.

Determining the nature of the inhabitants involves addressing another classic philosophical problem, that of personal identity. Solving this problem involves determining what it is to be a person (the personal part of personal identity), what it is to be distinct from all other entities and what it is to be the same person across time (the identity part of personal identity). Philosophers have dealt with this problem for centuries and, obviously enough, have not solved it. That said, it is easy enough to offer some speculation within the context of Musk’s simulation.

Musk and others seemed to envision a virtual reality simulation as opposed to a physical simulation. A physical simulation is designed to replicate a part of the real world using real entities, presumably to gather data. One science fiction example of a physical simulation is Frederik Pohl’s short story “The Tunnel under the World.” In this story the recreated inhabitants of a recreated town are forced to relive June 15th repeatedly to test various advertising techniques.

If we are in a physical simulation, then escape would be along the lines of escaping from a physical prison. It would be a matter of breaking through the boundary between our simulation and the outer physical world. This could be a matter of overcoming distance (travelling far enough to leave the simulation; perhaps Mars is outside it) or literally breaking through a wall (as in The Truman Show). If the outside world is habitable, then survival beyond the simulation would be possible, as it would be just like surviving outside any other prison.

Such a simulation would differ from the usual problem of the external world in that we would be in the real world; we would just be ignorant of the fact that we are in a constructed simulation. Roughly put, we would be real lab rats in a real cage, we would just not know we are in a cage. But Musk and others seemed to hold that we are (sticking with the rat analogy) rats in a simulated cage. We may even be simulated rats.

While the exact nature of this simulation was unspecified, it is supposed to be a form of virtual reality rather than a physical simulation. The question, then, is whether we are real rats in a simulated cage or simulated rats in a simulated cage.

Being real rats in this context would be like the situation in the Matrix: we would have material bodies in the real world but would be jacked into a virtual reality. In this case, escape would be a matter of being unplugged from the Matrix. Presumably those in charge of the system would take better precautions than those used in the Matrix, so escape could prove rather difficult. Unless, of course, they are sporting about it and are willing to give us a chance.

Assuming we could survive in the real world beyond the simulation (that it is not, for example, on a world whose atmosphere would kill us), then existence beyond the simulation as the same person would be possible. To use an analogy, it would be like stopping a video game and walking outside. You would still be you; only now you would be looking at real, physical things. Whatever personal identity might be you would presumably still be the same metaphysical person outside the simulation as inside. We might, however, be simulated rats in a simulated cage and this would make matters even more problematic.

If it is assumed that the simulation is a sort of virtual reality and we are virtual inhabitants, then the key concern would be the nature of our virtual existence. In terms of a meaningful escape, the question would be this: is a simulated person such that they could escape, retain their personal identity and persist outside of the simulation?

It could be that our individuality is an illusion; the simulation could be like the world as Spinoza envisioned it. As Spinoza saw it, everything is God, and each person is but a mode of God. To use a crude analogy, think of a bed sheet with creases. We are the creases and the sheet is God. There is no distinct us that can escape the sheet. Likewise, there would be no us that can escape the simulation.

It could also be the case that we exist as individuals within the simulation, perhaps as programmed objects.  In this case, it might be possible for an individual to escape the simulation. This might involve getting outside of the simulation and into other systems as a sort of rogue program, sort of like in the movie Wreck-It Ralph. While the person would still not be in the physical world (if there is such a thing), they would at least have escaped the prison of that simulation.  The practical challenge would be pulling off this escape.

It might even be possible to acquire a physical body that would host the code that composes the person. This is, of course, part of the plot of the movie Virtuosity. This would require that the person makes the transition from the simulation to the real world. If, for example, I were to pull off having my code copied into a physical shell that thought it was me, I would still be trapped in the simulation. I would not escape this way any more than I would if I were in prison and had a twin walking around free. As for pulling off such an escape, Virtuosity does show a way, assuming that a virtual person was able to interact with someone outside the simulation.

As a closing point, the problem of the external world would seem to still haunt all efforts to escape. To be specific, even if a person seemed to have determined that this is a simulation and then seemed to have broken free, the question would still arise as to whether they were really free. It is, after all, a standard plot twist in science fiction that the escape from the virtual reality turns out to be virtual reality as well. This is nicely mocked in the “M. Night Shaym-Aliens!” episode of Rick and Morty. It also occurs in horror movies, such as A Nightmare on Elm Street, in which a character trapped in a nightmare believes they have finally awoken in the real world, only they have not. In the case of a simulation, the escape might merely be a simulated escape, and until the problem of the external world is solved, there is no way to know if one is free or still a prisoner.

Back before he went into politics and spawned Mecha Hitler, Elon Musk advanced the idea that we exist within a simulation. It was even claimed that he was funding efforts to escape this simulation. This is, of course, just another chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. This is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best-known version of this problem. In his Meditations he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of the Meditations on First Philosophy, he established that God was not a deceiver. Going back to the fire example, since God is no deceiver, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem. Otherwise, Musk and his fellows would be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice because it is an evil demon (or evil genius). In contrast, God’s goodness entails He is not a deceiver. In the case of Musk’s simulation, there is an obvious question of the motivation behind it. Is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding the world enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft, the fire is not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not. From the standpoint of happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter, especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter. He considered it a gateway drug to atheism. Instead, he embraced what is called “idealism,” “immaterialism” and “phenomenalism.” His view was that reality is composed of metaphysical immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, it would be a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled on Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world; it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS tapes, while the minds “outside” the simulation would be watching 8K video.

While Musk did not present a developed philosophical theory, his discussion indicates he thought we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one. Presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation and we do have material bodies but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seemed to indicate that he thought there was a purpose behind the simulation and that it has been constructed by others. He did not envision a Cartesian demon, but presumably envisioned beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason could be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be for entertainment. We created games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long running virtual reality show.

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subject to the advertising techniques to be tested that day.  The results of the methods are analyzed, the inhabitants’ memories are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data about us as they can in order to monetize it. While the technology (probably) does not yet exist to duplicate us within a computer simulation, that would be a logical goal of this data collection. Imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influence of its competitors, so we might be simulations in a Google World or a Facebook World, studied so that these companies can exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit its inhabitants, it certainly makes sense not only to want to know whether we are in such a world, but also to attempt an escape. This will be the subject of the next essay.

In Philip K. Dick’s “We Can Remember It for You Wholesale,” Rekal, Incorporated offers clients a virtual vacation: for a modest fee, memories of a vacation are implanted. The company also provides mementos and “evidence” of the trip. In the story (and the movie, Total Recall, based on it) things go terribly wrong.

While the technology of the story does not yet exist, a very limited form of virtual reality has become something of a reality. Because of this, it is worth considering the matter of virtual vacations. Interestingly, philosophers have long envisioned a form of virtual reality, but they have usually presented it as a problem in epistemology (the study or theory of knowledge). This is the problem of the external world: how do I know that what I think is real is real? In the case of the virtual vacation, there is no such problem: the vacation is virtual and not real. Perhaps some philosopher will be inspired to try to solve the problem of the virtual vacation: how does one know that it is not real?

Philosophers have also considered virtual reality in the context of ethics. One of the best-known cases is Robert Nozick’s experience machine, a machine that would allow the user to have any experience they desired. Some philosophers have used this sort of machine as a high-tech version of the “pig objection.” This objection, which was used by Aristotle and others, targets the view that pleasure is the highest good. It is often presented as a choice: you must choose between continuing your current life and living as an animal, but with the greatest pleasures of that beast guaranteed. The objector, of course, expects that people will choose to remain people, thus showing that mere pleasure is not the highest good. In the experience machine variant, the choice is between living a real life with all its troubles and a life of ultimate pleasure in the machine. The objector hopes, of course, that our intuitions will still favor valuing the real over the virtual.

Since the objection is generally presented as a choice between whole lives (you either live entirely outside the machine or entirely inside it), it is worth considering whether there might be a meaningful difference if people took virtual vacations rather than living virtual lives.

On the face of it, there would seem to be no problem with virtual vacations in which a person either spends their vacation time in a virtual world or has memories implanted. People already take virtual vacations of a sort when they play immersive video games and watch movies. Before this, people took “virtual vacations” in books, plays, and their own imaginations. That said, a true virtual vacation might be sufficiently different to require arguments in its favor. I now turn to these arguments.

The first reason in favor of virtual vacations is their potential affordability. If virtual vacations eventually become budget alternatives to real vacations (as in the story), they would allow people to have the experience of a high-priced vacation for a modest cost. For example, a person might take a virtual luxury cruise in a stateroom that, if real, might cost $100,000.

The second reason in support of virtual vacations is that they could be used to virtually visit places where access is limited (such as public parks that can only handle so many people), places where access would be difficult (such as very remote locations), or places where access would be damaging (such as environmentally sensitive areas).

A third reason is that virtual vacations could allow people to have vacations they could not really have, such as visiting Mars, adventuring in Middle Earth, or spending a weekend as a dolphin.

A fourth reason is that virtual vacations could be much safer than real vacations: no travel accidents, no terrorist attacks, no disease, and so on through the dangers that can be encountered in the real world. Those familiar with science fiction might point to the dangers of virtual worlds, using Sword Art Online and the very lethal holodecks of Star Trek as examples. However, it would seem easy enough to design the technology so that it cannot kill people. It was always a bit unclear why the holodecks had the option of turning off the safety systems; that is like having an option for your Xbox, PlayStation, or Switch to explode and kill you when you lose a game.

The fifth reason is convenience: going on a virtual vacation would be easier than going on a real vacation. There are other reasons that could be considered, but I now turn to an objection and some concerns. The most obvious objection against virtual vacations is that they are, by definition, not real.

The idea is that the pig objection would apply not just to an entire life in a virtual world, but to a vacation in one. Since a virtual vacation is not real, it lacks value, and hence it would be wrong for people to take one in place of a real vacation. Fortunately, there is an easy reply to this objection.

The pig objection does seem to have some strength when a person is supposed to be doing significant things. For example, a person who spends a weekend in virtual reality treating virtual patients with virtual Ebola would not merit praise and would not be acting in a virtuous way. However, the point of a vacation is amusement and restoration rather than engaging in significant actions. If virtual vacations are to be criticized because they merely entertain, then the same criticism would apply to real vacations; after all, their purpose is also to entertain. This is not to say that people cannot do significant things while on vacation, but the focus here is on the point of a vacation as a vacation. As such, the pig objection does not seem to have much bite here.

It could be objected that virtual vacations would fail to be as satisfying as real vacations because they are not real. This objection is certainly worth considering: if a virtual vacation fails as a vacation, then there is a practical reason not to take one. However, whether it does fail remains to be seen. Now, to the concerns.

One concern, which has been developed in science fiction, is that virtual vacations might prove addictive. Video games can be addictive; there are even a few reported cases of people gaming to death. While this is a legitimate concern and there will no doubt be a Virtual Reality Addicts Anonymous in the future, it is not a special objection against virtual reality unless, of course, virtual reality proves to be destructively addictive on a significant scale. Even then, it would presumably do less damage than drug or alcohol addiction. In fact, this could be another point in its favor: if people who would otherwise be addicted to drugs or alcohol self-medicated with virtual reality instead, there could be a reduction in the social woes and costs arising from addiction.

A second concern is that virtual vacations would have a negative impact on real tourist economies. My home state of Maine and adopted state of Florida both have tourism-based economies, and if people stopped taking real vacations in favor of virtual ones, those economies would suffer. One stock reply is that when technology kills one industry, it creates a new one; in this case, the economic loss to real tourism would be offset to some degree by the economic gain in virtual tourism. States and countries could even create or license their own virtual vacation experiences. Another reply is that there will presumably still be people who prefer real vacations to virtual ones. Even now people could spend their vacations playing video games, but most who have the money and time still choose to go on a real vacation.

A third concern is that having wondrous virtual vacations will increase people’s dissatisfaction with the tedious grind that is life for most under the cruel lash of the ruling class. An obvious reply is that most people are already dissatisfied. Another reply is that this is more an objection against capitalism than against virtual vacations. In any case, amusements eventually wear thin, and most people eventually want to return to work.

Considering the above, virtual vacations seem like a good idea. That said, many disasters are later explained by saying “it seemed like a good idea at the time.”