While Florida Agricultural & Mechanical University has obviously been concerned with preparing students for careers, this semester I learned that we are explicitly moving away from the idea of education having intrinsic value and instead embracing workforce readiness.

To be fair and balanced, this can be seen as an acknowledgement of reality: most of my students have always been rationally focused on education as a means to a career. This focus also has clear practical value for our students, who, unless they have inherited great wealth, will need to labor to survive. But a case can be made that the main beneficiaries of a university focus on workforce readiness are businesses and the political right.

First, workforce readiness helps shift the cost of workforce training from businesses to students (and taxpayers). The old model was that universities sent students to their employers ready to learn the specifics of their jobs. This seemed a reasonable approach, as the specific skills needed varied with each job and could change over the four (plus) years required for a student to graduate. This is still true, which is why most businesses now want employees with experience—they have specific, current skills and the business does not need to spend resources to train them. My university has started requiring all majors to include an internship as an elective, which can benefit the students but will, one infers, provide businesses with free labor.

It is well worth considering some of the practical problems with trying to make students workforce ready. One concern is that education focused on workforce readiness can become obsolete: students take four-plus years to graduate, and it takes time for departments to update and implement curricula. There is also the obvious problem of trying to ready students for a diverse range of jobs requiring different skills and knowledge that previously required on-the-job training. Since philosophy majors can go on to jobs ranging from managing a business to being the vice president, it is not clear how one would make students workforce ready in a way that differs from the current approach to education.

My university is also embracing AI, which makes sense. However, readying students for the workforce in the age of AI presents a dilemma. If AI is a bubble that bursts, then getting students ready for an AI workforce that will not exist will leave them unprepared for the world that will be. But if AI is not a bubble (or is an enduring bubble), then we might be preparing them for jobs that AI will replace. The example of AI can be generalized into the workforce dilemma: If we do not prepare students for a specific job, they are not workforce ready and businesses will not want to hire them. Instead, businesses will continue the practice of hiring experienced workers. If we prepare students for a specific job, that job might not exist when they graduate or their skills might be obsolete. In pushing for workforce readiness we might find that we are abandoning an imperfect educational approach in favor of one that is even worse.

A second benefit of a focus on workforce readiness is that if it succeeds, it will decrease the value of labor. This, obviously, is a benefit for businesses and not students. This devaluing would arise from two factors. The first is positive: if the focus on workforce readiness creates more workers, then the value of each worker is thereby diminished—which benefits businesses. The second is negative: the effort to reduce or eliminate degrees and programs perceived as not focused on creating workforce-ready products for what will be the true consumer of education, the businesses. Success in reducing or eliminating such programs will benefit business and the political right. Students who would otherwise have entered these programs will probably end up getting workforce-ready degrees, thus increasing the workforce and decreasing the value of labor in these areas. The areas targeted for reduction or elimination often produce graduates who are critical of the harmful practices of businesses (such as exploiting labor, polluting the environment, and producing harmful products and services). Hence, thinning their numbers is advantageous. These graduates are also often critical of racism, sexism, inequality, fascism, authoritarianism, and other such evils, which tends to put them at odds with the political right (who also tend to favor business).

As a philosopher, I unsurprisingly think that education can have intrinsic value. You know, the idea of the examined life and all that stuff. However, there are also practical reasons to be concerned. While a focus on workforce readiness might yield short-term benefits, there are long-term harms to consider. After all, as fans of Western civilization themselves love to point out, the old universities have been critical in making this civilization, its economy, and its technology possible—and this goes back to Plato’s Academy. There is also the very practical concern, noted above, that workforce readiness might simply not work—especially given the uncertainty about AI. In closing, while I do understand why businesses want to shift training costs onto students and taxpayers (as many of them have shifted costs by exploiting the SNAP and welfare systems), this is unethical. Businesses should pay to train the workers who will provide them with their profits. They have the resources to do so and, from a practical standpoint, they would be the best at providing the very specific and most current skills needed for their very specific jobs.

While philosophy is about inquiry and students should ask questions, there was a question I hoped students would not ask. That question was “do I need the book?” In some cases, this question arose from the challenge of limited finances. In other cases, it arose from a profound hope to avoid the pain of reading philosophy.

My answer was always an honest “yes.” As opposed to a dishonest “yes.” I must confess that in years gone by I heard the whispers of the Book Devil trying to tempt me to line my shelves with desk copies or, even worse, get free books to sell to the book buyers. In the before time, publishers often sent free copies to professors. Those days have passed.

But I always resisted the temptation. My will was fortified by memories from my student days of buying expensive books we never used. Even though the books for my courses were truly required and I sought the best books for the lowest costs, students still lamented my cruel practice of requiring books.

Moved by their suffering, I found a solution in technology. Since most of the great (and not-so-great) philosophers are not only dead but really, really dead, their works are usually in the public domain. This allowed me to assemble free texts for most of my classes. These were first distributed via 3.5 inch floppies (kids, ask your parents about these), then via the internet. While I could not include the latest and (allegedly) greatest contemporary philosophy, these free digital books are as good as most of the expensive offerings. The students are, I am pleased to say, happy that the books they will not read will not cost them a penny. Yes, sometimes students ask, “do I have to read the book?” I, of course, say “yes.” We smile and pretend that they will read the book.

As I make a point of telling the students on day one that the book is a free PDF file, I rarely hear “do I need to buy the book?” Now students ask, “do I have to come to class?” I must take some of the blame for this: thanks to COVID, my classes are designed so all the coursework can be completed online via Canvas. Technology is thus shown, once again, to be a two-edged sword: it solved the “do I have to buy the book?” problem but helped create the “do I have to come to class?” problem.

When I was first asked this, I remember feeling a bit annoyed by the question. After all, the question seems to imply that the student is thinking: “I have nothing to learn from you, but I don’t want to fail.” Honesty compels me to admit that a student might have nothing to learn from me. After all, there are arguments that philosophy is useless and presumably not worth learning. Also, things like logic, critical thinking, and ethics could be worthless—after all, some people seem to do just fine without them. Some even manage to hold high offices and accumulate fortunes without any of these. I could also be useless in particular.

After overcoming my initial annoyance, I applied some philosophical thought to the matter. As with the “do I have to buy the book?” question, there could be a good reason for asking. Perhaps the student needs the time that would otherwise be spent in my class to do things for other classes, or needs the time to work to earn money to pay for school.

Out of curiosity, I created an anonymous survey to see what the students would say. 28.8% claimed work was the primary reason they missed class, 15% claimed that being able to turn in work online was the reason they skipped class, and 6% claimed they needed to spend time on other classes. These were the top three reasons.

While the survey was anonymous, respondents might be inclined to select the answer that seems the most laudable reason to miss class. That said, these results are plausible. One reason is that many of my students are from low-income families and often need to work to pay for school. Another reason is that I routinely overhear students talking about their jobs and I sometimes even see students wearing their work uniforms in class.

While it might be suspected that my concern about attendance is a matter of ego, it is based on concern for my students. In addition to being curious about why students were skipping my class, I was also interested in why students failed my courses. Fortunately, I had objective data in the form of attendance records, grades, and coursework.

As would be expected, when I went through a few years of classes I found a correlation between missing class and failing grades. None of the students who failed had perfect attendance, and only 27% had better than 50% attendance. This was hardly surprising: students who do not attend class miss out on the lectures, the class discussion, and the opportunity to ask questions. To use the obvious analogy, these students are like athletes skipping practice. But it must also be noted that there are other factors that can cause students both to miss class and to do poorly, such as lack of interest and life problems.

Over the years I have tested a solution to this problem. Even before the pandemic, I created YouTube videos of one of my classes and put the links into Blackboard. Thanks to the pandemic, most of my classes now have “decent” videos of all the content. This allows students to view (or ignore) the videos at their convenience and skip or rewind as they desire. As might be suspected, the view counts are very low. However, some students have expressed appreciation for the availability of the videos. If the videos can reduce the number of students who fail by even a few each semester, then the effort will be worthwhile.

I also found that 67.7% of the students who failed did so because of failing scores on their work. While this might elicit a response of “duh,” 51% of those who failed did not complete the exams, 45% did not complete the quizzes, and 42% did not complete the paper. So while failing grades on the work were a major factor, simply not doing the work was a significant cause. Indeed, no student who failed my class had completed all the work, and this was part of the reason for the failure. While they might have failed the work even if they had completed it, failure was assured by not making the attempt.

My initial attempt at solving the problem involved having all coursework either on Blackboard or capable of being turned in via Blackboard. My obvious concern with this solution was the possibility that students would cheat. While there are some awkward and expensive solutions (such as video monitoring), I decided to rely on something I had learned about the homework assigned in my courses: despite having every opportunity to cheat, student performance on out-of-class work was consistent with their performance on monitored in-class work. It was simply a matter of designing questions and tests to make cheating unrewarding. The solution was easy: questions aimed at comprehension, a tight time limit on exams, and massive question banks to generate random exams. This approach worked for years: student grades remained very close to those from the days of proctored in-class exams and quizzes. On the plus side, there was an increase in the completion rate of the coursework. However, the increase was not as significant as I had hoped. Then AI arrived and enabled easy cheating on online quizzes and exams, creating a problem whose obvious solution seems to be a return to proctored in-class exams and quizzes.

To address the problem of uncompleted work, I decided to have generous deadlines. Originally, students got a month to complete the quizzes for a section. For exams 1-3 (which cover sections 1-3), students got one month after we finished a section to complete the exam. Exam 4’s deadline was the end of the last day of classes, and the final’s deadline was the end of the normally scheduled final exam time. The paper deadlines were unchanged from the pre-online days, although now students can turn in papers from anywhere with internet access and can do so around the clock. The main impact of this change was another increase in the completion rate of work, thus decreasing the failure rate in my classes. When COVID hit, I made the deadlines even more generous for exams and quizzes: students can complete these for full credit up until the last day of finals week. This increased the completion rates for the coursework and, I must say, removed much of the end-of-semester stress arising from addressing student grade crises.

As would be suspected, there are still students who do not complete all the work and fail much of the work they do complete. But the number of failing students has been reduced dramatically, and they are still learning. But, as noted earlier, the newest challenge is AI: while cheating has always been a problem, AI has obviously turbocharged this problem.

As a follow-up to the war on CRT (Critical Race Theory) and wokeness, the right has waged a largely successful war on DEI (Diversity, Equity and Inclusion). While I take a favorable view of DEI, I recognize that DEI efforts sometimes suffered from corruption and inefficiency. I also acknowledge (and criticize) that some of it was purely performative. This is to say that DEI efforts were just like other human efforts, which gives us no special reason to single them out for these failings. But these are flaws that should be addressed, whether they occur in DEI programs or in the operations of the Pentagon. Despite these flaws, there are good reasons in favor of DEI. And, of course, arguments against DEI.

One justification for DEI efforts is that they are supposed to offset past unfairness, discrimination, and injustice. That is, they are warranted on the moral grounds that they address past wrongs. A standard concern about this justification is that it can be seen as addressing past discrimination by engaging in present discrimination. As an illustrating anecdote, when I was applying for jobs during my last year of grad school, my fellow white male philosophers and I were worried that our chances of getting a job would be lower because schools appeared to be addressing past discrimination in hiring with what seemed to be present discrimination in hiring. That is, we white males of the (then) present would be sacrificed to atone for the sins of the white males of the past. While it is tempting to dismiss such concerns, there is a reasonable moral concern about fairness here. I recall that there were serious suggestions that the old white guys should step down to open more jobs for women and minorities. After all, to the degree that they “earned” their jobs because of past discrimination and exclusion, would it not be fair that they be the ones to pay the price demanded by justice? This approach and its consequences do raise moral concerns about individual justice and justice for groups. Being philosophers, we did consider that even if we, as individuals, were treated unfairly during the hiring process, this might still be morally justified. Those of us inclined to difficult self-reflection also considered that we might have been under the influence of racism and sexism in thinking that we might be treated unfairly simply in virtue of being white men. Because of my own experience, I can understand how people might feel about DEI. My considered view is that while there can be cases where white men are treated unfairly, concerns about addressing past wrongdoing remain morally relevant on utilitarian grounds. Also, virtue theory supports this: it is better to err on the side of addressing a greater injustice than to refuse to do so out of an exaggerated fear of the possibility of a lesser injustice.

A second reason in favor of DEI efforts is that they can address existing unfairness and discrimination. For example, funding programs for minority owned businesses can be seen as helping to offset the discrimination against minorities in the realm of finance. As another example, a scholarship for female students in the sciences can be seen as offsetting the bias against women in the sciences.

Such efforts can, of course, be interpreted as unfair. For example, a white business owner might argue that funding only available to minorities is unfair to her. As another example, a male student could contend that it is unfair that he cannot get the scholarship that a woman can. While such arguments can be made in good faith, they are often made in bad faith by people who know that, for example, white business owners are more likely to get loans than minority business owners (even when they are financially equal)—so white business owners already have an unfair advantage. Good faith reasoning requires that we consider the full context and not just take each alleged unfairness in ahistorical isolation. For example, in isolation it might seem unfair if funding or a scholarship were not available to everyone. But if one group already enjoys an unfair advantage, attempting to offset that helps restore fairness. Unfortunately, many unfair advantages are hidden and exposing them often requires good faith analysis and interpretation. To illustrate, banks obviously do not advertise special white-only rates for home loans, but these exist in practice. As such, explicit efforts to provide fair loans to minority home buyers can appear unfair, since they explicitly exclude while the exclusions in practice are usually concealed.

A third reason in favor of DEI efforts is that they can aim at allowing fair consideration of and opportunities for people who would otherwise be excluded. Going back to my example of academic hiring, academic philosophy was (and is) a mostly white male field and it took intentional effort for highly qualified women and minorities to even be considered for professorships. In the case of competitions for such things as jobs or scholarships, this approach increases fairness by preventing people from being excluded simply because of their race, gender, age, etc.

The usual criticism of this is that DEI efforts are not really aimed at providing equal consideration and fairness, but are intended to provide an unearned advantage to some people based on their identity. While such criticisms can be made in good faith, they are often made in bad faith based on racism and sexism. I will discuss this in my next essay in this series as I look at how the American right works to erase and whitewash history as part of its attack on DEI efforts.

And this, my friend, may be conceived to be that heavy, weighty, earthy element of sight by which such a soul is depressed and dragged down again into the visible world, because she is afraid of the invisible and of the world below-prowling about tombs and sepulchers, in the neighborhood of which, as they tell us, are seen certain ghostly apparitions of souls which have not departed pure, but are cloyed with sight and therefore visible.

—Plato, Phaedo


While ghosts have long haunted the minds of humans, philosophers have said relatively little about them. Plato, in the Phaedo, briefly discussed ghosts in the context of the soul. Centuries later, my “Ghosts & Minds” manifested in the Philosophers’ Magazine and then re-appeared in my What Don’t You Know? In the grand tradition of horror movie remakes, I have decided to re-visit the ghosts of philosophy and write about them once more.

The first step in ghostly adventures is laying out a working definition of “ghost.” In the classic tales of horror and in role-playing games such as Call of Cthulhu and Pathfinder, ghosts are undead manifestations of souls that once inhabited living bodies. These ghosts are incorporeal or, in philosophical terms, they are immaterial minds. In the realm of fiction and games, there is a variety of incorporeal undead: ghosts, shadows, wraiths, specters, poltergeists, and many others. I will, however, stick with a basic sort of ghost and not get bogged down in the various subspecies of spirits.

A basic ghost must possess certain qualities. The first is that a ghost must have lost its original body due to death. The second is that a ghost must retain the core metaphysical identity it possessed in life. That is, the ghost of a dead person must still be that person, and the ghost of a dead animal must still be that animal. This is to distinguish a proper ghost from a mere phantasm or residue. A ghost can, of course, have changes in its mental features. For example, some fictional ghosts become single-mindedly focused on revenge and suffer a degradation of their more human qualities. The third requirement is that the ghost must not have a new “permanent” body (this would be reincarnation), although temporary possession does not count against this. The final requirement is that the ghost must be capable of interacting with the physical world in some manner. This might involve being able to manifest to the sight of the living, to change temperatures, to cause static on a TV, or to inflict a bizarre death. This condition can be used to distinguish a ghost from a spirit that is in a better (or worse) place. After all, it would be odd to say that Heaven is haunted. Or perhaps not.

While the stock ghost of fiction and games is an incorporeal entity (an immaterial mind), it should not be assumed that this is a necessary condition for being a ghost. This is to avoid begging the question against non-dualist accounts of ghosts. Now that the groundwork has been put in place, it is time to move on to the ghosts.

The easy and obvious approach to the ghosts of philosophy is to simply stick with the standard ghost. This ghost, as noted above, fits in nicely with classic dualism. This is the philosophical view that there are two basic metaphysical kinds: the material stuff (which might be a substance or properties) and the immaterial stuff. Put in everyday terms, these are the body and the soul.

On this view, a ghost would arise upon the death of a body that was inhabited by a mind. Since the mind is metaphysically distinct from the body, it would be possible for it to survive the death of the body. Since the mind is the person, the ghost would presumably remain a person—though being dead might have some psychological impact.

One of the main problems for dualism is the mind-body problem, which vexed the dualist Descartes and his successors. This is the mystery of how the immaterial mind interacts with the material body. While this is mysterious, the interaction of the disembodied mind with the material world is not a greater mystery. After all, if the mind can work the levers of the brain, it could presumably interact with other material objects. Naturally, it could be objected that the mind needs a certain sort of matter to work with—but the principle of ghosts interacting with the world is no more mysterious than the immaterial mind interacting with the material body. And no less mysterious.

Non-dualist metaphysical views would seem to have problems with ghosts. One such view is philosophical materialism (also known as physicalism). Unlike everyday materialism, this is not a love of fancy cars, big houses and shiny bling. Rather, it is the philosophical view that all that exists is material. This view explicitly denies the existence of immaterial entities such as spirits and souls. There can still be minds—but they must be physical in nature.

On the face of it, materialism would seem to preclude the existence of ghosts. After all, if the person is their body, then when the body dies, that is the end of the person. As such, while materialism is consistent with corporeal undead such as zombies, ghouls, and vampires, ghosts would seem to be out. Or are they?

One approach is to accept the existence of material ghosts—the original body dies and the mind persists as some sort of material object. This might be the ectoplasm of fiction or perhaps a fine cloud. It might even be a form of energy that is properly material. These would be material ghosts in the material world. Such material ghosts would presumably be able to interact with the other material objects—though this might be limited.

Another approach is to accept the existence of functional ghosts. One popular theory of mind is functionalism, which seems to be the result of thinking that the mind is like a computer. For a functionalist a mental state, such as being afraid of ghosts, is defined in terms of the causal relations it holds to external influences, other mental states, and bodily behavior.  Rather crudely put, a person is a set of functions and if those functions survived the death of the body and were able to interact in some manner with the physical world, then there could be functional ghosts. Such functional ghosts might be regarded as breaking one of the ghost rules in that they might require some sort of new body, such as a computer, a house, or a mechanical shell. In such cases, the survival of the function set of the dead person would be a case of reincarnation—although there is certainly a precedent in fiction for calling such entities “ghosts” even when they are in shells.

Another option, which would still avoid dualism, is for the functions to be instantiated in a non-physical manner (using the term “physical” in the popular sense). For example, the functional ghost might exist in a field of energy or a signal being broadcast across space. While still in the material world, such entities would be bodiless in the everyday meaning of the term, and this might suffice to make them ghosts.

A second and far less common form of monism (the view that there is but one type of metaphysical stuff) is known as idealism or phenomenalism. This is not because the people who believe it are idealistic or phenomenal. Rather, this is the view that all that exists is mental in nature. George Berkeley (best known as the “if a tree falls in the forest…” guy) held to this view. As he saw it, reality is composed of minds (with God being the supreme mind) and what we think of as bodies are just ideas in the mind.

Phenomenalism would seem to preclude the existence of ghosts—minds never have bodies and hence can never become ghosts. However, the idealists usually provide some account for the intuitive belief that there are bodies. Berkeley, for example, claims that the body is a set of ideas. As such, the death of the body would be a matter of having death ideas about the ideas of the body (or however that would work). Since the mind normally exists without a material body, it could easily keep on doing so. And since the “material objects” are ideas, they could be interacted with by idea ghosts. So, it all works out with phenomenal ghosts.

While the classic werewolf is a human with the ability to shift into the shape of a wolf, movie versions often transform into a wolf-human hybrid. The standard werewolf has a taste for human flesh, a vulnerability to silver and a serious shedding problem. Some werewolves have impressive basketball skills, but that is not a standard werewolf ability.

There have been various efforts to explain the werewolf myths and legends. Some of the scientific attempts appeal to forms of mental illness or disease. On these accounts, the werewolf does not actually transform into a wolf-like creature but is an unfortunate person suffering from an affliction. These non-magical werewolves are possible but are more tragic than horrific.

There are also supernatural accounts for werewolves, and some involve vague references to curses. In many tales, the condition can be transmitted—perhaps by a bite or, in modern times, even by texting. These magical beasts are not possible unless this is, contrary to all evidence, a magical world.

There has even been speculation about future technology-based shifters—perhaps involving nanotechnology that can rapidly re-structure a living creature without killing it. But these would be werewolves of science fiction.

Interestingly enough, there could also be philosophical werewolves (which, to steal from Adventure Time, could be called “whywolves”) that have a solid metaphysical foundation. Well, as solid as metaphysics gets.

Our good dead friend Plato (who was probably not a werewolf) is known for his theory of Forms. According to Plato, the Forms are supposed to be eternal, perfect entities that exist outside of space and time. As such, they are even weirder than werewolves. However, they neither shed nor consume the flesh of humans, so they have some positive points relative to werewolves.

For Plato, all the particular entities in this imperfect realm are what they are in virtue of their instantiation of various Forms. This is sometimes called “participation”, perhaps to make the particulars sound like they have civic virtue. To illustrate this with an example, my husky Isis was a husky because she participated in the form of Husky. This is, no doubt, among the noblest and best of dog forms. Likewise, Isis was furry because she instantiated the form of Fur (and shared this instantiation with all things she contacted—such was the vastness of her generosity).

While there is some nice stuff here in the world, it is sadly evident that all the particulars lack perfection. For example, while Donald Trump’s buildings are clearly quality structures, they are not perfect buildings. Likewise, while he does have a somewhat orange color, he does not possess perfect Orange.

Plato’s account of the imperfection of particulars, like Donald Trump, involves the claim that particulars instantiate or participate in the Forms in varying degrees. When explaining this to my students, I usually use the example of photocopies of various quality. The original is analogous to the Form while the copies of varying quality are analogous to the particulars.  

Plato also asserts that particulars can instantiate or participate in “contrasting” Forms. He uses the example of how things here in the earthly realm have both Beauty and Ugliness, thus they lack perfect Beauty. To use a more specific example, even the most attractive supermodel still has flaws. As such, a person’s beauty (or ugliness) is a blend of Beauty and Ugliness. Since people can look more or less beautiful over time (time can be cruel), this mix can shift—the degree of participation or instantiation can change. This mixing and shifting of instantiation can be used to provide a Platonic account of werewolves (which is not the same as having a Platonic relation with a werewolf).

If the huge assumptions are made that a particular is what it is because it instantiates various Forms and that the instantiations of Forms can be mixed or blended in a particular, then werewolves can easily be given a metaphysical explanation in the context of Forms.

For Plato, a werewolf would be a particular that instantiated the Form of Man but also the Form of Wolf. As such, the being would be part man and part wolf. When the person is participating most in the Form of Man, then they would appear (and act) human. However, when the Form of Wolf became dominant, their form and behavior would shift towards that of the wolf.

Plato mentions the Sun in the Allegory of the Cave as well as the light of the moon. And it seems appropriate that the moon (which reflects the light of the sun) is credited in many tales with triggering the transformation from human to wolf. Perhaps since, as Aristotle claimed, humans are rational animals, the direct light of the sun means that the human Form is dominant. The reflected light of the full moon would, at least in accord with something I just made up, result in a distortion of reason and thus allow the animal Form of Wolf to dominate. There can also be a nice connection here to Plato’s account of the three-part soul: when the Wolf is in charge, reason is mostly asleep.

While it is the wolf that usually takes the blame for the evil of the werewolf, it seems more plausible that this comes from the Form of Man. After all, research shows wolves have been given a bad rap. So, whatever evil is in the werewolf comes from the human part. The howling, though, is all wolf.

As a gamer and horror fan I have an undying fondness for zombies. Years ago, I was intrigued by tales of philosophical zombies—I had momentary hope my fellow philosophers were doing something interesting. But, as is often the case, professional philosophers sucked the life out of the already lifeless. Unlike proper flesh devouring creations of necromancy or mad science, philosophical zombies are dull creatures.

Philosophical zombies look and act like normal humans but lack consciousness. They are no more inclined to seek the brains of humans than standard humans. Rather than causing the horror proper to zombies, philosophical zombies bring about a feeling of vague disappointment. This is the same sort of disappointment that readers in my age range might recall from childhood trick or treating when someone gave you pennies or an apple rather than candy.

Rather than serving as minions for necromancers or metaphors for vacuous and excessive American consumerism, philosophical zombies serve as victims in philosophical discussions about the mind and consciousness.

The dullness of current philosophical zombies does raise an important question—is it possible to have a philosophical discussion about proper zombies? There is also a second and equally important question—is it possible to have an interesting philosophical discussion about proper zombies? As I will show, the answers are “yes” and “obviously not.”

Since there is, at least in this world, no Bureau of Zombie Standards, there are many varieties of zombies. In my games and fiction, I generally define zombies in terms of beings that are biologically dead yet animated (or re-animated, to be more accurate). Traditionally, zombies are “mindless” or possess a very basic awareness that suffices to move about and seek victims.

In works of fiction, many beings called “zombies” do not have these qualities. The zombies in 28 Days Later are “mindless” but are still alive. As such, they are not really zombies—just infected people. The zombies in Return of the Living Dead are dead and re-animated but retain human intelligence. Zombie lords and juju zombies in D&D and Pathfinder are dead and re-animated but are also intelligent. In the real world, there are also what some call zombies. These are organisms taken over and controlled by another organism, such as an ant controlled by a nasty fungus.

To keep the discussion focused and narrow, I will stick with what I consider proper zombies: biologically dead, yet animated. While I generally take zombies to be unintelligent, I do not consider that a definitive trait. For folks concerned about how zombies differ from other animated dead, such as vampires and ghouls, the main difference is that stock zombies lack the special powers of more luxurious undead—they only have the same basic capabilities as the living creature (mostly moving around, grabbing and biting). In contrast, vampires are usually portrayed as super-powered undead.

One key issue about zombies is whether they are possible. There are various ways to “cheat” to create zombies—for example, a mechanized skeleton could be embedded in dead flesh to move about. This would make a rather impressive horror weapon. Another option is to have a corpse driven about by another organism—wearing the body as a “meat suit.” However, these would not be proper zombies since they are not self-propelling—just being moved about by something else.

In terms of “scientific” zombies, the usual approaches include strange chemicals, viruses, funguses or other such means of animation. Since it is well-established that electrical shocks can cause dead organisms to move, getting a proper zombie would seem to be an engineering challenge—although making one work properly could require “cheating” (for example, having computerized control nodes in the body that coordinate the manipulation of the dead flesh).

A traditional means of animating corpses is via supernatural means. In games like Pathfinder, D&D and Call of Cthulhu, zombies are animated by spells (the classic being animate dead) or by an evil spirit occupying the flesh. In the D&D tradition, zombies (and all undead) are powered by negative energy (while living creatures are powered by positive energy). It is this energy that enables the dead flesh to move about (and violate the usual laws of biology).

While the idea of negative energy is mostly a matter of fantasy games, the notion of unintelligent animating forces is not unprecedented in the history of science and philosophy. For example, Aristotle seems to have considered that the soul (or perhaps a “part” of it) served to animate the body. Past thinkers also considered forces that would animate non-living bodies. As such, it is easy enough to imagine a similar sort of force that could animate a dead body (rather than returning it to life).

The magic “explanation” is the easiest approach but is not really an explanation. It seems reasonable to think that magic zombies are not possible in the actual world—though all the zombie stories and movies show it is easy to imagine possible worlds inhabited by them.

The idea of a truly dead body moving around in the real world the way fictional zombies do seems implausible. After all, it seems essential to biological creatures that they be alive (to some degree) for them to move about under their own power. What would be needed is some sort of force or energy that could move truly dead tissue. While this is conceivable (in the sense that it is easy to imagine), it does not seem possible—at least in this world. Dualists might, of course, be tempted to consider that the immaterial mind could drive the dead shell—after all, this would only be marginally more mysterious than the ghost driving around a living machine. Physicalists, of course, would almost certainly balk at proper zombies—at least until the zombie apocalypse. Then they would be running.

There are many self-help books, but they all suffer from one fatal flaw: they assume the solution to your problems lies in changing yourself. This is a clearly misguided approach for many reasons.

The first is the most obvious. As Aristotle’s principle of identity states, A=A. Or, put in words, “each thing is the same with itself and different from another.” As such, changing yourself is impossible: to change yourself, you would cease to be you. The new person might be better. And, let’s face it, probably would be. But it would not be you. As such, changing yourself would be ontological suicide and you do not want any part of that.

The second is less obvious but is totally historical. Parmenides of Elea, a very dead ancient Greek philosopher, showed that change is impossible. I know that “Parmenides” sounds like cheese, perhaps one that would be good on spaghetti. But, trust me, he was a philosopher and would make a poor pasta topping. Best of all, he laid out his view in poetic form, the most truthful of truth conveying word wording:

 

How could what is perish? How could it have come to be? For if it came into being, it is not; nor is it if ever it is going to be. Thus coming into being is extinguished, and destruction unknown.

 

Nor was [it] once, nor will [it] be, since [it] is, now, all together, / One, continuous; for what coming-to-be of it will you seek? / In what way, whence, did [it] grow? Neither from what-is-not shall I allow / You to say or think; for it is not to be said or thought / That [it] is not. And what need could have impelled it to grow / Later or sooner, if it began from nothing? Thus [it] must either be completely or not at all.

 

[What exists] is now, all at once, one and continuous… Nor is it divisible, since it is all alike; nor is there any more or less of it in one place which might prevent it from holding together, but all is full of what is.

 

And it is all one to me / Where I am to begin; for I shall return there again.

 

That, I think we can all agree, is completely obvious and utterly decisive. Since you cannot change, you cannot self-help yourself by changing. That is just good logic. I would say more, but I do not get paid by the word to write this stuff. I do not get paid at all.

But, obviously enough, you want to help yourself to a better life. Since you cannot change and it should be assumed with 100% confidence that you are not the problem, an alternative explanation for your woes is needed. Fortunately, the problem is obvious: other people. The solution is equally obvious: you need to get new people. Confucius said, “Refuse the friendship of all who are not like you.” This was close to the solution, but if you are annoying or a jerk, being friends with annoying jerks is not going to help you. A better solution is to tweak Confucius just a bit: “Refuse the friendship of all who do not like you.” This is a good start, but more is needed. After all, it is obvious that you should just be around people who like you. But that will not be totally validating.

The goal is, of course, to achieve a Total Validation Experience (TVE). A TVE is an experience that fully affirms and validates whatever you feel needs to be validated at the time. It might be your opinion about Mexicans or your belief that your beauty rivals that of Adonis and Helen. Or it might be that your character build in World of Warcraft is fully and truly optimized.

By following this simple dictate “Refuse the friendship of all who do not totally validate you”, you will achieve the goal that you will never achieve with any self-help book: a vast ego, a completely unshakeable belief that you are right about everything, and all that is good in life. You will never be challenged and never feel doubt. It will truly be the best of all possible worlds. So, get to work on surrounding yourself with Validators.  What could go wrong? Nothing. Nothing at all.

As this is being written, people are fleeing wars, crime and economic woes around the world. As with past exoduses, some greet the refugees with kindness, some with indifference and some with hate. As a philosopher, my main concern is with the ethics of obligations to refugees.

One approach is to apply the golden rule—to do unto others as we would have them do unto us. While most readers are probably living lives of relatively good fortune, it is easy to imagine one’s life falling apart due to war or other disaster, human made or natural. In such circumstances, a person would almost certainly want help. As such, if the golden rule has moral validity, then help should be rendered to refugees.

One objection is that people should solve their own problems. In the case of a nation at war, it could be contended that people should stay and fight. Or, at the very least, they should not expect others to do their work for them. In the case of those trying to find a better life elsewhere, it could be argued that they should remain in their home countries and build a viable economy. These are, of course, variations of “pull yourself up by your own bootstraps.”

One could also advance a house analogy. Imagine, if you will, that the neighbors down the road are fighting among themselves and wrecking their house. Some of them, tired of the conflict, show up at your door and insist that you put them up and feed them. Though it might be kind of you to help them, it could also be said that they should put their own house in order. After all, you have managed to keep your house from falling into chaos and they should be able to do the same. There is also the concern that they will wreck your house as well.

This analogy assumes that the fighting and wrecking began in the house down the road and that no outsider contributed to starting the trouble. If, for example, people were put arbitrarily into houses and subject to relentless outside interference, then the inhabitants would not bear full responsibility for their woes—so the problems they would need to solve would not be entirely their own. This would seem to provide a foundation for an obligation to help them, at least on the part of those who helped cause the trouble they face.

If, as another example, the house was invaded from the outside, then that would also change matters. In this case, the people fleeing the house would be trying to escape criminals and it would certainly be a wicked thing to slam the door in the face of victims of crime.

As a final example, if the head of the household was subjecting the weaker members of the household to domestic abuse, then it would also change the situation in relevant ways. If abused people showed up at one’s door, it would be heartless to send them back to be abused.

Interestingly, the house analogy can also be repurposed into a self-interest argument for taking in refugees. Imagine, if you will, a house of many rooms that were once full of people. Though the house is still inhabited, there are far fewer people and many of them are old and in need of care. There is much that needs to be done in the house, but not enough people to do it all.

Nearby are houses torn with violence and domestic abuse, with people fleeing from them. Many of these people are young and many are skilled in doing what needs to be done in the house of many rooms. As such, rational self-interest provides an excellent reason to open the doors and take in those fleeing. The young immigrants can assist in taking care of the native elderly and the skilled can fill empty jobs. In this case, acting in self-interest would coincide with doing the right thing.

There are, of course, at least two counters to this self-interest analogy. One is the moral problem of taking in people out of self-interest while letting the other houses fall into ruin. This does suggest that a better approach would be to try to bring peace to those houses. However, if peace is unlikely, then taking in those fleeing would seem to be morally acceptable.

Another is a practical concern—that some of those invited in will bring ruin and harm to their new house. While this fear is played up, the danger presented by refugees seems to be rather low—after all, they are refugees and not an invading army. That said, it is reasonable to consider the impact of refugees and to take due care in screening for criminals.


Back in 2015, then Republican presidential candidate Ben Carson took some heat for his remarks about Muslims. Donald Trump has helped feed the persistent, unfounded suspicion that President Obama is a secret Muslim. Some of the fine folks at Fox and other conservative pundits have an established history of what some critics regard as anti-Muslim bigotry.

As might be suspected, those accused of bigotry usually respond by claiming they are not bigots and assert they are telling the truth about Islam. There are claims that nearly all Muslims wish to impose Sharia law on America, that Islam (unlike any other faith) cannot become a part of American society, and that taqiyya allows Muslims a license to lie to achieve their goals. The assertion about taqiyya is especially useful to critics—any attempt by Muslims to refute accusations can be dismissed as falling under taqiyya.

It is not always clear if the bigotry expressed against Muslims is “sincere” bigotry or bad faith opportunism. While “honest” bigotry is bad enough, feeding the fires of hatred for gain is perhaps even worse. This sort of bigotry in politics is, obviously, nothing new.

Though I am not a Mormon, in 2011 I wrote a defense of Mitt Romney and Mormonism against accusations that Mormonism is a cult. I have also defended the claim that Mormonism is a form of Christianity. While the religious bigotry against Romney was not widespread, it was present and is like the bigotry against Muslims.

Another example of bigotry against a religion in America is the anti-Catholicism that was rampant before Kennedy became President. Interestingly, past accusations against American Catholics are mirrored in the current accusations against American Muslims—that a Catholic politician would be controlled by an outside religious power, that a Catholic politician would impose his religious rules on America and so on. As is now evident, these accusations proved baseless and Catholics are accepted as fit for holding public office. In fact, Catholics commonly hold office. Given that the accusations against Catholicism turned out to be untrue, it seems reasonable to consider that the same accusations against Islam are also untrue.

The bigotry against Muslims has also been compared to the mass internment of Japanese Americans during WWII.  In the case of Japanese Americans, the fear was that they would serve as spies and saboteurs for Japan, despite being American citizens. The reality was, of course, that Japanese Americans served America just as loyally as German Americans and Italian Americans.

While it is possible that Islam is the one religion that cannot become part of American society, history shows that claims that seem to be bigotry generally turn out to be just that. As such, it is reasonable to regard these broad accusations against American Muslims as unfounded bigotry.

Each of us has a hill that is life. I can see the hills of other people. Some are still populated, some still bear the footprints of a recently departed runner, and many are cold with long abandonment. While I can see these other hills, I can only run on my own and no one else can run mine. That is how it is, poetry and movies notwithstanding.  In truth, we all run alone.

I am (in fact and metaphor) a distance runner. Running the marathon and greater distances gave me a sneak peek at old age. I finished my first marathon at the age of 22, at the peak of my strength, crossing the line in 2:45. Having consulted with old feet at marathons, I knew that the miles would beat me like a piñata—only instead of candy, I would be full of pain. I hobbled along slowly for the next few days—barely able to run. But, being young, I was soon back up to speed, forgetting that brief taste of the cruelty of time. But time never forgets us, and I know that time will eventually run out of me.

We runners often obsess about numbers. We record our race times, our training distances and much more. While everyone is aware that the march of time eventually becomes a slide downhill, runners are forced to face the objective quantification of their decline. Though I started running in high school, I did not become a runner until after my first year as a college athlete in 1985 and I only started recording my run data back in 1987. I, with complete faith in my young brain, was sure I would remember my times forever.

My first victory in a 5K was in 1985—I ran an 18:20. My time improved considerably: I broke 18, then 17 and (if my memory is not a false one) even 16. Then, as must happen, I reached the peak of my running hill and the decline began. I struggled to stay under 17, fought to stay under 18, battled to stay below 19, and then warred to remain below 20. The realization of the damage done by time sank in when my 5K race pace was the same as the pace for my first marathon. Once, I sailed through 26.2 miles at about a 6:20 per mile pace. Now I cannot do that for even a mile. Another marker was when my 5-mile race time finally became slower than my 10K race time (33 minutes). Damn the numbers.
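For the numerically obsessed runners among my readers, the pace arithmetic above is easy to check. This is just a throwaway sketch (the function name and output format are my own invention, not anyone's official calculator):

```python
# Quick sanity check of race-pace arithmetic.

def pace_per_mile(total_minutes: float, miles: float) -> str:
    """Return average pace as an 'M:SS per mile' string."""
    minutes = total_minutes / miles
    whole = int(minutes)
    seconds = round((minutes - whole) * 60)
    if seconds == 60:  # handle rounding up to the next whole minute
        whole, seconds = whole + 1, 0
    return f"{whole}:{seconds:02d}"

# A 2:45 marathon is 165 minutes over 26.2 miles—roughly 6:18 per mile,
# which matches the "about a 6:20 per mile pace" above.
print(pace_per_mile(165, 26.2))        # → 6:18

# An 18:20 5K (about 3.107 miles) works out to just under 6:00 per mile.
print(pace_per_mile(18 + 20 / 60, 3.107))   # → 5:54
```

The numbers confirm the essay's claim: a 2:45 marathon really does demand a pace in the 6:18–6:20 per mile range, held for 26.2 miles.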

Every summer, I have returned to my hometown and run the routes of my youth. Back in the day, I would run 16 miles at a 7 minute per mile pace. Now I shuffle along. But dragging all those years will slow a man down. When I ran those old routes, I sped up when I hit the coolness of the pine forest—the years momentarily dropped away and I felt like a young man again. But, like the deerflies that haunt the woods, the years soon caught up and bit me. Unlike the deerflies, I cannot just swat them down. Rather, they are swatting me down and, like many a deerfly, I will eventually be crushed and broken by something greater than I. Someday, I will go out for a run and never come back, leaving my hill cooling in the frost of time. But until that day, the run goes on. And on.