When the culture war opened a gaming front, I began to see racist posts in gaming groups on Facebook and other social media. Seeing these posts, I wondered whether they were made by gamers who are racists, racists who game, or merely trolls (internet trolls, not D&D ones).

Gamers who are racists are actual gamers who also happen to be racists. Racists who game (or pretend to) do so as a means to recruit others into racism. While right-wing hate groups recruit video gamers, there seems to be no significant research into recruitment through tabletop games like D&D. My discussion does not require any racists who game; all that is needed is gamers who are racist. Unfortunately, you can easily find them on social media.

An easy way to summon racists is to begin a discussion of diversity in gaming or to mention the revised 2024 rules. But surely there are non-racists who disagree with diversity in gaming and the changes WotC has made in the 2024 rules? Is it not hyperbole and a straw man to cast all critics of diversity as racists? This is a fair and excellent point: to assume every critic of diversity and the game changes is a racist would be bad reasoning. But while some racists are openly racist, others use stealth. They advance arguments that seem reasonable and non-racist while occasionally letting a hint of racism show through. But not so much racism that it cannot be plausibly denied.

There is also another problem: the honest non-racist critic and a stealthy racist will often advance the same arguments. So, what is the difference, other than the racism? The answer is that the critic is arguing in good faith while the racist is arguing in bad faith.

As a philosopher, I will start with the philosophical definition of an argument. In philosophy, an argument is a set of claims, one of which is supposed to be supported by the others. There are two types of claims in an argument. The first type of claim is the conclusion.  This is the claim that is supposed to be supported by the premises. A single argument has one and only one conclusion, although the conclusion of one argument can be used as a premise in another argument.

The second type of claim is the premise. A premise is a claim given as evidence or a reason to logically accept the conclusion. Aside from practical concerns, there is no limit to the number of premises in an argument. When assessing any argument there are two factors to consider: the quality of the premises and the quality of the reasoning. The objective of philosophical argumentation is to make a good argument with true (or at least plausible) premises. Roughly put, the goal is to reach the truth.

Philosophical argumentation is different from persuasion, as the goal of persuasion is to get the audience to believe a claim whether it is true or false. As Aristotle noted, philosophical argumentation is weak as a means of persuasion. Empty rhetoric and fallacies (errors in reasoning) have greater psychological force (though they lack all logical force). The stage is thus set to talk about bad faith.

The foundation of arguing in good faith is the acceptance of the philosophical definition of argument: the goal is to provide plausible premises and good reasoning to reach the truth. This entails that the person must avoid intentionally committing fallacies, knowingly making false claims, and misusing rhetoric. A person can, of course, still employ persuasive techniques. Good faith argumentation does not require debating like a stereotypical robot or being dull as dust. But good faith argumentation precludes knowingly substituting rhetoric for reasons. A person can, in good faith, argue badly and even unintentionally commit fallacies; making a bad argument does not by itself show bad faith. A person can, obviously, also make untrue claims when arguing in good faith. But as long as these are errors rather than lies and the person put in effort to check their claims, they can still be arguing in good faith.

Arguing in good faith also requires that the person be honest about whether they believe their claims and whether they believe their reasoning is good. A person need not believe what they are arguing for, since a person can advance an argument they disagree with as part of a good faith discussion. For example, I routinely present arguments that oppose my own views when I am doing philosophy.

One must also be honest about one’s goals when arguing in good faith. To illustrate, a critic of changes to D&D who is open about their belief that the changes are detrimental to D&D would be acting in good faith. A racist who argues against changes in D&D hoping to lure people into racism while concealing their motives would be arguing in bad faith. As one would suspect, a clever racist will conceal their true motives when trying to radicalize the normies. There is also the possibility that a person is trolling. But if someone is trolling with racism, it does not matter that they are a troll, for they are still doing the racists’ work.

While there are objective methods for sorting out the quality of arguments and the truth of claims, determining motives and thoughts can be hard. As such, while I can easily tell when someone is committing an ad hominem fallacy, I cannot always tell when someone is engaged in bad faith argumentation. This is more in the field of psychology than philosophy as it involves discerning motives and intentions. However, sorting out motives and intents is something we all do, and we can divine from a person’s actions and words what their motives and intents might be. But we should use caution before accusing someone of arguing in bad faith and this accusation certainly should not be used as a bad faith tactic. To use accusations of bad faith as a rhetorical device or an ad hominem would be bad faith argumentation and would, of course, prove nothing. But why should people argue in good faith?

There are two broad reasons why people should do so. The first is ethical: arguing in good faith is being honest and arguing in bad faith is deceitful.  Obviously, one could counter this by arguing against honesty and in favor of deceit. The second is grounded in critical thinking: bad faith argumentation generally involves bad logic, untruths, and a lack of clarity. As such, arguing in good faith is ethical and rational. Bad faith argumentation is the opposite. Why, then, do people argue in bad faith?

One reason is that bad faith reasoning can work well as persuasion. If one rejects truth as the goal and instead focuses on winning, then bad faith argumentation would be the “better” choice. 

A second reason is that a person might risk harm, such as social backlash, for arguing their views in good faith. In such cases, hiding their views would be prudent. As a good example, a person who wants to get people to accept human rights in a dictatorship might argue in bad faith, hoping to “trojan horse” people into accepting their views. If they openly argued for human rights, they would risk being imprisoned or killed. As an evil example, a racist might argue in bad faith, hoping to “trojan horse” people into accepting their views. If they were openly racist in a D&D Facebook group, they would face censure and might be kicked out of the group. So arguing in bad faith is the only way they can poison the group from the inside.

A third reason is that bad faith reasoning can lure people down a path they would not follow if it were honestly labeled. Such a use does raise moral questions; some might advance a utilitarian argument to defend its use for good, while others might condemn such deceit even if it is allegedly in service of a good end.

In the next essay I will look at some arguments against some of WotC’s policies that can be made in good or bad faith.

A few years ago, the owners of D&D, Wizards of the Coast, issued an article on diversity. In the previous essay, I advanced two arguments in defense of some of what Wizards proposed. One is the utilitarian argument stolen from Plato that harmful aspects of art can harm a person’s character and could increase their chances of behaving badly in the real world. The second is a Kantian-style argument that it does not matter whether immoral content causes harm; what matters is that the content is immoral. I ended the essay by noting an obvious concern with my argument: the same reasoning would seem to apply to two core aspects of D&D, killing and looting.

As an aside, I lived through the Satanic Panic D&D faced in the 1980s. The argument against D&D was like Plato’s argument but with a Christian modification that D&D would lead people to Satanism and other cults. Like most other moral panics from the right, this was debunked long ago. Now back to killing and theft.

Using Plato’s argument as a template, it is easy to argue that violence and looting should be removed from D&D: engaging in fictional violence and theft could corrupt people and make them more likely to behave badly in real life. I can also reuse the Kantian argument: even if hacking up dragons and looting their hoards had no impact on people, allowing the immoral content of killing and stealing would be immoral. This would allow for an argument from analogy: if D&D should be cleansed of racist elements for moral reasons, then it should also be cleansed of violence and theft on moral grounds. There are two main options in terms of where this reasoning should take us.

The first is to accept the analogy and agree D&D should also be cleansed of violence and theft. This would radically change the game, although some people have run violence-free campaigns. The second is to take this analogy as a reductio ad absurdum of the original argument. If using the same logic (what is known as parity of reasoning) leads to an absurd conclusion, then this can be taken as refuting arguments with the same logic. A well-known example of this in philosophy is Gaunilo’s reply to St. Anselm’s ontological argument.

Since D&D is inherently a game of combat and looting, it would be absurd to remove these elements. This would be analogous to removing cars from NASCAR. Since the violence argument is reduced to absurdity, the diversity argument is absurd as well. D&D should remain unchanged: killing, looting and no diversity changes. While this line of reasoning is appealing, it can be challenged.

For this reasoning to be good, fictional violence and theft must be analogous to fictional racism within the game. Interestingly, someone agreeing with this reasoning would need to agree that racism, killing, and looting are all bad but should not be removed from the game. Someone who thinks that racism, killing, and looting are all morally fine would not need to make the absurdity argument. They could just argue there is no moral reason to remove any of these from the game. So, can a person believe that killing, stealing, and racism are bad, consistently support diversity on moral grounds, and still allow in-game killing and looting? The answer is “yes,” and supporting this requires arguing that the analogy between killing and racism breaks down.

The obvious way to do this is to point out a relevant difference between racism and killing: while racism seems to always be wrong, there are arguments that support morally acceptable violence. These include such things as Locke’s moral argument for self-defense and centuries of work in just war theory. In contrast, there seem to be no good forms of racism or cases in which racism is morally defensible. While someone might use violence for self-defense against a wrongful attack and be morally justified, there seem to be no cases of racism in self-defense, no situation in which one must use morally acceptable racism to protect oneself against wrongful racism. Likewise, there is no body of ethics that constitutes just racism theory. To be fair to the racists, they could argue in favor of the ethics of racism, and I certainly invite good faith efforts to publicly make such a case.

Because there are moral distinctions in violence, D&D could include ethical violence with no moral problem. It would not be corrupting, nor would it be inherently evil. In D&D people typically play heroes doing heroic deeds such as fighting evil foes and looting their foes to continue their heroic efforts. There are, however, three obvious counters to this argument.

One is that there are arguments that violence is always wrong, and one could be a moral absolutist about violence. If violence is always wrong, then it would be wrong to include it in D&D. While not without its problems, pacifism is a coherent moral view and would certainly make D&D morally problematic if it were correct.

The second is that people play non-good and even evil characters in D&D who engage in evil acts of violence. I have played evil characters myself, my favorite being my delusional anti-paladin D’ko. One could argue that playing evil PCs would be immoral. The obvious reply is that if one is playing the role and it is not impacting the person, then there would be no moral problem: no one is being harmed, and the evil deeds are fictional. If someone were to get into the role too much and engage in behavior that did hurt other people, then that would be wrong, as real harm would be done. This could even be harm done at the table. For example, a player who has their character rape defeated foes and graphically describes this to the other players could be doing real harm. Also, a Kantian might disagree about the distinction between fictional and real evil and argue that to will evil even in play would still be evil.

The third is that even in games where all the PCs are good (or at least not evil), the DM must take on the role of any evil NPCs the players interact with and engage in fictional acts of evil. As such, it would seem hard to avoid including unjust violence in D&D. From a utilitarian perspective, this would be morally acceptable if the fictional violence did no harm, either in terms of corrupting people or inflicting suffering on those involved. Again, a Kantian approach might forbid even harmlessly playing an evil being as a DM but some Kantians are notorious as killjoys.

As my closing argument, I contend there is a meaningful distinction between playing an evil character doing evil acts of fictional violence and having the game content mirror the racism of the real world. To use an obvious analogy, this is the distinction between an actor playing the role of a racist in a movie and knowingly acting in a movie that serves as racist propaganda.  As such, D&D can retain violence, and players can play evil characters (within limits) while avoiding moral harms. But the racism should certainly go.

A few years ago, Wizards of the Coast (WotC), who own Dungeons & Dragons, issued a statement on diversity. As would be expected, the responses split along ideological lines and the culture war continues to this day. The D&D front of the culture war is personal for me. I started playing D&D in 1979 and have been a professional gaming writer since 1989. This ties me into the gaming aspect of the war. I am also a philosophy professor, so this ties me into the moral and aesthetic aspects of this fight.

The statement made by WotC has three main points. The first addresses race in the real world. The second addresses the portrayal of fictional races, such as orcs and drow, within the game. The third addresses racism from the real world within the game, with the example of how a Romani-like people were portrayed in the Curse of Strahd. In this essay I will focus on the in-game issues.

Before getting to the in-game issues, I will pre-empt some of the fallacious arguments. While it is tempting to use straw man attacks and hyperbole in this war, WotC cannot prevent gamers from doing as they wish in their own games. If you want your orcs to be evil, vegans, mathematicians or purple, you can and there is nothing WotC or Hasbro can do. Any change of WotC policy towards D&D races (or species) only applies to WotC. As such, the only censorship issue applicable here is self-censorship.

As always in the culture war, there were (and are) ad hominem attacks on folks at WotC. Most of these attribute “wicked” motives to them and take these alleged motivations as relevant to the correctness of their claims. In some cases, the criticism is that WotC is engaged in “woke marketing” to sell more products. While this can be evaluated as a business strategy, it proves nothing about the correctness of their position. In other cases, those at WotC have been accused of being liberals who are making things soft and safe for the dainty liberal snowflakes. This is also just an ad hominem and proves nothing. One must engage with the actual claims rather than flail away with insults.

To be fair, one can raise legitimate questions about the ethics of the folks at WotC: their motives do matter when assessing them as people. If this is merely cynical snowflake marketing, then they could be criticized as hypocrites. But their motives are still irrelevant to the assessment of their position and plans. It is to this that I now turn.

While the Monster Manual from AD&D does allow for monsters to differ in alignment from their standard entries in the book, many fictional races in the game have long been presented as “monstrous and evil.” These famously include orcs and the drow (a type of elf). The concern expressed by WotC is that the descriptions of these fictional races mirror the way racism manifests in the real world. Their proposed fix was to portray “all the peoples of D&D in relatable ways and making it clear that they are as free as humans to decide who they are and what they do.” In the case of real-world racism manifesting in their products, such as the depiction of a fictional version of the Romani, they plan to rewrite some older content and ensure that future products are free of this sort of thing. These changes raise both moral and aesthetic concerns.

One way to defend the traditional portrayal of fictional races in D&D is, obviously enough, to appeal to tradition. Since Tolkien, orcs have been portrayed as evil. Since the G and D series of modules, D&D drow have been evil. The obvious problem with this defense is that the appeal to tradition is a fallacy, one I have addressed at length in other essays.

Another way to defend the idea that some fictional races are inherently evil (or at least almost always evil) is to use in-game metaphysics. Until recently, good and evil were objective aspects of the standard D&D world. Spells could detect good and evil, holy and unholy weapons inflicted damage upon creatures of opposing alignments, and certain magic impacted creatures based on their alignment. Demons and devils are, by their nature, evil in classic D&D. Angels and other celestials are, by nature, good in classic D&D. While alignment does have some role in D&D 5E, this role is minuscule by comparison.

In most D&D worlds, gods of good and evil exist and certain races were created by such gods. For example, the elves have mostly good deities, with the most obvious exception being the goddess Lolth, the Queen of the Demonweb Pits. As such, the notion of races that are predominantly evil or good makes sense in such game worlds. As good and evil are metaphysically real, creatures could be imbued by divine and infernal powers with alignments.

While this defense does have its appeal, it raises an obvious concern: in the real world, people defend real racism with appeals to good and evil. They invoke creation stories to “prove” that certain people are better and others inferior. As the folks at WotC note, fantasy worlds often mirror the racism of the real world.

One reply to such concerns is to point out that most people can distinguish between the fictional world of D&D and the real world. Casting orcs and drow as evil and monstrous, even using language analogous to that used by racists in the real world, is nothing to be concerned about because people know the difference. The player who curses the “foul green skins” in game will not thus become a racist in the real world and curse the “wicked whites.” Thus, one might conclude, WotC stands refuted. There is, however, an ancient philosophical counter to this reply.

In the Republic, Plato presents an argument for censorship based on the claim that art appeals to the emotions and encourages people to give in to these emotions. Giving way to these emotions is undesirable because it can lead to shameful or even dangerous behavior. On his view, viewing tragic plays might lead a person to give in to self-pity and behave poorly. Exposure to violent art might cause a person to yield more readily to the desire to commit violence. While Plato does not talk about racism (because the ancients had no such concept), his argument would apply here as well: engaging in fictional racism can lead people to racism in the real world. As such, Plato would presumably praise WotC for this action.

At this point it is reasonable to bring up the obvious analogy to video games. While the power of video games to influence ethics would seem to be an empirical matter, the current research is inconclusive because the “…evidence is all over the place.” It currently comes down to a matter of battling intuitions regarding their power to influence, so I will turn to Plato’s most famous student.

As Aristotle might say, players become habituated by their play.  This includes not just the skills of play but also the moral aspects of what is experienced in play. This, no doubt, is weaker than the influence of the habituation afforded by the real world. But to say that D&D games with moral components have no habituating influence is analogous to saying that video games with hand-eye coordination components have no habituating impact on hand-eye coordination beyond video games. One would have to assert players learn nothing from their hours of play, which seems unlikely.

I am not claiming that D&D takes control of the players in a Mazes and Monsters scenario, just that experiences shape how we perceive and act, something that is obviously true. So, I do not think that playing in D&D games that cast orcs and drow as monstrous, or even games that mirror real-world racism, would make players into white supremacists. Rather, I agree with the obvious claim: our experiences influence us, and getting comfortable with fictional racism makes it slightly easier to get comfortable with real-world racism.

For those who prefer Kant, one could also advance a Kantian-style argument: it does not matter whether the in-game racism that mirrors real-world racism has an impact on people’s actions; what matters is whether such racism is wrong or right in and of itself. If racism is wrong, then even fictional racism would thus be wrong.

As someone who regularly games, I can see the obvious danger in the arguments I have just advanced: would not the same arguments apply to a core aspect of D&D, namely the use of violence? I will address these matters in the next essay.

The appeal to tradition assumes a key part of what makes a belief or practice true or correct is its age; that is, it is old enough to be a tradition. If defenders of tradition accepted as correct the oldest beliefs and practices they could find, there would be no need to sort out which traditions to accept beyond determining which are the oldest.

But those making the appeal rarely use it to defend ancient beliefs or practices. For example, while American defenders of “traditional” gender roles often hearken back to their perception of a past, they do not draw their traditions of sexual roles from ancient Greece. This is not surprising: though these traditions are ancient, they are not consistent with the values presented as traditional by modern American conservatives. Since defenders of tradition do not follow the “oldest is best” principle, they need another standard for selecting their traditions, some principle other than time.

This leads to an obvious dilemma: if time is the determining factor for what is best, then they would need to embrace the oldest practices and beliefs they can find. If there are other factors, then there would be no need to appeal to tradition. They could just use these other factors to defend their beliefs and practices. The first option is absurd; the second makes referring to tradition pointless, except as a fallacy or rhetorical device. While time is a problem for the defenders of tradition, there are analogous problems.

While time is obviously a factor in traditions, one also must consider geography. For example, Christian Americans who appeal to tradition when defending religious values do not embrace the traditions of China, Persia (now Iran), or India. They focus on the United States and Europe. Not only that, but they must also focus on specific groups within those geographic locations. After all, there are diverse traditions within even one American state, city, town, or family. Those making an appeal to tradition would need to make a principled argument, giving reasons why the traditions should (in addition to being from a specific time) also come from a specific location and a specific group. As has been argued, if reasons can be advanced, then there is no need to appeal to tradition. One could just rely on those reasons.

One interesting approach is to embrace a form of relativistic traditionalism: the best traditions are the traditions of one’s own culture. This, obviously enough, would include a form of moral relativism: what is morally right or wrong is relative to the culture. Equally obviously, it would run into the usual problems for moral relativism. One is that relativism entails that a person has no logical reason to accept the values of their culture as correct. That my culture accepts X gives me only practical reasons to follow X, such as avoiding being harmed or seeking praise. After all, if I believe that relativism is true, I also believe that all cultural values are equally good (or bad).

Another is that relativism ends up collapsing into subjectivism: I can form a culture of one and thus make my values the correct values, and so can everyone else. And I can change my values. This clearly collapses into nihilism: there would be no values in any meaningful sense. In the case of relativism about beliefs, there would be the usual problems for embracing relativism about truth, ones addressed at length by other philosophers, such as Plato. As such, the relativism approach would end in failure.

If an appeal is made to objective truth and objective values, then invoking tradition would be an error: there are many traditions that are inconsistent and even contradictory and so they cannot all be correct. What would be needed would be arguments to show which beliefs and practices among traditions are correct (if any).  Why, then, do people use the appeal to tradition?

One obvious reason is that it can work. Fallacies often have far more persuasive power than good logic. Invoking tradition can also serve as a rhetorical device: people often like traditions, and calling something traditional can give that thing a positive feeling, though it would obviously be unearned unless reasons were given as to why it is true or good. Traditions can also often be comforting and give people a sense of stability and security, so calling something traditional can invoke those feelings. But, once again, as a mere rhetorical device this is unearned.

In closing, I do not think that something must be bad because it is traditional. I have traditions I like precisely because they are pleasant and comforting, but I know that appealing to tradition proves nothing.

The previous essay discussed the family of fallacies that include the appeal to tradition. In this essay I will discuss the test of time and the origin problem. As noted in the previous essay, the gist of the appeal to tradition is that it involves fallaciously inferring that something is correct or true simply because it is a tradition. While concluding that something is correct or true merely because it has been done or believed a long time is an obvious error, those making an appeal to tradition often try to invoke the notion of the test of time. In some cases, the appeal to the test of time is implied while in others it is explicitly made. The appeal to the test of time can be presented as the following argument:

 

Premise 1: X has withstood the test of time.

Conclusion: X is true, right, or correct, etc.

 

While this might be a fallacy, this depends on what the test of time is taken to be. If the test of time is just that X has been believed or practiced for a long time, then this is the appeal to tradition fallacy all over again. False beliefs can persist for centuries as can awful practices, so mere historical longevity does not suffice as evidence of truth or goodness.

The test of time can be defined in terms of actual testing. In the case of a belief, it could be argued that the belief has been subject to repeated assessment for a long time using rigorous methods and thus has passed the test of time. While such testing over time would be good evidence for a belief, it still does not mean that the belief should be accepted as true because it is a tradition. Rather, it should be accepted as true because of the evidence found during the repeated testing. As such, if a belief has passed this sort of test of time, then there should be a significant body of evidence to back up the belief and there would be no reason to make a mere appeal to tradition. One could just make an argument using the evidence from these tests as premises.

There are numerous examples of beliefs that have been tested over time, such as the belief that the earth orbits the sun, that fire burns, that the appeal to tradition is a fallacy, and that smoking damages your health. There can, of course, be meaningful debate over even well-supported beliefs.

The same approach can be taken with practices: if a practice has been rigorously assessed over time, then there should be evidence supporting the correctness or goodness of the practice, and thus there would be no reason to rely on a mere appeal to tradition. For example, the practice of good hygiene has been assessed over time and has been found useful. Well-supported practices are still subject to debate, and practices that involve value judgments (such as in law, ethics, and religion) are matters of great dispute.

Using the test of time approach creates a dilemma: if the test of time is just another expression for tradition, then it is just an appeal to tradition. If the test of time involves rigorous testing, then there is no need to appeal to tradition, there would be good evidence and arguments to use instead. One thing that those who use the test of time approach must admit is that this test must have had a starting point. That is, every belief or practice that is defended as traditional must have an origin and this leads to the origin problem.

The origin problem for tradition is that when the tradition was new it could obviously not be defended by appealing to tradition (or the test of time). The obvious question to ask about the origin of a now traditional belief or practice is “what made it better than the alternatives then?”

The answer to this question should still be applicable today, though it might need to be modified to account for changes over time. As such, a fair response to an appeal to tradition is to engage in some mental time travel and ask the appealer why anyone should have accepted the belief or practice before it became a tradition. For example, if someone is appealing to tradition to defend what they see as traditional values, it is reasonable to trace them back to their origin and inquire about what made them better at that time and why they are the best today. Obviously on day zero of a tradition there can be no appeal to tradition. If no reasons can be advanced as to why it was better then, merely saying it is a tradition now provides no reasons to support its alleged truth or correctness.

The appeal to tradition is a popular and traditional fallacy. During the last debate over same-sex marriage, this fallacy was one of the core “arguments” used by those defending “traditional” marriage. It is still commonly used to defend “traditional” gender roles and “traditional” religious values. The most obvious problem with this approach to argumentation is that it involves a fallacy, a bad argument in which the premise(s) fail to logically support the conclusion. As to why people would use a fallacy, some reasons include not realizing it is a fallacy, not having any good arguments to use, or knowing that a fallacy can be far more persuasive than a logically good argument.

Rather than engage in the endless task of addressing the multitude of specific fallacious appeals to tradition, I will focus on the fallacy itself in the hope of providing the tools needed to recognize and defend against such appeals. To begin and to provide a context for the appeal to tradition, we need to consider two similar fallacies, the appeal to belief and the appeal to common practice.

The appeal to belief fallacy occurs when a person infers a claim is true because all or most people believe the claim. It has the following pattern:

 

Premise 1:  All (or most) people believe that claim X is true.

Conclusion: Therefore, X is true.

 

This line of “reasoning” is fallacious because the fact that many people believe a claim does not, in general, serve as evidence the claim is true.

There are some cases when belief does serve as evidence for a claim. One case is when the appeal is to the belief of experts. This sort of reasoning should take the form of an argument from authority, which I have discussed in an earlier essay. Sometimes, almost anyone can qualify as an expert. For example, suppose that while visiting my home state of Maine, several residents you see fishing tell you that people older than 16 need to buy a license to fish. Barring reasons to doubt these people, you have good reason to believe their claim because they most likely know the law and are probably not lying to you.

There are also cases where belief makes a claim true. Avoiding the fallacy in such cases does require explicitly including this as a premise. For example, what counts as good manners depends on belief. The meaning of words also seems to rest on belief: words, in a practical sense, mean what most people believe they mean. Some philosophers argue that ethical and aesthetic claims fall into this category. Those who embrace moral relativism argue that what is good and bad is determined by the beliefs of a culture. Those who embrace aesthetic relativism contend that beauty is determined in the same way. But these theories cannot be simply assumed without committing the fallacy of begging the question. Now to the appeal to common practice.

While the appeal to belief involves what people believe, the appeal to common practice involves what people do. It occurs when someone concludes that an action is correct or right simply because it is (alleged to be) commonly done. It has the following form:

 

Premise 1: X is a common action.

Conclusion: Therefore, X is correct/moral/justified/reasonable, etc.

 

It is a fallacy because the mere fact that most people do something does not make it correct, moral, justified, or reasonable. As with appeal to belief, there are philosophers who argue there can be arguments from common practice that are not fallacious. For example, moral relativism is the theory that morality is relative to the practices of a culture. If what is moral is determined by what is commonly practiced, then a non-fallacious argument could be constructed using that as a premise. In this situation, what most people do would be right because that would be how morality worked.

People sometimes mistake an appeal to fair play for an appeal to common practice. For example, a woman working in an office might say, “the men who do the same amount and quality of work I do get paid more than I do, so it would be right for me to get paid the same as them.” The argument does not rest on the practice being a common one; rather, it is an appeal to the principle of relevant difference. On this principle, two people should be treated differently only if there is a relevant difference between them. For example, it would be morally acceptable to pay people differently for work of different quality, but it would not be acceptable to pay people differently for the same quality and quantity of work simply because one is a male and the other female. As would be suspected, there is considerable debate about what differences are relevant. For example, sexists might believe that it is right for women to be paid less simply because they are women.

You might wonder what the appeal to belief and the appeal to common practice have to do with the appeal to tradition. Roughly put, the appeal to tradition fallacy involves arguing that something is true or right because it has been believed or done for a long time (or both). As such, the fallacy occurs when it is assumed that something is better or correct simply because it is older, traditional, or “always has been done/believed.”

This sort of “reasoning” has the following form:

 

Premise 1: X is old or traditional (X has been believed or done a long time)

Conclusion: Therefore, X is correct (or better than the new or non-traditional).

 

This sort of “reasoning” is fallacious because the age of something does not automatically make it correct or better than something newer or non-traditional. This is shown by the following example: the theory that witches or demons cause disease is far older than the theory that microorganisms cause disease. If the appeal to tradition were good reasoning, then the theory about witches and demons would be true.

While one should avoid falling for the appeal to tradition, it is equally important to avoid falling for the appeal to novelty.  This fallacy occurs when one infers that something is correct or better simply because it is new or non-traditional. This sort of “reasoning” has the following form:

 

Premise 1: X is new (or non-traditional).

Conclusion: Therefore, X is correct or better than the old/traditional.

 

This sort of “reasoning” is fallacious because the novelty or newness of something does not automatically make it correct or better than something older. To use a silly example, if someone just created the “earthworm diet” that involves eating only earthworms, it obviously does not follow that this is better than more traditional diets.  In general, the age or traditionality of something provides no evidence for or against its truth or goodness. In the next essay I will get into some deeper philosophical analysis of the appeal to tradition and why it is defective. 

During the COVID-19 pandemic, Leon County in my adopted state of Florida mandated the wearing of face coverings in indoor, public spaces. There were numerous exceptions, such as while exercising (at a distance) and for health reasons. Those who violated the ordinance faced an initial $50 fine, which increased to $125 and then to $250. As would be expected, this ordinance was met with some resistance. Some even claimed that the mask mandate was tyranny.

While discussing the tyranny of the mask during COVID-19 has some historical value, there is also the general issue of whether such health-focused mandates are tyrannical. After all, it is just a matter of time before the next pandemic, and the state might impose mandates intended to keep people safe. Or it might not, depending on who is in charge.

One challenge is agreeing on a serious definition of “tyranny” beyond “something I don’t like.” Since American political philosophy is based heavily on John Locke, he is my go-to for defining the term.

Locke takes tyranny to be the “exercise of power beyond right.” For him, the right use of power is for the good of the citizens and a leader’s use of power for “his own private separate advantage” is exercising that power “beyond right.” Locke also presents some other key points about tyranny, noting that it occurs when “the governor, however entitled:

 

  • Makes his will and not the law the rule
  • Does not direct his commands and actions to the preservation of the properties of his people.
  • Directs them to the satisfaction of his own ambition, revenge, covetousness, or any other irregular passion.”

 

Did the ordinance, and similar impositions, meet the definition? On the face of it, it did not. After all, the aim of the ordinance seemed to be for the good of the citizens: it was aimed at reducing the chances of infection. It was also aimed at allowing businesses and other public places to remain open. That is, it was aimed at the preservation of the properties of the people. There is no evidence that those in office used the ordinance for their “own private separate advantage” or were trying to satisfy some “irregular passion.”

It could be argued that while the objectives of the ordinance were not tyrannical, the ordinance involved exercising power “beyond right.” That is, the ordinance overstepped the legitimate limits of the power of the governing body. Since I am not a lawyer, I will focus on the moral aspect: do authorities have the moral right to impose a mask requirement or similar health measure on the people?

While people tend to answer in terms of their likes and dislikes, I will follow J.S. Mill and use principles I consistently apply in cases of liberty versus safety. As in all such cases, my first area of inquiry is into the effectiveness of the proposed safety measures. After all, if we are giving up liberty to gain nothing, this would be both foolish and wrong.

While there is some debate over the effectiveness of masks, the consensus of experts is that they do help prevent the spread of the virus. There is also the intuitively plausible argument that face coverings reduce the spread of the virus because they reduce the volume and distance of expulsions. They also block some of what is incoming. Medical professionals have long used these masks for these reasons. In future pandemics, we will also need to evaluate the effectiveness of proposed measures in good faith.

But wearing a mask is not without its costs. Aside from the cost of buying or making masks, they are uncomfortable to wear, they interfere with conversations, and it is hard to look good in a mask. While breathing does require a tiny bit more effort, this is generally not a significant factor for most. Those with pre-existing conditions impacting their breathing are more likely to be severely impacted by COVID-19—but they will need to rationally weigh the relative risks. Anecdotally, I did not find the masks problematic for normal wear, but I used to run wearing a face mask during the Maine winters to keep my face from freezing. That said, the “paper” masks were uncomfortable to wear when they were soaked with sweat, but I was almost always able to rely on distancing while running.

Weighing the effectiveness of the masks against the harm, they seem to have had a decisive safety advantage: by enduring some minor discomfort for short periods of time you could reduce your risk of being infected with a potentially lethal disease. You also reduced the risk of infecting others. Again, whatever measures are proposed during the next pandemic will also need to be assessed in this way.

The second issue to address is whether the gain in safety warrants the imposition on liberty. After all, some people did not want to wear masks, and it is an imposition to require this under the threat of punishment. My go-to guide on this is the principle of harm presented by J.S. Mill.

Mill contends that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” I will rely on Mill’s arguments for his principle but agree it can be criticized in favor of alternative principles.

During the discussion of his principle Mill argues that we (collectively) have no right to infringe on a person’s liberty just because doing so would be good for them or even to prevent them from harming themselves. As long as their actions impact only themselves, their liberty is absolute. Applying this to the masks, if they only served to protect a person from infection, then Mill’s principle would forbid their imposition: people have the liberty of self-harm. If this had been true, I would have agreed with those who saw masks as tyranny: they have the moral right to put themselves at risk if doing so does not harm others. As they say, their body, their choice.

To use an analogy, if I want to go shooting without wearing any eye protection (and I have medical insurance), I have the right to be stupid and risk losing an eye. But the masks do more than protect the wearer; they also protect other people. If I go out without a mask and I am unaware I am infected, I am putting other people in greater danger—I am potentially harming them. As such, it is no longer just my business; it is their business as well.

Going back to the gun analogy, I do not have a right to just shoot my gun around whenever and wherever I want since doing so puts other people at risk of injury and death. I can be rightfully prevented from doing this. To use another analogy, while I think a person has the moral right to turn off their airbag in their car and face a greater risk of injury or death, they do not have the right to remove their brakes since that would put everyone in danger.

The obvious conclusion is that the imposition of masks was not tyranny. In fact, it is an excellent example of how the state should exercise its power: for the protection of the citizens based on the best available evidence. When the next pandemic arrives, the same approach should be taken. Assuming that the government tries to do anything to address it.

In the previous essay I discussed guilt by association. Not surprisingly, there is an equal but opposite temptation: to refuse to acknowledge bad elements in groups one likes. Giving in to this temptation can result in committing a version of the purity fallacy which could be called the Denial of Association.

This version of the fallacy occurs when a negative claim about a group based on certain members is rejected by asserting, without adequate support, that the alleged members are not true members of the group. This fallacy is also known as the No True Scotsman fallacy thanks to the philosopher Antony Flew. For example, if a 2nd Amendment rights group is accused of being racist, they might say that those displaying racist symbols at their events were not real members. This version of the fallacy has the following form:

 

Premise 1: Negative claim P has been made about group G based on M members of G.

Premise 2: It is claimed, without support, that the members of M are not true members of G.

Conclusion: Claim P is false.

 

This reasoning is fallacious because simply asserting that problematic alleged members are not true members does not prove that the claim is not true about the group. As always, it is important to remember that fallacious reasoning does not entail that the conclusion is false. A group’s defender could commit this fallacy while their conclusion is correct; they would have simply failed to give a good reason to accept their claim.

Like many fallacies, it draws its persuasive power from psychological factors. Someone who has a positive view of the group has a psychological, but not logical, reason to reject the negative claim. Few are willing to believe negative things about groups they like or identify with. In Flew’s original example, a Scotsman refuses to believe a story about the bad behavior of other Scotsmen on the grounds that no true Scotsman would do such things. People can also reject the claim on pragmatic grounds, such as when doing so would provide a political advantage.

The main defense against this fallacy is to consider whether the negative claim is rejected on principled grounds or is rejected without evidence, such as on psychological or pragmatic grounds. One way to try to overcome a psychological bias is to ask what evidence exists to reject the counterexample. If there is no such evidence, then all that would be left are psychological or pragmatic reasons, which have no logical weight.

Sorting out who or what belongs in a group can be a matter of substantial debate. For example, when people displaying racist symbols show up at gun rights events or protests the question arises as to whether the protesters should be regarded, in general, as racist. Some might contend those openly displaying racist symbols should not define the broader group of protesters. Others contend that by tolerating the display of racist symbols the general group shows that it is racist. As another example, those peacefully protesting police violence generally disavow those who engage in violence and vandalism and claim that the violent protesters do not define their group. Others contend that because violence and looting sometimes occurs adjacent to or after peaceful protests, the protesters are violent looters. College students peacefully protesting Israel’s actions contend that they are not antisemitic and disavow antisemitism, but their right-wing critics claim they are antisemitic. In some cases, there are actual antisemites involved. In other cases, merely criticizing Israel is cast as antisemitic.

Debates over group membership need not be fallacious. If a principled argument is given to support the exclusion, then this fallacy is not committed. For example, if a fictional 2nd amendment rights organization “Anti-Racists for Gun Rights” (ARGR) was accused of being racist because people at their protest displayed racist symbols, showing that none of the racists were members of ARGR would not commit this fallacy.

As another example, if peaceful protesters show that those who engaged in violence and looting are not part of their group, then it would not be fallacious for them to reject the claim that they are violent on the grounds that those committing the violence are not in their group. As a third example, if college students peacefully protesting Israel show that the people shouting antisemitic slogans at the protest were neo-Nazis from off campus, then they would not be committing this fallacy.

Sorting out which people belong to a group and how the group should be defined can be challenging, but it should be done in a principled way. To define a group by the worst of those associated with it runs the risk of committing the guilt by association fallacy. To assert without support that problematic members are not true members of a group runs the risk of committing the denial of association fallacy. While both fallacies are psychologically appealing and can be highly effective means of persuasion, they have no merit as arguments.

As a practical matter, the unprincipled use both fallacies in bad-faith efforts to advance their goals. After all, what matters to them is “winning” rather than what is true and good.

It is tempting to define a group you do not like by the worst people associated with it, but this can lead to committing the fallacy of guilt by association. To illustrate, conservative protests sometimes include people openly displaying racist symbols and this can lead leftists to conclude that all the protestors are racists. As another example, protests against Israel’s actions sometimes include people who make antisemitic statements, and this leads some people to categorize the protests as antisemitic. While this is often done in bad faith, people can sincerely make unwarranted inferences about protests from the worst people present.

Since people generally do not make their reasoning clear, it often must be reconstructed. One possible line of bad reasoning is the use of a hasty generalization. A hasty generalization occurs when a person draws a conclusion about a population based on a sample that is not large enough to adequately support the conclusion. It has the following form:

 

Premise 1: Sample S (which is too small) is taken from population P.

Premise 2: In Sample S X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in Population P.

 

This is a fallacy because the sample is too small to warrant the inference. In the case of the protesters, inferring that most conservative protesters are racists based on some of them displaying racist symbols would be an error. Likewise, inferring that most people protesting Israel are antisemitic because some of them say antisemitic things would also be an error. At this point it is likely that someone is thinking that even if most conservative protesters are not open racists, they associate with them—thus warranting the inference that they are also guilty. Likewise, someone is probably thinking that people protesting Israel are guilty of antisemitism because of their association with antisemites. This leads us to the guilt by association fallacy.
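The inadequacy of a small sample can also be illustrated statistically. The sketch below uses purely hypothetical numbers and a simple normal approximation to show how much uncertainty a small sample leaves around an observed rate:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for an observed proportion
    (simple normal approximation; adequate for illustration)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Purely hypothetical numbers: 6 of 20 observed protesters display racist symbols.
low_small, high_small = proportion_ci(6, 20)

# The same observed rate (30%) in a hypothetical sample of 2,000
# yields a far narrower interval.
low_big, high_big = proportion_ci(600, 2000)
```

With only 20 observations the interval spans roughly 10% to 50%, which is far too wide to support a claim about “most” protesters; only a much larger (and unbiased) sample would narrow it enough to warrant a generalization.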

The guilt by association fallacy has many variations, but this version occurs when it is inferred that a group or individual has bad qualities because of their (alleged) association with groups or individuals who have those qualities. The form of the fallacy is this:

 

Premise 1: Group or person A is associated with group or person B

Premise 2: Group or person B has (bad) qualities P, Q, R.

Conclusion: Group or person A has (bad) qualities P, Q, R.

 

The error is that the only evidence offered is the (alleged) association between the two. What is wanting is an adequate connection that justifies the inference. In the conservative protester example, the protesters might be associated with protesters displaying racist symbols, but this is not enough to warrant the conclusion that they are racists. More is needed than a mere association. The more is, as one would imagine, a matter of considerable debate: those who loathe conservatives will tend to accept relatively weak evidence as supporting their biased view; those who like the protesters might be blind even to the strongest evidence. Likewise for people protesting Israel. But whatever standards are used to judge association, they must be applied consistently—whether one loathes or loves the group or person.

As noted above, people who have protested Israel have been accused of association with antisemites. But the same standards applied to conservative protesters need to be applied: to infer that because some protesters have been observed to be antisemitic then most (or all) are as well would commit the hasty generalization fallacy. Naturally, if there is evidence showing that most conservative protesters are racist or evidence showing that most (or all) people who protest Israel are antisemitic, then the fallacy would not be committed.

To infer that those protesting Israel are antisemitic because some associated with the protests are antisemitic would commit the guilt by association fallacy, just as the fallacy would be committed if one inferred that conservative protesters are racists because they are associated with racists. Obviously, if there is adequate evidence supporting these claims, then the fallacy would not be committed.

While most Americans initially supported the lockdown, a fraction of the population engaged in (often armed) protests. While the topic of protests is primarily a matter for political philosophy and ethics, critical thinking applies here as well. Given the political success of the anti-health movement in America, we can expect protests against efforts to mitigate the next pandemic. Assuming that any efforts are made.

While the protests were miniscule in size relative to the population of the country, they attracted media attention—they made the national news regularly and the story was repeated and amplified. On the one hand, this makes sense: armed protests against efforts to protect Americans from the virus were news. On the other hand, media coverage was disproportional to the size and importance of the protests. The “mainstream” media is often attacked as having a liberal bias, and while that can be debated, the media does have a bias for stories that attract attention. Public and private news services need stories that draw an audience. Protests, especially by people who are armed, draw an audience. It can also be argued that some news services have a political agenda that was served by covering such stories.

While it can be argued that such stories are worth covering in the news, disproportional coverage can lead people to commit the Spotlight Fallacy. This fallacy is committed when a person uncritically assumes that the degree of media coverage given to something is proportional to how often it occurs or its importance. It is also committed when it is uncritically assumed that the media coverage of a group is representative of the size or importance of the group.

 

Form 1

Premise 1: X receives extensive coverage in the media.

Conclusion: X occurs with a frequency, or has an importance, proportional to its coverage.

 

Form 2

Premise 1: People of type P or Group G receive extensive coverage in the media.

Conclusion: The coverage of P or G is proportional to how well P or G represents the general population.

 

This line of reasoning is fallacious since the fact that someone or something attracts the most attention or coverage in the media does not mean that it is representative or that it is frequent or important.

It is like the fallacies Hasty Generalization, Biased Sample and Misleading Vividness because the error being made involves generalizing about a population based on an inadequate or flawed sample.

In the case of the lockdown protests, the protests were limited in occurrence and size, but the extent of media coverage conveyed the opposite. The defense against the Spotlight Fallacy is to look at the relevant statistics. As noted above, while the lockdown protests got a great deal of coverage, they were small events that were not widespread. This is not to say that they have no importance. As such, we should look at such protests not through the magnifying glass of the media but through the corrective lenses of statistics. I now turn to an ad hominem attack on the protesters.
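The corrective lens can be as simple as a back-of-the-envelope comparison of coverage against participation. Every figure in this sketch is an assumption made up for illustration, not a reported statistic:

```python
# All figures below are assumed for illustration only.
protest_news_share = 0.25      # assumed fraction of national news time on the protests
protesters = 30_000            # assumed rough total number of protesters
us_population = 330_000_000

actual_share = protesters / us_population          # fraction of the population protesting
disproportion = protest_news_share / actual_share  # how far coverage outruns participation

# Under these assumptions, coverage outstrips participation by a factor in the
# thousands, which is exactly the mismatch the Spotlight Fallacy trades on.
```

Even if the assumed numbers are off by an order of magnitude in either direction, the gap between coverage and actual frequency remains enormous, which is the point of checking the statistics rather than trusting the spotlight.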

Some critics of the protesters pointed out that the protesters were also being manipulated by an astroturfing campaign. Astroturfing is a technique in which the true sponsors of a message or organization create the appearance that the message or organization is the result of grassroots activism. In the case of the lockdown protests, support and organization was being provided by individuals and groups supporting Trump’s re-election and who were more concerned with a return to making money than the safety of the American people.  While such astroturfing is a matter of concern, to reject the claims of the protesters because they are “protesting on AstroTurf” rather than standing on true grassroots would be to commit either an ad hominem or genetic fallacy.

An ad hominem fallacy occurs when a person’s claim is rejected because of some alleged irrelevant defect about the person. In very general terms, the fallacy has this form:

 

Premise 1: Person A makes claim C.

Premise 2: An irrelevant attack is made on A.

Conclusion: C is false.

 

This is a fallacy because attacking a person does not disprove the claim they have made. In the case of a lockdown protester, rejecting their claims because they might be manipulated by astroturfing would be a fallacy. As would rejecting their claims because of something one does not like about them.

If the claims made by the protesters as a group were rejected because of the astroturfing (or other irrelevant reasons) then the genetic fallacy would have been committed. A Genetic Fallacy is bad “reasoning” in which a perceived defect in the origin of a claim or thing is taken to be evidence that discredits the claim or thing itself. Whereas the ad hominem fallacy is literally against the person, the genetic fallacy applies to groups. The group form looks like this:

 

Premise 1: Group A makes claim C.

Premise 2: Group A has some alleged defect.

Conclusion: C is false.

 

While it is important to avoid committing fallacies against the protesters, it is also important to avoid committing fallacies in their favor. Both the ad hominem and genetic fallacy can obviously be committed against those who are critical of the protesters. For example, if someone dismisses the claim that the protesters are putting themselves and others at needless risk by asserting that the critic “hates Trump and freedom”, then they would be committing an ad hominem. The same will apply to future protests about responses to pandemics. Again, assuming there will be a response.

To many Americans the protests seemed not only odd, but dangerously crazy. This leads to the obvious question of why they occurred. While some might be tempted to insult and attack the protesters under the guise of analysis, I will focus on a neutral explanation that is relevant to critical thinking. This analysis should also be useful for thinking about the next pandemic.

One obvious reason for the protests is that the lockdown came with an extremely high price—people had good reason to dislike it, and this could have motivated them to protest. But there is more to it than that. The protests were more than people expressing their concerns and worries about the lockdown. They were political statements and thoroughly entangled with other factors that included supporting Trump, anti-vaccination views, anti-abortion views, second-amendment rights, and even some white nationalism. This is not to claim that every protester endorsed all the views expressed at the protests. Attending a protest about one thing does not entail that a person supports whatever is said by other protesters. Because people try to exploit protests for their own purposes, it is important to distinguish the views held by various protesters to avoid falling into assigning guilt by association. That said, the protests were an expression of a polarized political view, and it struck many as odd that people would be protesting basic pandemic precautions.

One driving force behind this was what I have been calling the Two Sides Problem. While there are many manifestations of this problem, the idea is that when there are two polarized sides, this provides fuel and accelerant to rhetoric and fallacies—thus making them more likely to occur. Another aspect of having two sides is that it is much easier to exploit and manipulate people by appealing to their membership in one group and their opposition to another.

In the case of the protests, there was a weaponization of public health. Those who recommended the lockdown were experts, and there is an anti-expert bias in the United States. The weaponization of the crisis to help the political right followed the usual tactics: disinformation about the crisis, claims of hoaxes, scapegoating, anti-expert rhetoric, conspiracy theories, and such. Part of what drove this was in-group bias: the cognitive bias that inclines people to assign positive qualities to their own group while assigning negative qualities to others. This also applies to accepting or rejecting claims.

This weaponization was not new or unique to the COVID-19 pandemic. American politics has been marked by politicizing and weaponizing so that one side can claim a short-term advantage at the cost of long-term harm. Critical thinking requires us to be aware of this and to be honest about the cost of allowing this to be a standard tool of politics.

While there were many aspects to the lockdown protests, one of the core justifications was that the lockdown was a violation of Constitutional rights. The constitutional aspect is a matter of law, and I leave that to experts in law to debate. There is also the ethical aspect—whether the lockdown was morally acceptable, and this issue can be cast in terms of moral rights. This discussion would take us far afield into the realm of moral philosophy, but I will close with an analogy that might be worth considering.

While the protesters were against the lockdown in general, opposition to wearing masks was the focus of the complaints. While there was rational debate about the efficacy of masks, the moral argument advanced was that the state does not have the right to compel people to wear masks. It can also be presented in terms of people having rights that the state must respect. One possibility is that people have the right to decide what parts of the body they wish to cover. If so, the obvious analogical argument is that if this right entitles people to go without masks, it also entitles people to go without any clothes they choose not to wear. If imposing masks is an act of oppression, then so is imposing clothing in general.

Another possible right is the right to endanger others or at least freely expose other people to bodily “ejections” they do not wish to encounter. If there is such a right, then it could be argued that people have a right to fire their guns and drive as they wish, even if doing so is likely to harm or kill others. If there is a right to expose other people to physical bodily ejections that they do not want to be exposed to, then this would entail that people have the right to spit and urinate on other people. This all seems absurd.

As a practical matter, people are incredibly inconsistent when it comes to rights and restrictions, so I would expect some people to simply dismiss these analogies because they did not want to wear masks but probably do not want people running around naked. But if masks were an act of oppression, so are clothes.

When the next pandemic arrives, we can expect similar protests against efforts to combat it. But this assumes that efforts will be taken, which will depend on who is running America during the next pandemic.