I have two main goals in addressing the question of why the right lies so often. The first is to satisfy my curiosity as a philosopher who teaches ethics, epistemology, and critical thinking. While there is little point in trying to get the liars to stop lying, my second goal is to encourage honest people on the right to look at their claims critically.

What I am asking of the honest folks of the right is to act in accord with Ben Shapiro’s famous saying: “facts don’t care about your feelings.” If you are an honest person on the right and you believe this, then you should engage with the claims you and your fellows make in accord with this professed view: hold your feelings in check and consider what the evidence supports. There is also the popular YouTube pastime of destroying the liberals with facts and logic. Consistency requires that honest folks on the right subject the claims of conservatives to the same treatment. Or, rather, to the treatment promised in the memes: to check the claims rigorously in accord with the principles of critical thinking and to make use of non-fallacious logic. The truth can withstand scrutiny, and good reasoning can hold up when assessed. As such, if an honest person on the right is sure that a claim made by their fellows is true, then they should not fear subjecting it to rigorous and objective evaluation. Likewise, if an argument made by a fellow conservative is strong (or valid), then you should not hesitate to put it to the test.

Before getting to the substantive content, I need to pre-empt some likely fallacious attacks. One likely attack is to respond by saying “what about the left?” or “What about Stalin? He lied all the time!” This is the rhetorical strategy and fallacy known as “whataboutism.” A second likely response is to say that everyone lies, that the practice is a common one. This is, obviously enough, just the fallacy of common practice. A third likely response is to assert that the left also lies and so they are just as bad as the right. This is a false equivalence. To avoid straw man attacks, I must make it clear that I am not claiming that the American right always lies or that the American left never lies. My claim is that lying is common on the right and my inquiry is into why this is the case. Feel free to destroy the lying left with facts and logic in your own blog or video.

The first step is providing a definition of “lying.” While one could discuss the fine nuances of the concept at length, my discussion only requires an intuitively plausible definition: lying is intentionally asserting that a claim is true when one believes it is false. This is not a perfect definition of the concept, but my discussion does not hinge on any nuanced or fine distinctions. I would, of course, be happy to get into a deep discussion of the complexities of defining the concept at another time.

The second step is providing evidence that the right, under the common usage of the term, lies often. A good place to start is at the top with President Trump. To be fair to the right, a good case can be made that Trump is not committed to conservative ideology: he has shifted his political affiliations often, and one can easily imagine a parallel world in which Trump was elected as a Democrat. But Trump has conquered the American right and is their undisputed king. Trump is an epic-level liar and his lies are thoroughly documented.

The folks at Fox News lie regularly, as the Daily Show showed back in 2015 with their “50 Fox News lies in 6 seconds.” The dishonesty of Fox News is well documented.  This is important because Fox News is the defining news of the right, although it now has some competition.

Another realm to consider is the brain of the right, the self-proclaimed intellectuals and thinkers of the right. The best known are Ben Shapiro and Jordan Peterson. The YouTube intellectuals of the right are interesting in that they do what politicians and pundits generally do not: they lay out their views at length, they endeavor to argue, and they purport to offer evidence for their claims. While these folks generally do not hold political offices or positions of power, they present and defend the views of the right. As such, they are the ones that can be engaged in some form of intellectual combat.

There is an entire YouTube industry of left-wing tubers who meticulously go through the claims and arguments of the right. The usual approach is the same used by professional scholars: checking the sources and assessing the reasoning. One common occurrence is that the sources used in the right-wing video are often either problematic (biased) or clearly misused (the right-winger is wrong about what the source shows). One example of this is Shaun’s look at Lauren Southern’s claims about the great replacement conspiracy theory. As another example, the sarcastic Some More News goes through Ben Shapiro’s claims about systemic racism and notes the lies. While one might be tempted to use an ad hominem and dismiss the lefty critics on the grounds that they are lefties, the competent critics follow the good practices of citing sources, referencing the original in context, and assessing arguments in accord with the standards of good logic. As such, one can engage these works and check their claims and reasoning. In general, the right-wing thinkers seem to routinely make many false claims. These videos are all available on YouTube and you can go through each one yourself, checking the claims and assessing the logic. Destroying is an option, if you are into that.

The third step is to answer the question of why the right lies so often.  One easy and obvious answer is that it works: conservatives seem to be more susceptible to certain lies than liberals. There are also answers to be found in the realm of psychology as to why, for example, Trump lies. But my concern here is with the rational reasons why the right engages in lying as a strategy.

The first reason is that facts and reality often conflict with the interests and ideology of the right. They also conflict with the right’s claims of success. To be more specific, the claims used to argue for the policies and views of the right are often not true and they know it. The claims used to defend the effectiveness of these policies are also often untrue and they know it. As Stephen Colbert said, reality has a well-known liberal bias.

Two classic examples of this are the right’s claim that tax cuts pay for themselves and their denial of climate change. We have had decades of empirical testing of the claim that tax cuts pay for themselves: you can examine the results yourself and find that they do not. The truth of climate change has been established beyond a reasonable doubt. If Republicans were honest about tax cuts or climate change, they would be hard pressed to make the case for their tax and climate policies or to “show” that those policies are effective.

As another example, consider the right’s claims about voter fraud. While the right asserts that voter fraud is a problem, the evidence does not support their claim. The means they propose to combat this almost non-existent fraud are instead aimed at voter suppression. If the right were honest about the extent of voter fraud, their argument would be utterly undercut. If voter fraud is almost non-existent, there would be little reason to accept their proposals to address it.

As a final example, consider Trump’s disastrous response to COVID-19. Trump’s main response was to lie. Bizarrely, Trump has made it clear that he would like less testing so that the United States would, falsely, look better. Trump is somewhat unusual in his tendency to sometimes be honest about his dishonesty. The COVID-19 response thus serves as a paradigm example of why the right lies so often: their policies and values often conflict with reality and the only way they can claim success is by lying.

A second reason that many on the right lie is that being honest about their values, beliefs and goals would either not win support or would even repulse most people. To illustrate, being honest about the effectiveness of tax cuts and who they benefit would generally not win much support from citizens. To use a more extreme example, consider the 1981 interview with Lee Atwater in which he lays out the southern strategy. Atwater makes it clear how lying about their racism is a strategy of the right:

 

You start out in 1954 by saying, “Nigger, nigger, nigger.” By 1968 you can’t say “nigger”—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.… “We want to cut this,” is much more abstract than even the busing thing, uh, and a hell of a lot more abstract than “Nigger, nigger.”

 

Lest you think that this strategy of deceit is a thing of the past, consider Trump’s tweet about the Suburban Lifestyle: “I am happy to inform all of the people living their Suburban Lifestyle Dream that you will no longer be bothered or financially hurt by having low income housing built in your neighborhood……Your housing prices will go up based on the market, and crime will go down. I have rescinded the Obama-Biden AFFH Rule. Enjoy!” Classic Atwater. As a final example, telling voters that the goal is to suppress votes to gain an undemocratic advantage and hold onto power would not win widespread support from the public.

Thus, the right has excellent reasons to lie. The first is that their claims often clash with the facts and if they did not lie, they would have to admit they are wrong. The second is that being honest would often cause them to lose support.

For those who dislike the Democrats, the mainstream of that party generally does not rely on strategic lying. Rather, they make use of strategic silence: they simply say nothing. We can see how well that has been working for them.

During the last pandemic the ideological battle over masks only slightly surprised me. On the one hand, getting into a fight over wearing masks during a pandemic is like getting into a fight over having brakes on cars. On the other hand, people can fight over what is stupid to fight over, and the right has been working hard to undermine trust in reality, facts and science. So, we ended up in a situation in which people in positions of authority embraced the anti-mask position. Or the “pro-choice” position for some. As is usually the case with culture war fights, the fight was not grounded in any consistently held and applied principles.

But to be fair, there are legitimate concerns about masks during the last pandemic. To illustrate, there were concerns about having enough of them, about their impact on the ability of students to learn and teachers to teach, as well as on the development of critical language and social skills that require being able to see faces. These are all matters that are worthy of serious consideration and can provide reasons to forgo masks provided proper precautions are taken. My concern is directed at the reasons given that were ill-founded and inconsistent. Yes, I’m planning ahead for the next pandemic.

Ironically, some people borrowed from the abortion debate and took the position of Jed Davis, the president of Parkview Christian’s school board. As Jed said, “We’re not trying to politicize masks by any means. Again, we’re not anti-mask, we’re pro-choice.” Along this vein, some people also made arguments based on liberty and Constitutional rights. In general terms, these arguments seem to be:

 

Premise 1: People have the right to choose what they wear.

Premise 2: Some people choose not to wear a mask.

Conclusion: These people have the right to not wear masks.

 

While an appeal to rights is appealing, there is the matter of consistent application. This can be used to test whether the proponents of allowing people to forgo masks believe in their professed principle. It is also a moral requirement: if they believe their professed principle, they must apply it consistently in relevantly similar cases. So, let us engage in a thought experiment and use the same reasoning with a slight change.

 

Premise 1: People have the right to choose what they wear.

Premise 2: Some people choose to wear shirts that say “Fuck.”

Conclusion: These people have the right to wear shirts that say “Fuck.”

 

I suspect that Jed Davis and other school officials would not follow their professed “pro-choice” principle consistently and allow students to wear such shirts; but I could be wrong. Give it a try, kids.

It could be argued that there is a relevant difference: students are not supposed to wear shirts with “fuck” on them because the word is vulgar and could offend people. People are supposed to wear masks to protect themselves and others from pathogens. So, students should be able to forgo masks but must be prevented from wearing “fuck” shirts.

While there is a difference between masks and “fuck” shirts, this difference would seem to favor requiring masks. After all, if schools ban clothing like “fuck” shirts because they might offend, they should require masks because forgoing them can result in serious illness or even death. To be consistent, Jed Davis and his fellows would need to allow students to dress as they wish, including “fuck” shirts. Or he would need to maintain dress codes and require masks to protect people from harm.

It could be objected that the “fuck” shirt is not analogous. After all, the choice is to wear a “fuck” shirt versus a choice to not wear a mask. So, the right being advocated is not the freedom to wear what you want, but a freedom to not wear what you do not want to wear. So, I will modify the argument again:

 

Premise 1: People have the right to not wear what they do not want to wear.

Premise 2: Some people do not want to wear clothing.

Conclusion: These people have the right not to wear clothing they do not want to wear.

 

One could object, again, that clothes are not analogous to masks. This is true: clothes are not important tools in preventing the spread of the virus. As such, any argument that would support the right to choose not to wear a mask would also support the far less impactful right to choose to go shirtless in school. I suspect that Jed and his fellows would not allow students to come to school without being fully dressed. But if they are pro-choice when it comes to masks, consistency requires that they allow the same freedom across the board. Obviously enough, allowing students to dress as they wish would present no meaningful danger to others; forgoing a mask would.

If the freedom argument has any merit, then the maskless students must be allowed to wear “fuck” shirts and otherwise dress as they wish. Just to be clear, while I favor freedom of expression and oppose the tyranny of pants (in favor of wearing running shorts) what I am advocating is that students be compelled to wear masks if doing so would protect them and others from a pandemic.

While American mythology lauds fair competition and self-made heroes, our current system of inheritance creates unfair competition and being a self-made hero is all but impossible. One major part of the inheritance problem is the disparity it has created between white and black Americans. While most of those in positions to address this matter must be fine with it, if you believe in fair competition and equality of opportunity, then consistency requires that you also believe that this problem needs to be addressed.

Condensing history, white people have enjoyed numerous advantages gifted to them by the state. The Homestead Act of 1862 provided land that went mostly to white people, land that was acquired in large part through the 1830 Indian Removal Act. Compensation was also paid to white slave owners after the Civil War, but the infamous 40 acres and a mule remains an empty promise. The 1935 Wagner Act gave unions the ability to engage in collective bargaining, and these unions were a great boon to white workers. But it permitted unions to exclude non-whites, which they usually did.

The Federal Housing Administration’s programs allowed millions of average white Americans to buy homes while excluding black Americans. The national neighborhood appraisal system tied mortgage eligibility to race: integrated communities were defined as financial risks and thus ineligible for home loans, a practice known as “redlining.” From 1934 to 1962, the government backed $120 billion in home loans, with 98% going to whites. Even now, black and Latino mortgage applicants are still 60% more likely than whites to be rejected, controlling for factors other than race. One common response to such assertions is that while past racism was bad, the past is past. While this does have rhetorical appeal, it is fundamentally mistaken: the past influences and shapes the present. One obvious way this occurs is through inheritance: wealth accrued from slavery and from state handouts to white people has been passed down through the generations. This is not to deny obvious truths: some white people blow their inheritance, many white people are mired in poverty, and there are some rich black Americans. The problem is a general problem that is not disproved by individual exceptions.

Because of the policies and prejudices of the past, the average white family today has about eight times the assets of the average African American family. Even if families with the same current incomes are compared, white families have over double the wealth of Black families. A primary reason for this is inheritance.

Inheritance, obviously enough, enables a family to pass down wealth that can be used to provide competitive advantages. These can include funding for education and starting money for businesses. It also helps people endure difficult times, such as pandemics and recessions, better. As such, whites enjoy an unearned competitive advantage over blacks: they inherit an advantage. The advantage is also built on explicitly racist and discriminatory policies.

Some have called for reparations for past injustices and others vehemently oppose the idea. One stock objection is that reparations would take resources from living people to give them to other living people based on injustices committed by people who are long dead. While this objection can be countered, an easy way to get around it, and many others, is to adopt a plan focused on heavily taxing inheritance and using the revenue to directly counter past and present economic unfairness.

To win over consistent conservatives, the resources should be used to enhance the fair competition they claim to believe in. Examples of how the resources should be used include addressing funding inequities in education, addressing infrastructure inequities, and addressing disparities in mortgages. That is, providing people with a fair start so they compete in the free market alleged to be so beloved by conservatives.

When marketing the idea to conservatives, the emphasis should be on how people are now benefiting from what conservatives profess to loathe: unearned handouts and unfair advantages based on race, both given by the state. One can assume that people with such professed values will support this idea; otherwise one would suspect they are lying about their principles. The proposed plan would help remove unfair and unearned handouts and enable the competition to be reset. To use the obvious analogy, this would be like a sports official noticing athletes cheating and then responding by restoring fairness to the competition.

This proposal has many virtues including that it allows past economic injustices to be addressed in a painless manner: nothing will be taken away from any living person for what a dead person did. Rather, some people will receive less of an unearned gift. As such, they are not losing anything, they are simply getting less of something they do not deserve and did not earn. While some might claim this would hurt them, that would be an absurd response. It would be like getting a free cake and then being a little bitch because one did not get a thousand free cakes simply for being born.

As always, the devil is in the details. As noted in other essays, I am not proposing that inheritance be eliminated, nor am I arguing in favor of the state taking your grandma’s assault rifle collection and giving it to a poor family. The general idea is that inheritance should be taxed, and the tax rate should be the result of careful consideration of all the relevant factors, such as the average inheritance in the United States. The plan could also involve increasing the tax rate gradually over time, to reduce the “pain” and thus the fervor of the opposition. In any case, a rational and fair proposal would take considerable effort to design but would certainly be worth doing if we want to be serious when we speak of fairness and opportunity in the United States.

While my criticisms of inheritance might seem silly and, even worse, leftist, it is in perfect accord with professed American political philosophy and the foundation of capitalism. Our good dead friend Thomas Jefferson said, “A power to dispose of estates forever is manifestly absurd. The earth and the fulness of it belongs to every generation, and the preceding one can have no right to bind it up from posterity. Such extension of property is quite unnatural.” The Moses of capitalism, Adam Smith, said that “There is no point more difficult to account for than the right we conceive men to have to dispose of their goods after death.” As such, opposition to inheritance is American, conservative, and capitalistic. But this provides no reason to accept my view. What I will advance in this essay is an argument by intuition against inheritance using a fictional town called “Inheritance.”

Imagine, you have been hired as an IT person for Heritage, a company in the town of Inheritance. You pack up your belongings and drive to the town. You spend the first week getting up to speed with the company and are not at all surprised when you find that the top officers of the company are all family members and that the current owner is the son of the previous owner. You are, however, surprised to find out that almost everyone else who works for the company inherited their job. You are one of the few exceptions because the previous IT person quit, and their daughter did not want to inherit the job. This strikes you as rather odd. But a job is a job, and you are happy to be employed.

You learn that the town has an upcoming founders’ day celebration, and you sign up for the 5K. You find it a bit odd that the race entry form asks you for your best inherited 5K time, but chalk that up to some fun with the town’s name. You run an 18:36 5K and the next closest person crosses the line at 26:22. Since an iPad Pro is the first-place prize, you eagerly head towards the race director when she starts announcing the overall winner. You are shocked when the winner is the mayor of the town, Lisa Heritage. She ran a 28:36, so you beat her by 10 minutes. Her time is announced as 17:34, which you know is not true. Thinking this is all a joke, you ask what is going on. Realizing you are the new guy in town, they explain that a runner’s place in the race is determined by either their actual time or their best inherited time. The mayor’s mother, who was also mayor, was a good runner and her PR in college for the 5K was 17:34. Her daughter inherited the time and can use it for the race. A bit upset by all this, you make a bad joke about her running for office. The locals laugh and say that all political offices are inherited. There is only an election if no one who can inherit the office wants it. You hope this is all some sort of elaborate prank.

During the winter you get a sinus infection and go to the town doctor. Looking at the medical degree on the wall, you see it was from Ohio State and the graduation date is 1971. Expecting to see an older doctor, you are surprised to see someone in their early twenties. You ask them about medical school, and they say they inherited the degree and the office. Incredulous, you ask if they have any medical training at all. They indignantly inform you that they have been practicing medicine since they were 17, when they inherited the degree. Looking up your symptoms on Google, they prescribe some antibiotics and send you on your way. What a strange town, you think…but a job is a job and it is not like you inherited a fortune.

In making this appeal to intuition, I am assuming that while you probably accept the inheritance of wealth and property (including businesses), you probably do not believe that all jobs should be inherited. You also probably reject the idea that political offices, race times, and degrees should be inheritable. But allowing inheritance of property and wealth while rejecting the other sorts of inheritance seems inconsistent. This is clear when it comes to inheriting jobs: while people generally accept it when family members inherit positions in a company (such as being the owner), it would be odd for the other jobs to be inheritable. Having a hereditary IT department would, for example, be odd.  The challenge is defending one while rejecting the other; at least if one wants to hold onto inheritance. 

While families do pass on political influence (the Clintons and Bushes were good examples of this), most Americans reject the idea of inheriting political offices. We did, after all, have a revolution to be done with kings and queens. There are reasons for taking this view that are grounded in democratic values; but there is also the idea that inherited offices would lead to harm arising from the concentration of power and lack of accountability. There is also the fact that such positions would be unearned. While democratic values would not apply to inherited wealth and property directly, concerns about concentrated power and unfair advantages would still apply.

In the case of the 5K, the fictional mayor did not earn her victory. But the same also applies to inheriting ownership of the company: the heirs did not earn their position; it was a matter of the chance of birth. The matter of inheriting degrees is also clear: a degree is supposed to reflect that a person has learned things and thus has knowledge and skills. While this is not always true, an inherited degree is utterly unearned and provides no skills or knowledge. Allowing people to inherit degrees, especially in fields like medicine and engineering, would be disastrous.

One way to reply would be to bite the bullet and accept inheritance broadly. After all, human history has centuries of examples of cultures in which wealth, property, offices, and other things were inherited. This would be a consistent position.

A better approach would be to try to break the analogies—to argue that inheriting wealth and property differs in relevant ways that would defend it from the absurdity of other inheritances. I will, obviously, leave that task to the defenders of inheritance and will certainly accept good arguments with plausible premises that support this view.

Back in the 1980s I played Advanced Dungeons & Dragons. When you start out as a new character in the game you roll to see how much gold you get. You use that gold to buy your equipment, such as your sword and chain mail or mace and holy symbol. While the starting gold varies by character class, there were no differences in the economic classes of the characters. For example, all starting fighters rolled 5d4 and multiplied that roll by 10 to determine their gold.  For role-playing purposes, a player could make up their character’s background, including their social and economic class but it had no impact on their starting gold. D&D has largely stuck with this system and the Player’s Handbook does not have an economic class-based system of starting gold.

Like many other players, my AD&D group experimented with adding economic classes: a player rolled on a table to determine their social and economic class. For example, a player character might be a peasant. The upper classes started with more gold and even some valuable items, and some players proposed allowing new characters to start out with magic items.

While initially popular, rolling for economic class did not last long. A player who rolled badly on the table could not afford even their basic gear while a player who rolled exceptionally well started off with an abundance of gold and gear. We did tinker a bit with the tables, but we quickly returned to the standard rules. People could still roll badly or well, but it lacked the extremes of an economic class table.
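To make the contrast concrete, here is a minimal simulation sketch in Python. The standard fighter roll (5d4 × 10 gp) is the one mentioned above; the economic class table, its class breakdown, and its payouts are invented purely for illustration and are not from any published rules or from the table my group actually used.

```python
import random

def standard_fighter_gold():
    # Standard AD&D starting gold for a fighter: 5d4 x 10 gp.
    return sum(random.randint(1, 4) for _ in range(5)) * 10

def class_table_gold():
    # Hypothetical house-rule table (values invented for illustration):
    # roll d100 for economic class, then determine starting wealth in gp.
    roll = random.randint(1, 100)
    if roll <= 40:
        return random.randint(1, 5)                 # peasant: a handful of coins
    elif roll <= 90:
        return standard_fighter_gold()              # commoner: the standard roll
    else:
        return 1000 + random.randint(1, 10) * 100   # wealthy heir: gold and heirlooms

random.seed(1)
standard = [standard_fighter_gold() for _ in range(10_000)]
classed = [class_table_gold() for _ in range(10_000)]
print("standard roll:", min(standard), "to", max(standard), "gp")
print("class table:  ", min(classed), "to", max(classed), "gp")
```

The point of the sketch is simply that the standard roll stays within a narrow band (50 to 200 gp), while even a moderate class table produces characters who cannot buy basic gear alongside characters who start out rich.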

The main reason, obviously enough, for abandoning the economic class table was that it ended up being unfair: while everyone did get to roll, the chart could put people at a disadvantage or grant an unfair advantage at the start of the campaign. And this was with a relatively moderate chart. Another DM I knew ran a campaign using a more extreme table; a player who rolled well could start out with a keep and henchmen. A player who rolled badly could start out as a peasant with but a few coppers and be unable to buy the gear they needed. The player who started with the keep was quite happy, the peasant player far less so. But let us think a bit about using such tables.

Imagine you are playing in a D&D campaign that uses an economic class table based on the United States, with fantasy names in place of the modern economic class names. The DM is an economic nerd, so they work out the value of a gold piece relative to a dollar, work out the average wealth for each age group in each economic class, work out the percentage of the population in each group and so on.  From all this they create a chart that determines your character’s starting wealth based on what you inherit. As in the real world, what economic class you are born into is random. As such, you might roll well and start off with thousands of gold pieces. Or you might have a bad roll and start with a few copper pieces.

Imagine that you are playing a fighter and roll badly, starting with a few coins. You can buy a dagger, a shield and a sack, and that is it. Another player, a magic user, rolls exceptionally well and starts with 70,000 gp. She can buy all the gear she needs and, if the DM allows, much more. Suppose that the other players start off with enough to buy basic gear. If the magic user is good aligned or understands the importance of having a properly equipped party, she will share her wealth. She might buy you a longsword, plate armor, and other gear so you can keep the monsters from killing her. If you are all working together and share your starting gold, then this is fine: an advantage for one party member is an advantage for all. But suppose your magic user is selfish and wants to keep all her starting wealth for herself. While that could be a problem, if your party is still otherwise working together, it will not be too bad: if you get your share of the XP and loot, you will eventually even things out.

But suppose the DM is running a campaign in which the players are competing. Rather than working together as a party, you are competing to kill monsters and loot dungeons. You set out with your dagger and shield and stab a few goblins. The magic user hires several NPCs to assist her, and they enable her to slaughter her way through a small dungeon, adding the treasure to her already considerable wealth. She hires more NPCs, gets better gear, and continues to tear her way through dungeon after dungeon. Meanwhile, you are still struggling. You eventually kill and loot enough goblins to buy a longsword and leather armor and move on to ambushing lone orcs. The magic user has moved on to slaughtering giants. Finally, you can take on a bugbear, as the magic user and her hired NPCs take down a blue dragon. The dragon hoard gives her even more wealth and power; she buys more magic items and hires more NPCs. When you are finally ready to face an ogre, she is smashing ancient red dragons and liches. She soon has a kingdom of her own and is about to start a war with another nation to gain even more wealth. The campaign comes to an end; she has won easily. The DM then announces a new campaign will start: your new character will be the child of your previous character, inheriting their meager wealth and starting the new campaign with that. The person who played the magic user starts out with her new character being a prince or princess, inheriting the wealth of the kingdom. Just imagine how that will go. The first campaign was extremely unfair; the second will take the unfairness to a new level.

As noted, in a cooperative D&D game, where everyone is working together and sharing resources for the good of the party, the starting advantage of one is an advantage for all. In a competitive game, a significant disparity in starting wealth provides an unearned and unfair advantage. Players in such a campaign would rightfully complain and insist on having a fair start. Otherwise, the game would be rigged in favor of the player who rolls the best at the start. The same is true of significant inheritance in the real world. A large inheritance provides a considerable unearned advantage.

It could be argued that a skilled player could overcome a bad starting roll, and an incompetent player could stupidly throw away their advantage. While this is true, the odds would be heavily against the bad roller and heavily in favor of the good roller. That a truly exceptional person with a bad start could beat a truly incompetent person with a vast lead hardly shows that the situation is fair. Now, someone might want to play the unfair game and might even get all the other players to accept it. But it would be absurd to say it is fair or that it allows for true competition. The same applies to the United States: we can accept an unfair system of inherited wealth, but we cannot claim that the game is fair or allows for true competition. Sure, some rise from humble origins and achieve fairy tale levels of success, going from peasant to merchant prince. Some start with all the advantages and waste them, starting with silver spoons and ending up with plastic sporks. But most finish in accord with their start, the game playing out as one would expect.

While Republicans defend inherited wealth, a principled conservative should want to reform inheritance, perhaps even radically. I will base my case on professed conservative principles about welfare. My use of the term “welfare” will be a sloppy, but necessary shorthand. After all, there is no official government program called “welfare.” Rather, it is a vague term used to collect a range of programs and policies in which public resources are provided to people. Now on to the conservative arguments against inheritance.

Way back in April 2020, Senator Lindsey Graham argued that public financial relief for the coronavirus would incentivize workers to leave their jobs. While making this argument during a pandemic was new, it is a stock argument used against welfare. Rod Blum, a Republican representative from Iowa, said “Sometimes we need to force people to go to work. There will be no excuses for anyone who can work to sit at home and not work.” Donald Trump, whose fortune was built on inheritance, has said that “The person who is not working at all and has no intention of working at all is making more money and doing better than the person that’s working his and her ass off.” While this might sound like a description of Trump, it was his criticism of welfare. In general terms, the conservative argument is that if a person receives welfare, then they will not have an incentive to work. Since this is bad, welfare should be restricted or perhaps even eliminated.

Conservatives also advance utilitarian arguments against welfare, arguing it is bad because of the harm it causes. In addition to allegedly destroying the incentive to work, it is also supposed to harm the moral character of the recipient and, on a larger scale, create a culture of dependency and a culture of entitlement. If we take these arguments seriously, then they would also tell against inheritance. In fact, this is an old argument in philosophy.

Mary Wollstonecraft contends that hereditary wealth is morally wrong because it produces idleness and impedes people from developing their virtues. This mirrors the conservative arguments against welfare, and conservatives should, if they are consistent, agree with Wollstonecraft.

Conservatives also profess to favor the free market, meritocracy and earning one’s way. They speak often of how people should pull themselves up by bootstraps. In accord with these professed values, they oppose programs like affirmative action, free school lunch,  and food stamps. The usual arguments focus on two factors. The first is that such programs provide people with unearned resources and this is wrong. The second is that such programs provide people with an unfair advantage over others, which is also wrong. It is obvious that the same reasoning would apply to inheritance.

Inheritance is unearned. So, if receiving unearned resources is wrong, then inheritance would also be wrong. It could be countered that people can earn an inheritance, that it might be granted because of their hard work or some other relevant factor. While such cases would be worth considering, earning it by hard work is not the way one qualifies for an inheritance. However, an earned inheritance would certainly not be subject to this argument.

Disparities in inheritance also confer unearned advantages. For example, suppose that both of us want to open a business in our small town, which can only support one business of that kind. For the sake of the argument, let us assume we are roughly equal in abilities, so with a fair start there is about a 50% chance that either of us would win the competition. But suppose that I inherit $100,000 and you start out with a $1,000 loan from your parents. This provides me with a huge advantage. I can purchase more and better equipment. I can get a better location for my business. I can out-advertise you. I can bleed your business to death by taking a loss you cannot sustain. I will not say it is impossible for you to beat me, and I can imagine scenarios in which I fail. For example, the townsfolk might rally to support you and boycott me because of my unfair advantage. But barring such made-for-TV miracles, I will almost certainly win.

Even if we were not in direct competition, I would still have a huge unearned advantage over you. If you decided to go to the next town over and I wished you well, I would still be more likely to succeed than you because my inheritance advantage would be considerable.  If receiving unmerited advantages is wrong, then significant inheritance would be wrong as well.

Since conservatives generally profess to loathe welfare and love inheritance, they would need a principled way to break the analogy between the two. There are ways to do this.

An argument can be built on claiming that inheritance is a voluntary gift of resources, while welfare involves taking tax money from people who do not want it to be used that way. The obvious reply is that if we vote for welfare (either directly or through representatives), then it is voluntary. This, of course, leads into the broader area of democratic decision making. But if we accept democracy and our democracy accepts welfare, then we agree to it in the same way we agree to any law or policy we might not like.

Another argument can be made by pointing out that inheritance often goes to relatives while welfare does not. But this is not relevant to the argument that welfare is bad because of its harm. After all, it is getting money that one has not earned that is supposed to be the problem, not whether it was given willingly or by a relative. One could try to argue that resources given by relatives are special and will not make people lazy while state resources will, but that seems absurd. Some will be tempted to argue that those who inherit wealth tend to be a better sort of people, but this seems an unreasonable path to follow.

Another argument can be made asserting that inherited wealth is earned in some manner while welfare is not. While this has some appeal, it falls apart quickly. First, some people do earn some of their welfare (broadly construed) by paying for it when they are working. For example, if Sally works for ten years paying taxes and gets fired when her company moves overseas, then she is getting back money from a system she contributed to. So, Sally earned that welfare.  Second, if a person did work for their inheritance, it is not actually an inheritance, but something earned. If, for example, someone worked in the family business for pay or shares in the company, then they have earned their pay or shares. But merely working there does not, obviously, entitle a person to own the business after the death of the current owner. Otherwise, the workers should all share in the inheritance. So, this sort of argument fails.

It might be pointed out that if someone opposes inheritance, then they must oppose welfare. One reply is to accept this. If welfare makes people idle and inflicts moral harm, then it would be as bad as inheritance and should be limited or eliminated. A second reply is to argue that welfare helps people in need and is analogous to family helping family in times of trouble rather than being analogous to inheritance, in which one simply receives regardless of need or merit.

To pre-empt some straw person attacks on my arguments, my view is not that inheritance should be eliminated. It would be absurd to argue that Sally cannot inherit her grandmother’s assault rifle collection. It would be foolish to argue that Sam cannot inherit his mom’s cabin where he learned to hunt deer. Rather, my view is consistent with the conservative arguments: inheritance should be reduced to a level that does not cause harm to those inheriting it and does not confer an unfair advantage. While a full theory of inheritance would require a book to develop, the core of my view is that inheritance should be taxed in a progressive manner and the tax income should be used to increase fair competition. A good place to start would be funding public schools. Funding low-interest loans for poor people creating businesses would also be a good option. This has the appeal that it takes nothing away from anyone who is still alive; people would merely get less unearned wealth. To use an analogy, it is like a tax on lottery winnings: people would just get less for winning and would not be losing anything they earned. In this way, a tax on inheritance is morally better than an income tax on wages, which takes away from what someone has earned.
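To give a concrete sense of what “taxed in a progressive manner” means here, the following is a minimal sketch of a bracket-style calculation in Python. The brackets and rates are invented placeholders for illustration, not a policy proposal; working out the actual numbers would be part of the book-length theory mentioned above.

```python
# Hypothetical progressive inheritance tax: each portion of an estate is taxed
# at the rate of its bracket. Brackets and rates are invented for illustration.
BRACKETS = [
    (1_000_000, 0.00),     # first $1M passes untaxed
    (5_000_000, 0.30),     # portion between $1M and $5M taxed at 30%
    (float("inf"), 0.60),  # portion above $5M taxed at 60%
]

def inheritance_tax(estate: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if estate > lower:
            tax += (min(estate, upper) - lower) * rate
        lower = upper
    return tax

# Under these made-up rates, a $10M estate pays 0 + 4M * 0.30 + 5M * 0.60 = $4.2M,
# leaving the heirs $5.8M they did not earn; a $500K estate pays nothing.
print(inheritance_tax(10_000_000))  # 4200000.0
print(inheritance_tax(500_000))     # 0.0
```

The design point is simply that the untaxed bottom bracket leaves modest bequests (the cabin, the rifle collection) untouched, while the higher brackets reduce the largest unearned windfalls.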

It could be argued, as the Republicans often do, that taxing inheritance would be a double tax: the money or property is taxed, then taxed again when it is inherited. While this seems a clever argument, it has two obvious problems. The first is that the person inheriting the money or property is taxed only once. They do not pay a tax when the person dies and then pay another tax when they inherit it. Second, this alleged “double tax” is not unique to inheritance. When I make money, tax is taken out. When I buy a book on Amazon sold by a friend, I pay sales tax. When my friend gets their royalty, they pay a tax on that. If they buy a book of mine with the money they got from me buying their book, the money is taxed again when they pay the sales tax. Then I am taxed again when I pay my tax on my income. This is not to say that all this taxing is good, just that the notion that inheritances are subject to a unique and crazy double tax is absurd.

When people are accused of being racist, they sometimes use a two-part strategy. First, they can deny the accusation. They can even pretend they do not understand how language works or have any recollection of history. Second, they can accuse their critics of being racists. This strategy is commonly used to “refute” criticism of sexism and racism. But does it have any merit as an argument? To assess it, I will use the principle of charity and try to make the best version of the argument.

One common opening move in the argument is to deny or downplay discrimination against minorities. Since I am trying to make the most plausible version of this argument, this version will not deny that discrimination and racism did exist. After all, the claim is not that racism and discrimination never existed, just that it either does not exist now or is far less bad than critics claim.

When making this case, the most plausible way is to point to civil rights and anti-discrimination laws. And, of course, one must mention President Obama. When critics point to modern examples of discrimination and racism, the counter is that while modern cases exist, they are rare. For example, one might acknowledge that there are racist police, but they are “a few bad apples.” One might accept that there are racists who say and do racist things, but they are a small number and when they discriminate or use violence, their actions are illegal. Because of this, one would argue, it is fair to criticize the specific racist cop or condemn that particular white supremacist who committed murder. But to speak of systematic racism would be to speak of something that, one might claim, no longer exists.

So, if there is no systematic racism, then those who criticize it are criticizing an illusion. Worse, the argument goes, those who speak of systematic racism involving whites are unfairly and wrongly accusing white people of crimes they are not committing. To illustrate, to speak of white privilege is to claim that white people enjoy advantages over minorities and is to accuse them of being engaged in systematic racism. But since there is no systematic racism on the part of whites, the accuser must be the real racist. After all, they are acting on an unfounded prejudice and attacking people based on their being white.  Thus, those who accuse white people of being bigots and racists are alleged to be the real bigots and racists.

One weak point of this version is its claim that racism is limited to a “few bad apples”: there is abundant evidence that discrimination exists and involves far more than a few bad apples. But this weakness can be addressed by a more plausible version of the argument. In this version, it is accepted that racism against minorities does exist, but racism against white people is claimed to be on par with it. Roughly put, if it is implausible to deny the existence of racism against minorities, one can instead argue that whites are now equal (or greater) victims of racism.

On this revised version, a white person could accept that racism exists but insist that they are not a racist and that they are a victim of racism against whites. They would be victims, one infers, because they have been accused of being racist because they are white and not because of evidence that they are racists. If they wished to go beyond defending themselves, they could contend that whites in general are not racist and hence the critics of racism (against minorities) are bigoted against whites, condemning all whites because of some racist whites. But is this a good argument?

When assessing any argument, there are two general questions. The first is: “is the reasoning good?” The second is: “are the premises true or at least plausible?” One can reason well with untrue claims, reason badly with true claims, and so on. As such, we need to assess the above reasoning both in terms of the quality of the logic and the plausibility of the claims.

On the face of it, the logic of the argument mirrors good arguments about racism and discrimination:

 

Premise 1: Person P is accused of being X by person Q.

Premise 2: The only evidence given by Q for P being X is that they are of race R.

Premise 3: P’s being R is irrelevant to proving P is X.

Conclusion: Q is being a racist.

 

As an example:

 

Premise 1: Barry is accused of being a racist by Karen.

Premise 2: The only evidence given by Karen for Barry being a racist is that Barry is white.

Premise 3: Barry’s being white is irrelevant to proving Barry is a racist.

Conclusion: Karen is being a racist.

 

But what if someone presents evidence that P is X that is not just based on race?

 

Premise 1: Person P is accused of being X by person Q.

Premise 2: The evidence given by Q for P being X is E (which is not based on P’s race).

Premise 3: P denies E.

Conclusion: Q is being a racist.

 

This is clearly bad logic; although it also does not follow that Q is not being racist—it neither proves nor disproves this. To illustrate:

 

Premise 1: Barry is accused of being a racist by Karen.

Premise 2: The evidence given by Karen for Barry being a racist is an abundance of racist tweets, statements, policies, actions and so on.

Premise 3: Barry denies the evidence and says he is not a racist.

Conclusion: Karen is being a racist.

 

This is also bad logic; denying the evidence does not prove that Karen is a racist. She could, of course, be a racist—but this bad logic does nothing to prove it.

What some people accused of racism do, it seems, is try to run the first argument. This would be smart, since the reasoning seems solid. Those critical of a racist would contend that the racist cannot use the first argument because the second premise is false. Instead, a racist can only use the second argument, in which they deny they are a racist and claim their critics are racist.

The battle, as one would expect, comes down to the truth of the claims rather than the logic. For the “you’re the racist” defense to be a good argument (good logic and plausible premises), its proponent would need to establish key claims. The exact claims would depend on which specific strategy is being used. Those who claim that racism against minorities no longer exists would need to prove that. This seems unlikely given the body of existing evidence. Those who claim that it is not as bad as is claimed by expert critics (and not straw people) would need to prove that. Those who claim that discrimination against whites exists and that it is comparable to racism against minorities would need to prove that. An alleged racist would also need to show that the evidence presented that they are racist does not support this claim. Finally, they would need to show that those accusing them of racism are acting from bigotry against whites. Just showing that they are not a racist would not show that those accusing them are. The accusers could be wrong, but it would not follow that they are racist.

The overwhelming evidence is that we white people are not the victims of racial discrimination, despite the claim that we are the real victims of racism. White people are, however, often right to see themselves as victims—of Trump’s policies. Most white people do face challenges: corporations have moved jobs overseas, wages have been stagnant, health care is expensive, an opioid epidemic has been ravaging America, and the grotesque mismanagement of the pandemic did incredible harm. But these are not the result of white people being white nor of minorities discriminating against whites. Rather, they are the result of the political, economic and social system that has been crafted over the decades—one that hurts everyone who is not rich enough to fare well in this dystopia.

In another way, the matter is also resolved: the lines are drawn, the hats are on and few are switching teams at this late date.

In response to a video I did on D&D and racism, a viewer posted “yet another racist feeling guilt trying to project their racism onto others, but this one attempting to use logic and his “appeal to superiority” with his college knowledge…” I do not know whether this was sincere criticism or trolling, but the tactics are common enough to be worth addressing.

There is a lot going on in that single sentence, which is itself a rhetorical tactic analogous to throwing matches in a dry forest. Throwing matches is quick and easy; putting out the fires takes time and effort. But if they are not addressed, the “match thrower” can claim they have scored points. This creates a nasty dilemma: if you take time to respond to these matches, you are using way more time than the attacker, so even if you “win” you “win” little because they have invested so little in the attacks. If you do not respond, then they can claim victory. While this would also be an error on their part since a lack of response does not prove that a claim is correct, it could give them a rhetorical “victory.”

The references to using logic and “college knowledge” seem to be a tactic I have addressed before, which is the “argument against expertise.” It occurs when a person rejects a claim because it is made by an authority/expert and has the following form:

 

Premise 1: Authority/expert A makes claim C.

Conclusion: Claim C is false.

 

While experts can be wrong, to infer that an expert is wrong because they are an expert is absurd and an error in reasoning. This can be illustrated by a person concluding that there must be nothing wrong with their car solely because an expert mechanic said it had an engine issue. That would be bad reasoning.

The person is also using an ad hominem and a straw man attack. In the video I explicitly note that I am giving my credentials to establish credibility and note that I should not be believed simply because I am an expert in philosophy and gaming: my arguments stand or fall on their own merit. As such, the “appeal to superiority” is unfounded but provides an excellent example of combining a straw man with an ad hominem.  These are common bad faith tactics, and it is wise to know them for what they are. I now turn to the focus of this essay, which is the tactic of accusing critics of racism of being the real racists.

The easy part to address is the reference to guilt arising from being racist. Even if someone is motivated by guilt, this is irrelevant to the truth of their claims, and so this is just another ad hominem attack. As for projecting racism, this is just part of the claim that the critic of racism must be racist. While the accusation of racism can be seen as a rhetorical device, there does seem to be an implied argument behind it, and some take the time to develop an argument for their accusation of racism. Let us look at some versions of this argument:

 

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Conclusion: Person A is a racist because of C.

 

While not a specific named fallacy, the conclusion does not follow from the premise. Consider the same sort of logic, which is obviously flawed:

 

Premise 1: Person A makes criticism C about an aspect of corruption or a corrupt person.

Conclusion: Person A is a corrupt person because of C.

 

Being critical of corruption or a corrupt person does not make you corrupt. While a corrupt person could be critical of corruption or another corrupt person, their criticism is not evidence of corruption. Two other bad arguments are as follows:

 

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Premise 2: Person A is a racist because of C.

Conclusion: Criticism C is false.

 

This is obviously just an ad hominem attack: even if A was a racist, this has no bearing on the truth of C. Consider an argument with the same sort of reasoning:

 

Premise 1: Person A makes criticism C about an aspect of corruption or corrupt person R.

Premise 2: Person A is a corrupt person because of C.

Conclusion: Criticism C is false.

 

This is quite evidently bad logic; otherwise, anyone who criticized corruption would always be wrong.

 

A variant, equally bad, is this:

 

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Premise 2: Person A is a racist because of C.

Conclusion: R is not racist.

 

While not a named fallacy, it is still bad logic: even if person A were a racist, it would not follow that R is not. Once again, consider the analogy with corruption:

 

Premise 1: Person A makes criticism C about an aspect of corruption or corrupt person R.

Premise 2: Person A is a corrupt person because of C.

Conclusion: R is not corrupt.

 

Again, the badness of this reasoning is evident: if it were good logic, any accusation of corruption would be automatically false. At this point it can be said that while these bad arguments are in actual use, perhaps there are good arguments that prove that being critical of racism or racists makes a person a racist, or that prove their criticism false.

I do agree that there are cases in which critics of certain types of racism are racists. An obvious example would be the Nation of Islam: they assert, on theological grounds, that blacks are innately superior to whites. Someone who believes this could be critical of racism against themselves and they would be a racist criticizing racism (of a specific type). But it is not their criticism of racism that makes them racist; it is their racism that makes them racist.

What is needed is an argument showing that being critical of racism makes someone a racist. That is, if the only information you had about a person was the full text of their criticism, you would be able to reliably infer from the criticism that they are racist. Obviously enough, if the criticism contained racism (like a Nation of Islam member criticizing white racism because of their view that blacks are inherently superior to whites), one could do this easily. But to assume that every criticism of racism must contain racism because it is a criticism of racism would beg the question. Also, pointing to racists who make a criticism of racism and inferring that all critics who make that same criticism are thus racists would be to fall into the guilt by association fallacy. And, of course, even if a critic were racist, it would be an ad hominem to infer that their criticism is thus false. A racist can rightfully accuse another racist of racism.

While the “ideal” argument would show that all criticisms of racism make one racist (and, even “better,” disprove the criticism), such an argument would be suspiciously powerful: it would show that every critic of racism is a racist and perhaps automatically disprove any criticism of racism. Probably the best way to make such an argument is to focus on showing that being critical of racism requires criticizing people based on their race and then making a case for why this is racist. The idea seems to be that being critical of racism requires accepting race and using it against other races (or one’s own), thus being racist. But this seems absurd if one considers the following analogy.

Imagine, if you will, a world even more absurd than our own. In this world, no one developed the idea of race. Instead, people were divided up by their earlobes. Broadly speaking, humans have two types of earlobes. One is the free earlobe: the lobe hangs beyond the attachment point of the ear to the head. The other is the attached earlobe: it attaches directly to the head. In this absurd world, the free lobed were lauded as better than the attached lobed. Free lobed scientists and writers asserted that the free lobed are smarter, more civilized, less prone to crime and so on for all virtues. In contrast, the attached lobed were presented as bestial, savage, criminal, stupid and immoral. And thus, lobism was born. The attached lobed were enslaved for a long period of time, then freed. After that, there were systematic efforts to oppress the attached lobed, though progress could not be denied. For example, a person with partially attached lobes was elected President. But there are still many problems attributed to lobism.

In this weird world some people are critical of lobism and argue that aside from the appearance of ear lobes, there is no biological difference between the groups. Would it make sense to infer that their criticism of lobism entails that they are lobists? That they have prejudice against the free lobed, discriminate against them and so on? Does it mean that they believe lobist claims are real: that the lobes determine all these other factors such as morality, intelligence and so on? Well, if critics of racism must be racists, then critics of lobism must be lobist. If one of us went into that world and were critical of lobism, then we would be lobists. This seems absurd: one can obviously be critical of lobism or racism without being a lobist or racist.

As noted in previous essays, Wizards of the Coast (WotC) created a stir when they posted an article on diversity and D&D. The company made some minor changes to the 2024 version of the game, which generated some manufactured controversy. The company took the approach of “portraying all the peoples of D&D in relatable ways and making it clear that they are as free as humans to decide who they are and what they do.” They also decided to make a change that “offers a way for a player to customize their character’s origin, including the option to change the ability score increases that come from being an elf, a dwarf, or one of D&D’s many other playable folk. This option emphasizes that each person in the game is an individual with capabilities all their own.”

While the AD&D Monster Manual allowed individual monsters to vary in alignment and Dungeon Masters have always broken racial stereotypes in their campaigns, there has also been a common practice to portray races and species in accord with established in-game stereotypes. Drow and orcs are traditionally monstrous and evil while elves and dwarves are usually friendly and good.

AD&D also established the idea that fantasy races have specific physical and mental traits, setting minimum and maximum scores for the game stats. For example, half-orcs have a maximum Intelligence score of 17, a Wisdom score limit of 14, and a highest possible Charisma of 12. The game also divided characters by sex; females of all the races could not be as strong as the males. A PC’s race also limited what class they could take and how far they could progress. Going back to half-orcs, they could not be druids, paladins, rangers, magic users, illusionists, or monks. They could be clerics, fighters, or thieves, albeit with limits on their maximum level. They were, however, able to level without racial limits as assassins. This is why AD&D players are suspicious of half-orc PCs; they are probably evil assassins. As a side note, the only PCs I have killed as a player have been half-orc assassins who tried to assassinate me. Given that race has been such an important part of D&D, it is no wonder the changes upset some players.
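To make those old restrictions concrete, here is a minimal sketch in Python, offered purely as an illustration of how the half-orc limits just described might be represented as data. The structure and names are my own invention for this essay; they are not actual game data or code from any edition of D&D:

# Illustrative only: a rough representation of the AD&D half-orc limits
# described above (ability score caps and class restrictions).
half_orc = {
    "ability_maximums": {"intelligence": 17, "wisdom": 14, "charisma": 12},
    "allowed_classes": {
        "cleric": "level capped",
        "fighter": "level capped",
        "thief": "level capped",
        "assassin": "unlimited advancement",
    },
}

def can_play(character_class: str) -> bool:
    # True only if the class is open to half-orcs in this sketch.
    return character_class in half_orc["allowed_classes"]

print(can_play("paladin"))   # False: paladins are closed to half-orcs here
print(can_play("assassin"))  # True: assassins advance without racial limits

The point of the sketch is simply that race, under the old rules, functioned as a hard constraint on what a character could be, which is why removing or loosening those constraints felt like a big change to some players.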

While some assume all critics of the changes are racist, I will not make that mistake. There are good, non-racist arguments for not changing the game. The problem is that racists (or trolls using racism) also use the same arguments. A difference between the two, aside from the racism, is that honest critics are arguing in good faith while racists (and trolls using racism) are arguing in bad faith. The main distinction is in their goals: a good faith critic opposes the changes for reasons they give in public. Those arguing in bad faith conceal their true motives and goals.

Some claim the people making the bad faith arguments are probably just trolls and not racists. But this distinction does not matter. Consider the following analogy. Imagine Sally takes communion at church. The wine tastes odd and later someone Tweets at her “did u like the atheist piss in ur blood of Christ? Lol!” Consider these three options. First, the person does not have a real commitment to atheism and is just trolling Sally to get a reaction. Second, the person hates Sally personally and was out to get her. Third, the person is an atheist who hates religious people and went after Sally because she is religious.

On the one hand, the person’s motives do not really matter: Sally still drank their urine. That is, the harm done does not depend on why it is done.  On the other hand, one can debate the relative badness of the motivations—but this does not seem to change the harm. Going back to racism, the person’s motivation does not matter in terms of the harm they cause by defending and advancing racism. Now, to the argument.

A good-faith argument can be made by claiming there is in-game value in having distinct character races, such as allowing people different experiences. Just as having only one character class would be dull, having only one basic race to play would also be dull. So, just as the classes should be meaningfully different, so too should the fictional races. While there are legitimate concerns about how racists can exploit the idea that races differ in abilities, it can also be argued that people understand the distinction between the mechanics of the fantasy world and reality. It can also be argued that players can accept fantasy races as different without sliding down a slippery slope into real-world racism. One could even make a positive argument: people playing the game get accustomed to fictional diversity and recognize that PCs of different types bring different strengths to the party, something that extends analogically to the real world.

Unfortunately, this same sort of argument can be used in bad faith. One tactic is to use this argument, slide into alleged differences between real people, and then slide into actual racism. As a concrete example, I have seen people begin with what seems to be a reasonable discussion of D&D races that soon becomes corrupted. One common racist (or troll) tactic is to start by bringing up how D&D has subraces for many PC races. There are subraces of elves, dwarves, halflings and others that have different abilities. The clever racist (or troll) will suggest there should be human subraces in the game. On the face of it, this seems fine: they are following what is already established in the game. At this point, the person could still be a non-racist who likes the idea of fantasy subraces and thinks it would be cool to have different options when they play a human. But the racist will move on to make references to real-world ethnic groups, asking how one would stat whites, Asians, African-Americans and so on. The person can insist that they are just following the logic of the game, and they seem to be right. After all, if the game has many subraces with meaningful differences, then the same could apply to humans. And this is exactly how a racist can exploit this aspect of the game. A persuasive racist can convince people that they never moved from discussing D&D into racism, and they can use the honest critics as cover. This shows why the change has merit: it could deny racists a tool.

Being an old school gamer, I do like the idea of distinct races in games because of the variety they offer for making characters. While I do not want to yield this to the racists, I can see the need for a change to counter them. This would be yet another thing made worse by racism.

A second argument is a reductio ad absurdum. The idea is to assume that something is true and then derive an absurdity or contradiction from this assumption, which shows that the assumption is false. In the case of races in D&D, some people have claimed that the proposed approach would logically lead to all creatures in the game being the same. One person, I recall, asserted that the proposed changes entail that tigers and beholders would have the same stats. Another person joked (?) that this would also mean that gnolls would be “friendly puppers.” The idea was, of course, to show that assuming the changes should be accepted would lead to absurd results: no one wants all monsters to have the same stats and no one wants all the game creatures to be good.

While this could be a good faith argument, there are some concerns. One is that reducing the changes to absurdity in this manner seems to require using the slippery slope fallacy, or at least hyperbole and the straw man fallacy. No one is seriously proposing to give all monsters the same statistics or to make them all morally good. In terms of the slippery slope, no reason has been given to think that WotC would take the changes to these absurd extremes. At best these would be poor good faith arguments. Depending on where a person goes with them, they could also be bad faith arguments; after all, they mirror the real-world racist arguments that claim it is absurd to think everyone is perfectly equal and then argue for racism.

I obviously do not think that all monsters should have identical stats nor that all monsters should be good. But this is consistent with the changes and one can easily adopt them and avoid the slippery slope slide into absurdity. In closing, whatever changes WotC makes to D&D, they have no control over what people can do in their own campaigns.

When the culture war opened a gaming front, I began to see racist posts in gaming groups on Facebook and other social media. Seeing these posts, I wondered whether they were made by gamers who are racists, racists who game, or merely trolls (internet, not D&D).

Gamers who are racists are actual gamers who also happen to be racists. Racists who game (or pretend to game) are doing so as a means to recruit others into racism. While right-wing hate groups recruit video gamers, there seems to be no significant research into recruitment through tabletop games like D&D. My discussion does not require any racists who game; all that is needed is gamers who are racist. Unfortunately, you can easily find them on social media.

An easy way to summon racists is to begin a discussion of diversity in gaming or to mention the revised 2024 rules. But surely there are non-racists who disagree with diversity in gaming and the changes WotC has made in the 2024 rules? Is it not hyperbole and a straw man to cast all critics of diversity as racists? This is a fair and excellent point: to assume every critic of diversity and the game changes is a racist would be bad reasoning. But while some racists are openly racist, others use stealth. They advance arguments that seem reasonable and non-racist while occasionally letting a hint of racism show through, but not so much racism that it cannot be plausibly denied.

There is also another problem: the honest non-racist critic and a stealthy racist will often advance the same arguments. So, what is the difference, other than the racism? The answer is that the critic is arguing in good faith while the racist is arguing in bad faith.

As a philosopher, I will start with the philosophical definition of an argument. In philosophy, an argument is a set of claims, one of which is supposed to be supported by the others. There are two types of claims in an argument. The first type of claim is the conclusion.  This is the claim that is supposed to be supported by the premises. A single argument has one and only one conclusion, although the conclusion of one argument can be used as a premise in another argument.

The second type of claim is the premise. A premise is a claim given as evidence or a reason to logically accept the conclusion. Aside from practical concerns, there is no limit to the number of premises in an argument. When assessing any argument there are two factors to consider: the quality of the premises and the quality of the reasoning. The objective of philosophical argumentation is to make a good argument with true (or at least plausible) premises. Roughly put, the goal is to reach the truth.
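For readers who want the structure spelled out, here is a minimal sketch in Python, offered purely as an illustration of the definition just given: an argument as one or more premises plus exactly one conclusion, assessed along the two dimensions of premise quality and reasoning quality. The names are my own and nothing here is meant as a formal logic engine:

# Illustrative only: a bare-bones representation of an argument as
# defined above (premises offered in support of a single conclusion).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    premises: List[str] = field(default_factory=list)
    conclusion: str = ""

# The "argument against expertise" discussed earlier, in this representation:
expertise_argument = Argument(
    premises=["Authority/expert A makes claim C."],
    conclusion="Claim C is false.",
)

# Assessing any argument means asking two separate questions:
# 1. Are the premises true (or at least plausible)?
# 2. Does the reasoning actually support the conclusion?
# In the example above, the reasoning fails regardless of the premise.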

Philosophical argumentation is different from persuasion, as the goal of persuasion is to get the audience to believe a claim whether it is true or false. As Aristotle noted, philosophical argumentation is weak as a means of persuasion. Empty rhetoric and fallacies (errors in reasoning) have greater psychological force (though they lack all logical force). The stage is thus set to talk about bad faith.

The foundation of arguing in good faith is the acceptance of the philosophical definition of argument: the goal is to provide plausible premises and good reasoning to reach the truth. This entails that the person must avoid intentionally committing fallacies, knowingly making false claims, and misusing rhetoric. A person can, of course, still employ persuasive techniques. Good faith argumentation does not require debating like a stereotypical robot or being dull as dust. But good faith argumentation precludes knowingly substituting rhetoric for reasons. A person can, in good faith, argue badly and even unintentionally commit fallacies; making a bad argument is not the same as arguing in bad faith. A person can, obviously, also make untrue claims when arguing in good faith. But as long as these are errors rather than lies and the person made an effort to check their claims, they can still be arguing in good faith.

Arguing in good faith also requires that the person be honest about whether they believe their claims and whether they believe their reasoning is good. A person need not believe what they are arguing for, since a person can advance an argument they disagree with as part of a good faith discussion. For example, I routinely present arguments that oppose my own views when I am doing philosophy.

One must also be honest about one’s goals when arguing in good faith. To illustrate, a critic of changes to D&D who is open about their belief that the changes are detrimental to the game would be acting in good faith. A racist who argues against changes in D&D hoping to lure people into racism while concealing their motives would be arguing in bad faith. As one would suspect, a clever racist will conceal their true motives when trying to radicalize the normies. There is also the possibility that a person is trolling. But if someone is trolling with racism, it does not matter that they are a troll, for they are still doing the racists’ work for them.

While there are objective methods for sorting out the quality of arguments and the truth of claims, determining motives and thoughts can be hard. As such, while I can easily tell when someone is committing an ad hominem fallacy, I cannot always tell when someone is engaged in bad faith argumentation. This is more in the field of psychology than philosophy as it involves discerning motives and intentions. However, sorting out motives and intents is something we all do, and we can divine from a person’s actions and words what their motives and intents might be. But we should use caution before accusing someone of arguing in bad faith and this accusation certainly should not be used as a bad faith tactic. To use accusations of bad faith as a rhetorical device or an ad hominem would be bad faith argumentation and would, of course, prove nothing. But why should people argue in good faith?

There are two broad reasons why people should do so. The first is ethical: arguing in good faith is being honest and arguing in bad faith is deceitful.  Obviously, one could counter this by arguing against honesty and in favor of deceit. The second is grounded in critical thinking: bad faith argumentation generally involves bad logic, untruths, and a lack of clarity. As such, arguing in good faith is ethical and rational. Bad faith argumentation is the opposite. Why, then, do people argue in bad faith?

One reason is that bad faith reasoning can work well as persuasion. If one rejects truth as the goal and instead focuses on winning, then bad faith argumentation would be the “better” choice. 

A second reason is that a person might risk harm, such as social backlash, for arguing their views in good faith. In such cases, hiding their views would be prudent. As a good example, a person who wants to get people to accept human rights in a dictatorship might argue in bad faith, hoping to “trojan horse” people into accepting their views. If they openly argued for human rights, they would risk being imprisoned or killed. As an evil example, a racist might argue in bad faith, hoping to “trojan horse” people into accepting their views. If they were openly racist in a D&D Facebook group, they would face censure and might be kicked out of the group. So arguing in bad faith is the only way they will be able to poison the group from the inside. A third reason is that bad faith reasoning can lure people down a path they would not follow if it were honestly labeled. Such a use does raise moral questions; some might advance a utilitarian argument to defend its use for good ends, while others might condemn such deceit even if it is alleged to achieve a good end.

In the next essay I will look at some arguments against some of WotC’s policies that can be made in good or bad faith.