While American mythology lauds fair competition and self-made heroes, our current system of inheritance creates unfair competition and makes being a self-made hero all but impossible. One major part of the inheritance problem is the disparity it has created between white and black Americans. While most of those in a position to address this matter seem content with it, if you believe in fair competition and equality of opportunity, then consistency requires that you also believe this problem needs to be addressed.

Condensing history, white people have enjoyed numerous advantages gifted to them by the state. The Homestead Act of 1862 provided land that went mostly to white people, land acquired in large part through the 1830 Indian Removal Act. Compensation was also paid to white slave owners after the Civil War, but the infamous forty acres and a mule remains an empty promise. The 1935 Wagner Act gave unions the ability to engage in collective bargaining, and these unions were a great boon to white workers. But it permitted unions to exclude non-whites, which they usually did.

The Federal Housing Administration’s programs allowed millions of average white Americans to buy homes while excluding black Americans. The national neighborhood appraisal system tied mortgage eligibility to race: integrated communities were defined as financial risks and ineligible for home loans, a practice known as “redlining.” From 1934 to 1962, the government backed $120 billion in home loans, with 98% going to whites. Even now, black and Latino mortgage applicants are still 60% more likely than whites to be rejected, controlling for factors other than race. One common response to such assertions is that while past racism was bad, the past is past. While this has rhetorical appeal, it is fundamentally mistaken: the past influences and shapes the present. One obvious way this occurs is through inheritance: wealth accrued from slavery and from state handouts to white people has been passed down through the generations. This is not to deny obvious truths: some white people blow their inheritance, many white people are mired in poverty, and there are some rich black Americans. The problem is a general one that is not disproved by individual exceptions.

Because of the policies and prejudices of the past, the average white family today has about eight times the assets of the average African American family. Even if families with the same current incomes are compared, white families have over double the wealth of Black families. A primary reason for this is inheritance.

Inheritance, obviously enough, enables a family to pass down wealth that can be used to provide competitive advantages. These can include funding for education and starting money for businesses. It also helps people better endure difficult times, such as pandemics and recessions. As such, whites enjoy an unearned competitive advantage over blacks: they inherit an advantage. And that advantage is built on explicitly racist and discriminatory policies.

Some have called for reparations for past injustices and others vehemently oppose the idea. One stock objection is that reparations would take resources from living people to give them to other living people based on injustices committed by people who are long dead. While this objection can be countered, an easy way to get around it, and many others, is to adopt a plan focused on heavily taxing inheritance and using the revenue to directly counter past and present economic unfairness.

To win over consistent conservatives, the resources should be used to enhance the fair competition they claim to believe in. Examples include addressing funding inequities in education, infrastructure inequities, and disparities in mortgage lending. That is, providing people with a fair start so they can compete in the free market alleged to be so beloved by conservatives.

When marketing the idea to conservatives, the emphasis should be on how people are now benefiting from what conservatives profess to loathe: unearned handouts from the state and unfair advantages granted based on race. One can assume that people with such professed values will support this idea; otherwise, one would suspect they are lying about their principles. The proposed plan would help remove unfair and unearned handouts so the competition can be reset. To use the obvious analogy, this would be like a sports official noticing athletes cheating and responding by restoring fairness to the competition.

This proposal has many virtues including that it allows past economic injustices to be addressed in a painless manner: nothing will be taken away from any living person for what a dead person did. Rather, some people will receive less of an unearned gift. As such, they are not losing anything, they are simply getting less of something they do not deserve and did not earn. While some might claim this would hurt them, that would be an absurd response. It would be like getting a free cake and then being a little bitch because one did not get a thousand free cakes simply for being born.

As always, the devil is in the details. As noted in other essays, I am not proposing that inheritance be eliminated, nor am I arguing in favor of the state taking your grandma’s assault rifle collection and giving it to a poor family. The general idea is that inheritance should be taxed, and the tax rate should be the result of careful consideration of all the relevant factors, such as the average inheritance in the United States. The plan could also involve increasing the tax rate gradually over time, to reduce the “pain” and thus the fervor of the opposition. In any case, a rational and fair proposal would take considerable effort to design but would certainly be worth doing if we want to be serious when we speak of fairness and opportunity in the United States.

While my criticisms of inheritance might seem silly and, even worse, leftist, they are in perfect accord with professed American political philosophy and the foundation of capitalism. Our good dead friend Thomas Jefferson said, “A power to dispose of estates forever is manifestly absurd. The earth and the fulness of it belongs to every generation, and the preceding one can have no right to bind it up from posterity. Such extension of property is quite unnatural.” The Moses of capitalism, Adam Smith, said that “There is no point more difficult to account for than the right we conceive men to have to dispose of their goods after death.” As such, opposition to inheritance is American, conservative, and capitalistic. But this alone provides no reason to accept my view. What I will advance in this essay is an argument by intuition against inheritance, using a fictional town called “Inheritance.”

Imagine you have been hired as an IT person for Heritage, a company in the town of Inheritance. You pack up your belongings and drive to the town. You spend the first week getting up to speed with the company and are not at all surprised to find that the top officers of the company are all family members and that the current owner is the son of the previous owner. You are, however, surprised to find out that almost everyone else who works for the company inherited their job. You are one of the few exceptions because the previous IT person quit, and their daughter did not want to inherit the job. This strikes you as rather odd. But a job is a job, and you are happy to be employed.

You learn that the town has an upcoming founders’ day celebration, and you sign up for the 5K. You find it a bit odd that the race entry form asks you for your best inherited 5K time, but chalk that up to some fun with the town’s name. You run an 18:36 5K and the next closest person crosses the line at 26:22. Since an iPad Pro is the first-place prize, you eagerly head towards the race director when she starts announcing the overall winner. You are shocked when the winner is the mayor of the town, Lisa Heritage. She ran a 28:36, so you beat her by 10 minutes. Her time is announced as 17:34, which you know is not true. Thinking this is all a joke, you ask what is going on. Realizing you are the new guy in town, they explain that place in the race is determined by either the runner’s actual time or their best inherited time. The mayor’s mother, who was also mayor, was a good runner, and her college PR for the 5K was 17:34. Her daughter inherited the time and can use it for the race. A bit upset by all this, you make a bad joke about her running for office. The locals laugh and say that all political offices are inherited. There is only an election if no one who can inherit the office wants it. You hope this is all some sort of elaborate prank.

During the winter you get a sinus infection and go to the town doctor. Looking at the medical degree on the wall, you see it was from Ohio State and the graduation date is 1971. Expecting to see an older doctor, you are surprised to see someone in their early twenties. You ask them about medical school, and they say they inherited the degree and the office. Incredulous, you ask if they have any medical training at all. They indignantly inform you that they have been practicing medicine since they were 17, when they inherited the degree. Looking up your symptoms on Google, they prescribe some antibiotics and send you on your way. What a strange town, you think…but a job is a job and it is not like you inherited a fortune.

In making this appeal to intuition, I am assuming that while you probably accept the inheritance of wealth and property (including businesses), you probably do not believe that all jobs should be inherited. You also probably reject the idea that political offices, race times, and degrees should be inheritable. But allowing inheritance of property and wealth while rejecting the other sorts of inheritance seems inconsistent. This is clear when it comes to inheriting jobs: while people generally accept it when family members inherit positions in a company (such as being the owner), it would be odd for the other jobs to be inheritable. Having a hereditary IT department would, for example, be strange. The challenge is defending one sort of inheritance while rejecting the others, at least if one wants to hold onto inheritance.

While families do pass on political influence (the Clintons and Bushes were good examples of this), most Americans reject the idea of inheriting political offices. We did, after all, have a revolution to be done with kings and queens. There are reasons for taking this view grounded in democratic values, but there is also the idea that inherited offices would lead to harm arising from the concentration of power and a lack of accountability. There is also the fact that such positions would be unearned. While democratic values would not apply directly to inherited wealth and property, concerns about concentrated power and unfair advantages still would.

In the case of the 5K, the fictional mayor did not earn her victory. But the same applies to inheriting ownership of the company: the heir did not earn the position; it was a matter of the chance of birth. The matter of inheriting degrees is also clear: a degree is supposed to reflect that a person has learned things and thus has knowledge and skills. While this is not always true, an inherited degree is utterly unearned and provides no skills or knowledge. Allowing people to inherit degrees, especially in fields like medicine and engineering, would be disastrous.

One way to reply would be to bite the bullet and accept inheritance broadly. After all, human history has centuries of examples of cultures in which wealth, property, offices, and other things were inherited. This would be a consistent position.

A better approach would be to try to break the analogies—to argue that inheriting wealth and property differs in relevant ways that would defend it from the absurdity of other inheritances. I will, obviously, leave that task to the defenders of inheritance and will certainly accept good arguments with plausible premises that support this view.

Back in the 1980s I played Advanced Dungeons & Dragons. When you started a new character, you rolled to see how much gold you got, and you used that gold to buy your equipment, such as a sword and chain mail or a mace and holy symbol. While the starting gold varied by character class, there were no differences in the economic classes of the characters. For example, all starting fighters rolled 5d4 and multiplied the result by 10 to determine their gold. For role-playing purposes, a player could make up their character’s background, including their social and economic class, but it had no impact on their starting gold. D&D has largely stuck with this system, and the Player’s Handbook does not have an economic class-based system of starting gold.
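
To make the mechanic concrete, here is a minimal sketch in Python (my illustration; nothing from the rulebooks beyond the 5d4 × 10 formula): every new fighter draws from the same 50 to 200 gp distribution, whatever their invented backstory.

```python
import random

def fighter_starting_gold() -> int:
    """AD&D-style starting gold for a fighter: roll 5d4 and multiply by 10."""
    return sum(random.randint(1, 4) for _ in range(5)) * 10

# Peasant backstory or noble backstory, every fighter rolls on the same
# distribution: a minimum of 50 gp, a maximum of 200 gp.
print([fighter_starting_gold() for _ in range(5)])
```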

Like many other players, my AD&D group experimented with adding economic classes: a player rolled on a table to determine their character’s social and economic class. For example, a player character might be a peasant. The upper classes started with more gold and even some valuable items, and some players proposed allowing new characters to start out with magic items.

While initially popular, rolling for economic class did not last long. A player who rolled badly on the table could not afford even their basic gear while a player who rolled exceptionally well started off with an abundance of gold and gear. We did tinker a bit with the tables, but we quickly returned to the standard rules. People could still roll badly or well, but it lacked the extremes of an economic class table.

The main reason, obviously enough, for abandoning the economic class table was that it was unfair: while everyone got to roll, the chart could put some players at a disadvantage or grant others an unfair advantage at the start of the campaign. And this was with a relatively moderate chart. Another DM I knew ran a campaign using a more extreme table: a player who rolled well could start out with a keep and henchmen, while a player who rolled badly could start out as a peasant with but a few coppers, unable to buy the gear they needed. The player who started with the keep was quite happy, the peasant player far less so. But let us think a bit about using such tables.

Imagine you are playing in a D&D campaign that uses an economic class table based on the United States, with fantasy names in place of the modern economic class names. The DM is an economics nerd, so they work out the value of a gold piece relative to a dollar, the average wealth for each age group in each economic class, the percentage of the population in each group, and so on. From all this they create a chart that determines your character’s starting wealth based on what you inherit. As in the real world, what economic class you are born into is random. As such, you might roll well and start off with thousands of gold pieces. Or you might roll badly and start with a few copper pieces.
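
Below is a sketch of how such a chart might be implemented. To be clear, the class names, population shares, and gold amounts here are invented for illustration; the economics-nerd DM of the example would derive them from real wealth data.

```python
import random

# Hypothetical table: (class name, share of population, starting gold in gp).
ECONOMIC_CLASSES = [
    ("peasant",         0.40,      1),   # a few coppers' worth
    ("commoner",        0.40,    150),   # enough for basic gear
    ("merchant",        0.15,  2_000),
    ("noble",           0.04, 20_000),
    ("merchant prince", 0.01, 70_000),
]

def roll_birth_class() -> tuple[str, int]:
    """Birth class is random, weighted by each class's share of the population."""
    weights = [share for _, share, _ in ECONOMIC_CLASSES]
    name, _, gold = random.choices(ECONOMIC_CLASSES, weights=weights)[0]
    return name, gold

print(roll_birth_class())  # e.g. ('peasant', 1) or, rarely, ('merchant prince', 70000)
```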

Imagine that you are playing a fighter and roll badly, starting with a few coins. You can buy a dagger, a shield, and a sack, and that is it. Another player, a magic user, rolls exceptionally well and starts with 70,000 gp. She can buy all the gear she needs and, if the DM allows, much more. Suppose that the other players start off with enough to buy basic gear. If the magic user is good aligned or understands the importance of a properly equipped party, she will share her wealth. She might buy you a longsword, plate armor, and other gear so you can keep the monsters from killing her. If you are all working together and share your starting gold, then this is fine: an advantage for one party member is an advantage for all. But suppose your magic user is selfish and wants to keep all her starting wealth for herself. While that could be a problem, if the party is otherwise working together it will not be too bad: if you get your share of the XP and loot, you will eventually even things out.

But suppose the DM is running a campaign in which the players are competing. Rather than working together as a party, you are competing to kill monsters and loot dungeons. You set out with your dagger and shield and stab a few goblins. The magic user hires several NPCs to assist her, and they enable her to slaughter her way through a small dungeon, adding the treasure to her already considerable wealth. She hires more NPCs, gets better gear, and continues to tear her way through dungeon after dungeon. Meanwhile, you are still struggling. You eventually kill and loot enough goblins to buy a long sword and leather armor and move on to ambushing lone orcs. The magic user has moved on to slaughtering giants. Finally, you can take on a bugbear, as the magic user and her hired NPCs take down a blue dragon. The dragon hoard gives her even more wealth and power; she buys more magic items and hires more NPCs. When you are finally ready to face an ogre, she is smashing ancient red dragons and liches. She soon has a kingdom of her own and is about to start a war with another nation to gain even more wealth. The campaign comes to an end; she has won easily. The DM then announces that a new campaign will start: your new character will be the child of your previous character, inheriting their meager wealth and starting the new campaign with that. The person who played the magic user starts out with her new character being a prince or princess, inheriting the wealth of the kingdom. Just imagine how that will go. The first campaign was extremely unfair; the second will take the unfairness to a new level.

As noted, in a cooperative D&D game, where everyone is working together and sharing resources for the good of the party, the starting advantage of one is an advantage for all. In a competitive game, a significant disparity in starting wealth provides an unearned and unfair advantage. Players in such a campaign would rightfully complain and insist on having a fair start. Otherwise, the game would be rigged in favor of the player who rolls the best at the start. The same is true of significant inheritance in the real world. A large inheritance provides a considerable unearned advantage.

It could be argued that a skilled player could overcome a bad starting roll, and an incompetent player could stupidly throw away their advantage. While this is true, the odds would be heavily against the bad roller and heavily in favor of the good roller. That a truly exceptional person with a bad start could beat a truly incompetent person with a vast lead hardly shows the situation is fair. Now, someone might want to play the unfair game and might even get all the other players to accept it. But it would be absurd to say it is fair or that it allows for true competition. The same applies to the United States: we can accept an unfair system of inherited wealth, but we cannot claim that the game is fair or allows for true competition. Sure, some rise from humble origins and achieve fairy tale levels of success, going from peasant to merchant prince. Some start with all the advantages and waste them, starting with silver spoons and ending up with plastic sporks. But most finish in accord with their start, the game playing out as one would expect.

While Republicans defend inherited wealth, a principled conservative should want to reform inheritance, perhaps even radically. I will base my case on professed conservative principles about welfare. My use of the term “welfare” will be a sloppy but necessary shorthand. After all, there is no official government program called “welfare”; rather, it is a vague term covering a range of programs and policies in which public resources are provided to people. Now on to the conservative arguments against inheritance.

Way back in April 2020, Senator Lindsey Graham argued that public financial relief for the coronavirus would incentivize workers to leave their jobs. While making this argument during a pandemic was new, it is a stock argument against welfare. Rod Blum, a Republican representative from Iowa, said, “Sometimes we need to force people to go to work. There will be no excuses for anyone who can work to sit at home and not work.” Donald Trump, whose fortune was built on inheritance, has said that “The person who is not working at all and has no intention of working at all is making more money and doing better than the person that’s working his and her ass off.” While this might sound like a description of Trump, it was his criticism of welfare. In general terms, the conservative argument is that if a person receives welfare, then they have no incentive to work. Since this is bad, welfare should be restricted or perhaps even eliminated.

Conservatives also advance utilitarian arguments against welfare, arguing it is bad because of the harm it causes. In addition to allegedly destroying the incentive to work, it is also supposed to harm the moral character of the recipient and, on a larger scale, create a culture of dependency and a culture of entitlement. If we take these arguments seriously, then they would also tell against inheritance. In fact, this is an old argument in philosophy.

Mary Wollstonecraft contends that hereditary wealth is morally wrong because it produces idleness and impedes people from developing their virtues. This mirrors the conservative arguments against welfare, and conservatives should, if they are consistent, agree with Wollstonecraft.

Conservatives also profess to favor the free market, meritocracy, and earning one’s way. They often speak of how people should pull themselves up by their bootstraps. In accord with these professed values, they oppose programs like affirmative action, free school lunch, and food stamps. The usual arguments focus on two factors. The first is that such programs provide people with unearned resources, which is wrong. The second is that such programs provide people with an unfair advantage over others, which is also wrong. It is obvious that the same reasoning would apply to inheritance.

Inheritance is unearned. So, if receiving unearned resources is wrong, then inheritance would also be wrong. It could be countered that people can earn an inheritance, that it might be granted because of their hard work or some other relevant factor. While such cases would be worth considering, hard work is not how one typically qualifies for an inheritance. However, a genuinely earned inheritance would not be subject to this argument.

Disparities in inheritance also confer unearned advantages. For example, suppose that both of us want to open a business in our small town, which can only support one business of that kind. For the sake of the argument, let us assume we are roughly equal in ability, so with a fair start there is about a 50% chance that either of us would win the competition. But suppose that I inherit $1,000,000 and you start out with a $1,000 loan from your parents. This provides me with a huge advantage. I can purchase more and better equipment. I can get a better location for my business. I can out-advertise you. I can bleed your business to death by taking a loss you cannot sustain. I will not say it is impossible for you to beat me, and I can imagine scenarios in which I fail. For example, the townsfolk might rally to support you and boycott me because of my unfair advantage. But barring such made-for-TV miracles, I will almost certainly win.
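
A toy model, my own addition rather than part of the example, shows just how lopsided this is. Treat the rivalry as a classic gambler’s ruin: each round of competition shifts $1,000 between us, a fair coin decides each round (since we are equally able), and whoever goes broke loses. The standard result is that each side’s chance of winning equals its share of the combined capital:

```python
# Gambler's ruin with a fair coin: the probability of bankrupting your rival
# equals your share of the combined starting capital.
def win_probability(my_capital: float, rival_capital: float) -> float:
    return my_capital / (my_capital + rival_capital)

print(win_probability(1_000_000, 1_000))  # ~0.999: the heir almost always wins
print(win_probability(1_000, 1_000_000))  # ~0.001: the borrower's chances
```

Equal skill, yet the starting wealth alone gives me roughly thousand-to-one odds.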

Even if we were not in direct competition, I would still have a huge unearned advantage over you. If you decided to go to the next town over and I wished you well, I would still be more likely to succeed than you because my inheritance advantage would be considerable.  If receiving unmerited advantages is wrong, then significant inheritance would be wrong as well.

Since conservatives generally profess to loathe welfare and love inheritance, they need a principled way to break the analogy between the two. There are ways one might try to do this.

An argument can be built on claiming that inheritance is a voluntary gift of resources, while welfare involves taking tax money from people who do not want it to be used that way. The obvious reply is that if we vote for welfare (either directly or through representatives), then it is voluntary. This, of course, leads into the broader area of democratic decision making. But if we accept democracy and our democracy accepts welfare, then we agree to it in the same way we agree to any law or policy we might not like.

Another argument can be made by pointing out that inheritance often goes to relatives while welfare does not. But this is not relevant to the argument that welfare is bad because of its harm. After all, it is getting money that one has not earned that is supposed to be the problem, not whether it was given willingly or by a relative. One could try to argue that resources given by relatives are special and will not make people lazy while state resources will, but that seems absurd. Some will be tempted to argue that those who inherit wealth tend to be a better sort of people, but this seems an unreasonable path to follow.

Another argument can be made asserting that inherited wealth is earned in some manner while welfare is not. While this has some appeal, it falls apart quickly. First, some people do earn some of their welfare (broadly construed) by paying for it when they are working. For example, if Sally works for ten years paying taxes and gets fired when her company moves overseas, then she is getting back money from a system she contributed to. So, Sally earned that welfare.  Second, if a person did work for their inheritance, it is not actually an inheritance, but something earned. If, for example, someone worked in the family business for pay or shares in the company, then they have earned their pay or shares. But merely working there does not, obviously, entitle a person to own the business after the death of the current owner. Otherwise, the workers should all share in the inheritance. So, this sort of argument fails.

It might be pointed out that if someone opposes inheritance, then they must oppose welfare. One reply is to accept this. If welfare makes people idle and inflicts moral harm, then it would be as bad as inheritance and should be limited or eliminated. A second reply is to argue that welfare helps people in need and is analogous to family helping family in times of trouble rather than being analogous to inheritance, in which one simply receives regardless of need or merit.

To pre-empt some straw person attacks on my arguments, my view is not that inheritance should be eliminated. It would be absurd to argue that Sally cannot inherit her grandmother’s assault rifle collection. It would be foolish to argue that Sam cannot inherit his mom’s cabin where he learned to hunt deer. Rather, my view is consistent with the conservative arguments: inheritance should be reduced to a level that does not cause harm to those inheriting it and does not confer an unfair advantage. While a full theory of inheritance would require a book to develop, the core of my view is that inheritance should be taxed in a progressive manner and the tax revenue should be used to increase fair competition. A good place to start would be funding public schools. Funding low-interest loans for poor people starting businesses would also be a good option. This has the appeal that it takes nothing away from anyone who is still alive; people would merely get less unearned wealth. To use an analogy, it is like a tax on lottery winnings: people would just get less for winning and would not be losing anything they earned. In this way, a tax on inheritance is morally better than income tax on wages, which takes away what someone has earned.
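
For concreteness, here is a minimal sketch of how a progressive inheritance tax computes. The brackets and rates are invented for illustration only, since the actual numbers are left to the careful consideration described above.

```python
# Illustrative brackets: (upper bound of bracket, marginal rate).
BRACKETS = [
    (100_000, 0.00),        # a modest inheritance passes untaxed
    (1_000_000, 0.30),
    (float("inf"), 0.60),
]

def inheritance_tax(amount: float) -> float:
    """Progressive tax: each rate applies only to the slice within its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if amount > lower:
            tax += (min(amount, upper) - lower) * rate
        lower = upper
    return tax

print(inheritance_tax(50_000))     # 0.0: nothing taken from small estates
print(inheritance_tax(5_000_000))  # 2,670,000.0: 0 + 270,000 + 2,400,000
```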

It could be argued, as Republicans often do, that taxing inheritance would be a double tax: the money or property is taxed once when earned, then taxed again when it is inherited. While this seems a clever argument, it has two obvious problems. The first is that the person inheriting the money or property is taxed only once: they do not pay a tax when the person dies and then pay another tax when they inherit. Second, this alleged “double tax” is not unique to inheritance. When I make money, tax is taken out. When I buy a book on Amazon sold by a friend, I pay sales tax. When my friend gets their royalty, they pay tax on that. If they buy a book of mine with that money, it is taxed again through sales tax, and I am taxed once more when I pay income tax on my royalty. This is not to say that all this taxing is good, just that the notion that inheritances are subject to a unique and crazy double tax is absurd.

In response to a video I did on D&D and racism, a viewer posted “yet another racist feeling guilt trying to project their racism onto others, but this one attempting to use logic and his “appeal to superiority” with his college knowledge…” I do not know whether this was sincere criticism or trolling, but the tactics are common enough to be worth addressing.

There is a lot going on in that single sentence, which is itself a rhetorical tactic analogous to throwing matches in a dry forest. Throwing matches is quick and easy; putting out the fires takes time and effort. But if they are not addressed, the “match thrower” can claim they have scored points. This creates a nasty dilemma: if you take time to respond to these matches, you are using way more time than the attacker, so even if you “win” you “win” little because they have invested so little in the attacks. If you do not respond, then they can claim victory. While this would also be an error on their part since a lack of response does not prove that a claim is correct, it could give them a rhetorical “victory.”

The references to using logic and “college knowledge” seem to be a tactic I have addressed before: the “argument against expertise.” It occurs when a person rejects a claim because it is made by an authority/expert, and it has the following form:

Premise 1: Authority/expert A makes claim C.

Conclusion: Claim C is false.

While experts can be wrong, to infer that an expert is wrong because they are an expert is absurd and an error in reasoning. This can be illustrated by a person concluding that there must be nothing wrong with their car solely because an expert mechanic said it had an engine issue. That would be bad reasoning.

The person is also using an ad hominem and a straw man attack. In the video I explicitly note that I am giving my credentials to establish credibility, and that I should not be believed simply because I am an expert in philosophy and gaming: my arguments stand or fall on their own merits. As such, the “appeal to superiority” is unfounded, but it provides an excellent example of combining a straw man with an ad hominem. These are common bad faith tactics, and it is wise to know them for what they are. I now turn to the focus of this essay, which is the tactic of accusing critics of racism of being the real racists.

The easy part to address is the reference to guilt arising from being racist. Even if someone is motivated by guilt, this is irrelevant to the truth of their claims; it is just another ad hominem attack. As for projecting racism, this is just part of the claim that the critic of racism must be racist. While the accusation of racism can be seen as a rhetorical device, there does seem to be an implied argument behind it, and some take the time to develop an argument for their accusation. Let us look at some versions of this argument:

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Conclusion: Person A is a racist because of C.

While not a specific named fallacy, the conclusion does not follow from the premise. Consider the same sort of logic, which is obviously flawed:

Premise 1: Person A makes criticism C about an aspect of corruption or a corrupt person.

Conclusion: Person A is a corrupt person because of C.

Being critical of corruption or a corrupt person does not make you corrupt. While a corrupt person could be critical of corruption or another corrupt person, their criticism is not evidence of corruption. Two other bad arguments are as follows:

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Premise 2: Person A is a racist because of C.

Conclusion: Criticism C is false.

This is obviously just an ad hominem attack: even if A was a racist, this has no bearing on the truth of C. Consider an argument with the same sort of reasoning:

Premise 1: Person A makes criticism C about an aspect of corruption or corrupt person R.

Premise 2: Person A is a corrupt person because of C.

Conclusion: Criticism C is false.

This is quite evidently bad logic; otherwise, anyone who criticized corruption would always be wrong.

A variant, equally bad, is this:

Premise 1: Person A makes criticism C about an aspect of racism or racist R.

Premise 2: Person A is a racist because of C.

Conclusion: R is not racist.

While not a named fallacy, it is still bad logic: even if person A were a racist, it would not follow that R is not. Once again, consider the analogy with corruption:

Premise 1: Person A makes criticism C about an aspect of corruption or corrupt person R.

Premise 2: Person A is a corrupt person because of C.

Conclusion: R is not corrupt.

Again, the badness of this reasoning is evident: if it were good logic, any accusation of corruption would be automatically false. At this point it could be said that while these bad arguments really are used, perhaps there are good arguments that prove that being critical of racism or racists makes a person a racist, or that prove their criticism false.

I do agree that there are cases in which critics of certain types of racism are racists. An obvious example would be the Nation of Islam: they assert, on theological grounds, that blacks are innately superior to whites. Someone who believes this could be critical of racism against themselves and they would be a racist criticizing racism (of a specific type). But it is not their criticism of racism that makes them racist; it is their racism that makes them racist.

What is needed is an argument showing that being critical of racism makes someone a racist. That is, if the only information you had about any person was the full text of their criticism you would be able to reliably infer from the criticism that they are racist. Obviously enough, if the criticism contained racism (like a Nation of Islam member criticizing white racism because of their view that blacks are inherently superior to whites) one could do this easily. But to assume that every criticism of racism must contain racism because it is a criticism of racism would beg the question. Also, pointing to racists who make a criticism of racism and inferring that all critics who make that same criticism are thus racists would be to fall into the guilt by association fallacy. And, of course, even if a critic were racist, it would be an ad hominem to infer their criticism is thus false. A racist can rightfully accuse another racist of racism.

While the “ideal” argument would show that all criticisms of racism make one racist (and, even “better,” disprove the criticism), such an argument would be suspiciously powerful: it would show that every critic of racism is a racist and perhaps automatically disprove any criticism of racism. Probably the best way to make such an argument is to focus on showing that being critical of racism requires criticizing people based on their race, and then making a case for why this is racist. The idea seems to be that being critical of racism requires accepting race and using it against other races (or one’s own), thus being racist. But this seems absurd if one considers the following analogy.

Imagine, if you will, a world even more absurd than our own. In this world, no one developed the idea of race. Instead, people were divided up by their earlobes. Broadly speaking, humans have two types of earlobes. One is the free earlobe: the lobe hangs beyond the attachment point of the ear to the head. The other is the attached earlobe: it attaches directly to the head. In this absurd world, the free lobed were lauded as better than the attached lobed. Free lobed scientists and writers asserted that the free lobed are smarter, more civilized, less prone to crime, and so on for all the virtues. In contrast, the attached lobed were presented as bestial, savage, criminal, stupid, and immoral. And thus, lobism was born. The attached lobed were enslaved for a long period of time, then freed. After that, there were systematic efforts to oppress the attached lobed, though progress could not be denied. For example, a person with partially attached lobes was elected President. But there are still many problems attributed to lobism.

In this weird world some people are critical of lobism and argue that aside from the appearance of ear lobes, there is no biological difference between the groups. Would it make sense to infer that their criticism of lobism entails that they are lobists? That they have prejudice against the free lobed, discriminate against them and so on? Does it mean that they believe lobist claims are real: that the lobes determine all these other factors such as morality, intelligence and so on? Well, if critics of racism must be racists, then critics of lobism must be lobist. If one of us went into that world and were critical of lobism, then we would be lobists. This seems absurd: one can obviously be critical of lobism or racism without being a lobist or racist.

As noted in previous essays, Wizards of the Coast (WotC) created a stir when they posted an article on diversity and D&D. The company made some minor changes to the 2024 version of the game which generated some manufactured controversy.  The company took the approach of “portraying all the peoples of D&D in relatable ways and making it clear that they are as free as humans to decide who they are and what they do.” They also decided to make a change that “offers a way for a player to customize their character’s origin, including the option to change the ability score increases that come from being an elf, a dwarf, or one of D&D’s many other playable folk. This option emphasizes that each person in the game is an individual with capabilities all their own.”

While the AD&D Monster Manual allowed individual monsters to vary in alignment and Dungeon Masters have always broken racial stereotypes in their campaigns, there has also been a common practice to portray races and species in accord with established in-game stereotypes. Drow and orcs are traditionally monstrous and evil while elves and dwarves are usually friendly and good.

AD&D also established the idea that fantasy races have specific physical and mental traits, and it set minimum and maximum scores for the game stats. For example, half-orcs have a maximum Intelligence score of 17, a Wisdom limit of 14, and a highest possible Charisma of 12. The game also divided characters by sex; females of all the races could not be as strong as the males. A PC’s race also limited what class they could take and how far they could progress. Going back to half-orcs, they could not be druids, paladins, rangers, magic users, illusionists, or monks. They could be clerics, fighters, or thieves, albeit with limits on their maximum level. They were, however, able to level without racial limits as assassins. This is why AD&D players are suspicious of half-orc PCs: they are probably evil assassins. As a side note, the only PCs I have killed as a player were half-orc assassins who tried to assassinate me. Given that race has been such an important part of D&D, it is no wonder the changes upset some players.

While some assume all critics of the changes are racist, I will not make that mistake. There are good, non-racist arguments for not changing the game. The problem is that racists (or trolls using racism) also use the same arguments. A difference between the two, aside from the racism, is that honest critics are arguing in good faith while racists (and trolls using racism) are arguing in bad faith. The main distinction is in their goals: a good faith critic opposes the changes for reasons they give in public. Those arguing in bad faith conceal their true motives and goals.

Some claim the people making the bad faith arguments are probably just trolls and not racists. But this distinction does not matter. Consider the following analogy. Imagine Sally takes communion at church. The wine tastes odd and later someone Tweets at her “did u like the atheist piss in ur blood of Christ? Lol!” Consider these three options. First, the person does not have a real commitment to atheism and is just trolling Sally to get a reaction. Second, the person hates Sally personally and was out to get her. Third, the person is an atheist who hates religious people and went after Sally because she is religious.

On the one hand, the person’s motives do not really matter: Sally still drank their urine. That is, the harm done does not depend on why it is done.  On the other hand, one can debate the relative badness of the motivations—but this does not seem to change the harm. Going back to racism, the person’s motivation does not matter in terms of the harm they cause by defending and advancing racism. Now, to the argument.

A good-faith argument can be made by claiming there is in-game value in having distinct character races, such as allowing players different experiences. Just as having only one character class would be dull, having only one basic race to play would also be dull. So, just as the classes should be meaningfully different, so too should the fictional races. While there are legitimate concerns about how racists can exploit the idea that races differ in abilities, it can also be argued that people understand the distinction between the mechanics of the fantasy world and reality. It can also be argued that we can accept fantasy races as different without sliding down a slippery slope into embracing real-world racism. One could even make a positive argument: people playing the game get accustomed to fictional diversity and recognize that PCs of different types bring different strengths to the party, something that extends analogically to the real world.

Unfortunately, this same sort of argument can be used in bad faith. One tactic is to use this argument, then slide into alleged differences between real people, and then slide into actual racism. As a concrete example, I have seen people begin with what seems to be a reasonable discussion of D&D races that soon becomes corrupted. One common racist (or troll) tactic is to start by bringing up how D&D has subraces for many PC races. There are subraces of elves, dwarves, halflings, and others that have different abilities. The clever racist (or troll) will suggest there should be human subraces in the game. On the face of it, this seems fine: they are following what is already established in the game. At this point, the person could still be a non-racist who likes the idea of fantasy subraces and thinks it would be cool to have different options when playing a human. But the racist will move on to make references to real-world ethnic groups, asking how one would stat whites, Asians, African-Americans, and so on. The person can insist that they are just following the logic of the game, and they seem to be right: if the game has many subraces with meaningful differences, then the same could apply to humans. And this is exactly how a racist can exploit this aspect of the game. A persuasive racist can convince people that they never moved from discussing D&D into racism, and they can use the honest critics as cover. This shows why the change has merit: it could deny racists a tool.

Being an old school gamer, I do like the idea of distinct races in games. This is because of the variety they offer for making characters. While I do not want to yield this to the racists, I can see the need for a change to counter the racists. This would be yet another thing made worse by racists.

A second argument is a reductio ad absurdum. The idea is to assume that something is true and then derive an absurdity or contradiction from that assumption, showing the assumption to be false. In the case of races in D&D, some people have claimed that the proposed approach would logically lead to all creatures in the game being the same. One person, I recall, asserted that the proposed changes entail that tigers and beholders would have the same stats. Another person joked (?) that this would also mean that gnolls would be “friendly puppers.” The idea was, of course, to show that accepting the changes would lead to absurd results: no one wants monsters to all have the same stats, and no one wants all the game creatures to be good.

While this could be a good faith argument, there are some concerns. One is that reducing the changes to absurdity in this manner seems to require using the slippery slope fallacy or at least hyperbole and the straw man fallacy. No one is seriously proposing to give all monsters the same statistics or that they will all be morally good. In terms of the slippery slope, no reason has been given that WotC would take the changes to these absurd extremes. At best these would be poor good faith arguments. Depending on where a person goes with them, they could also be bad faith arguments; after all, they do mirror the real-world racist arguments that claim it is absurd to think everyone is perfectly equal and then argue for racism.

I obviously do not think that all monsters should have identical stats nor that all monsters should be good. But this is consistent with the changes and one can easily adopt them and avoid the slippery slope slide into absurdity. In closing, whatever changes WotC makes to D&D, they have no control over what people can do in their own campaigns.

A few years ago, Wizards of the Coast (WotC), the company that owns Dungeons & Dragons, issued a statement on diversity. As would be expected, the responses split along ideological lines, and the culture war continues to this day. The D&D front of the culture war is personal for me: I started playing D&D in 1979 and have been a professional gaming writer since 1989, which ties me into the gaming aspect of the war. I am also a philosophy professor, which ties me into the moral and aesthetic aspects of this fight.

The statement made by WotC has three main points. The first addresses race in the real world. The second addresses the portrayal of fictional races, such as orcs and drow, within the game. The third addresses racism from the real world within the game, with the example of how a Romani-like people were portrayed in the Curse of Strahd. In this essay I will focus on the in-game issues.

Before getting to the in-game issues, I will pre-empt some of the fallacious arguments. While it is tempting to use straw man attacks and hyperbole in this war, WotC cannot prevent gamers from doing as they wish in their own games. If you want your orcs to be evil, vegans, mathematicians or purple, you can and there is nothing WotC or Hasbro can do. Any change of WotC policy towards D&D races (or species) only applies to WotC. As such, the only censorship issue applicable here is self-censorship.

As always in the culture war, there were (and are) ad hominem attacks on folks at WotC. Most of these attribute “wicked” motives to them and take these alleged motivations as relevant to the correctness of their claims. In some cases, the criticism is that WotC is engaged in “woke marketing” to sell more products. While this can be evaluated as a business strategy, it proves nothing about the correctness of their position. In other cases, those at WotC have been accused of being liberals who are making things soft and safe for the dainty liberal snowflakes. This is also just an ad hominem and proves nothing. One must engage with the actual claims rather than flail away with insults.

To be fair, one can raise legitimate questions about the ethics of the folks at WotC: their motives do matter when assessing them as people. If this is merely cynical snowflake marketing, then they could be criticized as hypocrites. But their motives are still irrelevant to the assessment of their position and plans. It is to this that I now turn.

While the Monster Manual from AD&D does allow for monsters to differ in alignment from their standard entries in the book, many fictional races in the game have long been presented as “monstrous and evil.” These famously include orcs and the drow (a type of elf). The concern expressed by WotC is that the descriptions of these fictional races mirror the way racism manifests in the real world. Their proposed fix was to portray “all the peoples of D&D in relatable ways and making it clear that they are as free as humans to decide who they are and what they do.” In the case of real-world racism manifesting in their products, such as the depiction of a fictional version of the Romani, they plan to rewrite some older content and ensure that future products are free of this sort of thing. These changes raise both moral and aesthetic concerns.

One way to defend the traditional portrayal of fictional races in D&D is, obviously enough, to appeal to tradition. Since Tolkien, orcs have been portrayed as evil. Since the G and D series of modules, D&D drow have been evil. The obvious problem with this defense is that the appeal to tradition is a fallacy, one I have addressed at length in other essays.

Another way to defend the idea that some fictional races are inherently evil (or at least almost always evil) is to use in-game metaphysics. Until recently, good and evil were objective aspects of the standard D&D world. Spells could detect good and evil, holy and unholy weapons inflicted damage upon creatures of opposing alignments, and certain magic impacted creatures based on their alignment. Demons and devils are, by their nature, evil in classic D&D. Angels and other celestials are, by nature, good in classic D&D. While alignment does have some role in D&D 5E, this role is miniscule by way of comparison.

In most D&D worlds, gods of good and evil exist, and certain races were created by such gods. For example, the elves have mostly good deities, with the most obvious exception being the goddess Lolth, Queen of the Demonweb Pits. As such, the notion of races that are predominantly evil or good makes sense in such game worlds. As good and evil are metaphysically real, creatures could be imbued by divine and infernal powers with alignments.

While this defense does have its appeal, it raises an obvious concern: in the real-world people defend real racism with appeals to good and evil. They invoke creation stories to “prove” that certain people are better and others inferior. As the folks at WotC note, fantasy worlds often mirror the racism of the real world.

One reply to such concerns is to point out that most people can distinguish between the fictional world of D&D and the real world. Casting orcs and drow as evil and monstrous, even using language analogous to that used by racists in the real world, is nothing to be concerned about because people know the difference. The player who curses the “foul green skins” in game will not thus become a racist in the real world and curse the “wicked whites.” Thus, one might conclude, WotC stands refuted. There is, however, an ancient philosophical counter to this reply.

In the Republic, Plato presents an argument for censorship based on the claim that art appeals to the emotions and encourages people to give in to them. Giving way to these emotions is undesirable because it can lead to shameful or even dangerous behavior. On his view, viewing tragic plays might lead a person to give in to self-pity and behave poorly, and exposure to violent art might cause a person to yield more readily to the desire to commit violence. While Plato does not talk about racism (the ancients had no such concept), his argument would apply here as well: engaging in fictional racism can lead people to racism in the real world. As such, Plato would presumably praise WotC for this action.

At this point it is reasonable to bring up the obvious analogy to video games. While the power of video games to influence ethics would seem to be an empirical matter, the current research is inconclusive because the “…evidence is all over the place” —so it currently comes down to a matter of battling intuitions regarding their power to influence. So, I will turn to Plato’s most famous student.

As Aristotle might say, players become habituated by their play.  This includes not just the skills of play but also the moral aspects of what is experienced in play. This, no doubt, is weaker than the influence of the habituation afforded by the real world. But to say that D&D games with moral components have no habituating influence is analogous to saying that video games with hand-eye coordination components have no habituating impact on hand-eye coordination beyond video games. One would have to assert players learn nothing from their hours of play, which seems unlikely.

I am not claiming that D&D takes control of players in a Mazes and Monsters scenario, just that experiences shape how we perceive and act, which is obviously true. So, I do not think that playing in D&D games that cast orcs and drow as monstrous, or even games that mirror real-world racism, will make players into white supremacists. Rather, I agree with the obvious claim: our experiences influence us, and getting comfortable with fictional racism makes it slightly easier to get comfortable with real-world racism.

For those who prefer Kant, one could also advance a Kantian style argument: it does not matter whether the in-game racism that mirrors real world racism has an impact on people’s actions or not, what matters is whether such racism is wrong or right in and of itself. If racism is wrong, then even fictional racism would thus be wrong.

As someone who regularly games, I can see the obvious danger in the arguments I have just advanced: would not the same arguments apply to a core aspect of D&D, namely the use of violence? I will address these matters in the next essay.

During the COVID-19 pandemic, Leon County in my adopted state of Florida mandated the wearing of face coverings in indoor public spaces. There were numerous exceptions, such as while exercising (at a distance) and for health reasons. Those who violated the ordinance faced an initial $50 fine, which increased to $125 and then up to $250. As would be expected, this ordinance was met with some resistance. Some even claimed that the mask mandate was tyranny.

While discussing the tyranny of the mask during COVID-19 has some historical value, there is also the general issue of whether such health-focused mandates are tyrannical. After all, it is just a matter of time before the next pandemic, and the state might impose mandates intended to keep people safe. Or it might not, depending on who is in charge.

One challenge is agreeing on a serious definition of “tyranny” beyond “something I don’t like.” Since American political philosophy is based heavily on John Locke, he is my go-to for defining the term.

Locke takes tyranny to be the “exercise of power beyond right.” For him, the right use of power is for the good of the citizens and a leader’s use of power for “his own private separate advantage” is exercising that power “beyond right.” Locke also presents some other key points about tyranny, noting that it occurs when “the governor, however entitled:

  • Makes his will and not the law the rule
  • Does not direct his commands and actions to the preservation of the properties of his people.
  • Directs them to the satisfaction of his own ambition, revenge, covetousness, or any other irregular passion.”

Did the ordinance, and similar impositions, meet this definition? On the face of it, no. The aim of the ordinance seemed to be the good of the citizens: reducing the chances of infection. It was also aimed at allowing businesses and other public places to remain open; that is, at the preservation of the properties of the people. There is no evidence that those in office used the ordinance for their “own private separate advantage” or to satisfy some “irregular passion.”

It could be argued that while the objectives of the ordinance were not tyrannical, the ordinance involved exercising power “beyond right.” That is, the ordinance overstepped the legitimate limits of the power of the governing body. Since I am not a lawyer, I will focus on the moral aspect: do authorities have the moral right to impose a mask requirement or similar health measure on the people?

While people tend to answer in terms of their likes and dislikes, I will follow J.S. Mill and use principles I consistently apply in cases of liberty versus safety. As in all such cases, my first area of inquiry is into the effectiveness of the proposed safety measures. After all, if we are giving up liberty to gain nothing, this would be both foolish and wrong.

While there is some debate over the effectiveness of masks, the consensus of experts is that they do help prevent the spread of the virus. There is also the intuitively plausible argument that face coverings reduce the spread of the virus because they reduce the volume and distance of respiratory expulsions, and they block some of what is incoming. Medical professionals have long worn masks for these reasons. In future pandemics, we will also need to evaluate the effectiveness of proposed measures in good faith.

But wearing a mask is not without its costs. Aside from the cost of buying or making masks, they are uncomfortable to wear, they interfere with conversations, and it is hard to look good in a mask. While breathing through a mask does require a tiny bit more effort, this is generally not a significant factor. Those with pre-existing conditions impacting their breathing are also more likely to be severely impacted by COVID-19, so they will need to rationally weigh the relative risks. Anecdotally, I did not find masks problematic for normal wear, though I had practice: I used to run wearing a face mask during Maine winters to keep my face from freezing. That said, the “paper” masks were uncomfortable when soaked with sweat, but I was almost always able to rely on distancing while running.

Weighing the effectiveness of masks against these costs, the masks seem to have had a decisive safety advantage: by enduring some minor discomfort for short periods of time, you could reduce your risk of being infected with a potentially lethal disease. You also reduced the risk of infecting others. Again, whatever measures are proposed during the next pandemic will need to be assessed in this way.

The second issue to address is whether the gain in safety warrants the imposition on liberty. After all, some people did not want to wear masks, and it is an imposition to require this under threat of punishment. My go-to guide on this is the principle of harm presented by J.S. Mill.

Mill contends that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” I will rely on Mill’s arguments for his principle, while granting that it can be criticized in favor of alternative principles.

In discussing his principle, Mill argues that we (collectively) have no right to infringe on a person’s liberty just because doing so would be good for them, or even to prevent them from harming themselves. As long as their actions impact only themselves, their liberty is absolute. Applying this to masks: if they only served to protect the wearer from infection, then Mill’s principle would forbid their imposition, since people have the liberty of self-harm. If that had been the case, I would have agreed with those who saw masks as tyranny: people have the moral right to put themselves at risk if doing so does not harm others. As they say, their body, their choice.

To use an analogy, if I want to go shooting without wearing any eye protection (and I have medical insurance), I have the right to be stupid and risk losing an eye. But masks do more than protect the wearer; they also protect other people. If I go out without a mask while unaware that I am infected, I am putting other people in greater danger: I am potentially harming them. As such, it is no longer just my business; it is their business as well.

Going back to the gun analogy, I do not have the right to shoot my gun whenever and wherever I want, since doing so puts other people at risk of injury and death. I can be rightfully prevented from doing this. To use another analogy, while I think a person has the moral right to turn off the airbag in their car and face a greater risk of injury or death, they do not have the right to remove their brakes, since that would put everyone in danger.

The obvious conclusion is that the imposition of masks was not tyranny. In fact, it is an excellent example of how the state should exercise its power: for the protection of the citizens based on the best available evidence. When the next pandemic arrives, the same approach should be taken. Assuming that the government tries to do anything to address it.

In the previous essay I discussed guilt by association. Not surprisingly, there is an equal but opposite temptation: to refuse to acknowledge bad elements in groups one likes. Giving in to this temptation can result in committing a version of the purity fallacy, which could be called the Denial of Association.

This version of the fallacy occurs when a negative claim about a group based on certain members is rejected by asserting, without adequate support, that the alleged members are not true members of the group. This fallacy is also known as the No True Scotsman fallacy, thanks to the philosopher Antony Flew. For example, if a 2nd Amendment rights group is accused of being racist, its defenders might say that those displaying racist symbols at its events were not real members. This version of the fallacy has the following form:


Premise 1: Negative claim P has been made about group G based on the alleged members M of G.

Premise 2: It is claimed, without support, that the members of M are not true members of G.

Conclusion: Claim P is false.


This reasoning is fallacious because simply asserting that the problematic alleged members are not true members does not show that the claim about the group is false. As always, it is important to remember that fallacious reasoning does not entail that its conclusion is false. A group’s defender could commit this fallacy while their conclusion is correct; they would simply have failed to give a good reason to accept it.

Like many fallacies, it draws its persuasive power from psychological factors. Someone who has a positive view of the group has a psychological, but not logical, reason to reject the negative claim. Few are willing to believe negative things about groups they like or identify with. In Flew’s original example, a Scotsman refuses to believe a story about the bad behavior of other Scotsmen on the grounds that no true Scotsman would do such things. People can also reject the claim on pragmatic grounds, such as when doing so would provide a political advantage.

The main defense against this fallacy is to consider whether the negative claim is rejected on principled grounds or without evidence, such as on psychological or pragmatic grounds. One way to try to overcome a psychological bias is to ask what evidence exists for excluding the alleged members from the group. If there is no such evidence, then all that is left are psychological or pragmatic reasons, which have no logical weight.

Sorting out who or what belongs in a group can be a matter of substantial debate. For example, when people displaying racist symbols show up at gun rights events or protests, the question arises as to whether the protesters, in general, should be regarded as racist. Some might contend that those openly displaying racist symbols should not define the broader group of protesters. Others contend that by tolerating the display of racist symbols the general group shows that it is racist. As another example, those peacefully protesting police violence generally disavow those who engage in violence and vandalism and claim that the violent protesters do not define their group. Others contend that because violence and looting sometimes occur adjacent to or after peaceful protests, the protesters are violent looters. College students peacefully protesting Israel’s actions contend that they are not antisemitic and disavow antisemitism, but their right-wing critics claim they are antisemitic. In some cases, there are actual antisemites involved. In other cases, merely criticizing Israel is cast as antisemitic.

Debates over group membership need not be fallacious. If a principled argument is given to support the exclusion, then this fallacy is not committed. For example, if a fictional 2nd Amendment rights organization, “Anti-Racists for Gun Rights” (ARGR), was accused of being racist because people at their protest displayed racist symbols, showing that none of the racists were members of ARGR would not commit this fallacy.

As another example, if peaceful protesters show that those who engaged in violence and looting are not part of their group, then it would not be fallacious for them to reject the claim that they are violent on the grounds that those committing the violence are not in their group. As a third example, if college students peacefully protesting Israel show that the people shouting antisemitic slogans at the protest were neo-Nazis from off campus, then they would not be committing this fallacy.

Sorting out which people belong to a group and how the group should be defined can be challenging, but it should be done in a principled way. To define a group by the worst of those associated with it runs the risk of committing the guilt by association fallacy. To assert without evidence that problematic members are not true members of a group runs the risk of committing the denial of association fallacy. While both fallacies are psychologically appealing and can be highly effective means of persuasion, they have no merit as arguments.

As a practical matter, the unprincipled use both fallacies in bad-faith efforts to advance their goals. After all, what matters to them is “winning” rather than what is true and good.

It is tempting to define a group you do not like by the worst people associated with it, but this can lead to committing the fallacy of guilt by association. To illustrate, conservative protests sometimes include people openly displaying racist symbols, and this can lead leftists to conclude that all the protesters are racists. As another example, protests against Israel’s actions sometimes include people who make antisemitic statements, and this leads some people to categorize the protests as antisemitic. While this is often done in bad faith, people can sincerely make unwarranted inferences about protests from the worst people present.

Since people generally do not make their reasoning clear, it often must be reconstructed. One possible line of bad reasoning is a hasty generalization. A hasty generalization occurs when a person draws a conclusion about a population based on a sample that is not large enough to adequately support the conclusion. It has the following form:


Premise 1: Sample S (which is too small) is taken from population P.

Premise 2: In sample S, X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in population P.


This is a fallacy because the sample is too small to warrant the inference. In the case of the protesters, inferring that most conservative protesters are racists based on some of them displaying racist symbols would be an error. Likewise, inferring that most people protesting Israel are antisemitic because some of them say antisemitic things would also be an error. At this point it is likely that someone is thinking that even if most conservative protesters are not open racists, they associate with them—thus warranting the inference that they are also guilty. Likewise, someone is probably thinking that people protesting Israel are guilty of antisemitism because of their association with antisemites. This leads us to the guilt by association fallacy.
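To make the sample-size point concrete, here is a minimal sketch in Python. The counts (3 of 12 observed protesters) are hypothetical numbers invented purely for illustration, and the normal-approximation confidence interval is a standard statistical tool I am assuming here, not something from the examples above:

```python
import math

# Hypothetical numbers, purely for illustration: suppose we observed
# 12 protesters at an event and 3 of them displayed racist symbols.
observed = 3
sample_size = 12

p_hat = observed / sample_size  # observed proportion: 25%

# 95% confidence interval for the population proportion, using the
# normal approximation (z = 1.96). With a sample this small the
# approximation is rough, which is exactly the point.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
low = max(0.0, p_hat - margin)
high = min(1.0, p_hat + margin)

print(f"Observed proportion: {p_hat:.0%}")
print(f"95% confidence interval: {low:.0%} to {high:.0%}")
# Roughly 0% to 50%: the sample is consistent with almost none of the
# full protest population being racist, or with about half of it.
```

The point is not the particular numbers but the width of the interval: a handful of observations cannot, by itself, license a claim about most or all of a population.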

The guilt by association fallacy has many variations, but this version occurs when it is inferred that a group or individual has bad qualities because of their (alleged) association with groups or individuals who have those qualities. The form of the fallacy is this:


Premise 1: Group or person A is associated with group or person B.

Premise 2: Group or person B has (bad) qualities P, Q, R.

Conclusion: Group or person A has (bad) qualities P, Q, R.


The error is that the only evidence offered is the (alleged) association between the two. What is missing is a connection adequate to justify the inference. In the conservative protester example, the protesters might be associated with people displaying racist symbols, but this is not enough to warrant the conclusion that they are racists. More is needed than a mere association. What counts as more is, as one would imagine, a matter of considerable debate: those who loathe conservatives will tend to accept relatively weak evidence as supporting their biased view, while those who like the protesters might be blind even to the strongest evidence. Likewise for people protesting Israel. But whatever standards are used to judge association, they must be applied consistently, whether one loathes or loves the group or person in question.

As noted above, people who have protested Israel have been accused of association with antisemites. But the same standards used for conservative protesters must be applied here as well: to infer that, because some protesters have been observed to be antisemitic, most (or all) are as well would commit the hasty generalization fallacy. Naturally, if there is evidence showing that most conservative protesters are racist, or evidence showing that most (or all) people who protest Israel are antisemitic, then the fallacy would not be committed.

To infer that those protesting Israel are antisemitic because some associated with the protests are antisemitic would commit the guilt by association fallacy, just as the fallacy would be committed if one inferred that conservative protesters are racists because they are associated with racists. Obviously, if there is adequate evidence supporting these claims, then the fallacy would not be committed.