Cuphead on SteamDirect

Innuendo Studios presents an excellent and approachable analysis of the infamous Gamergate and its role in later digital radicalization. This video inspired me to think about manufactured outrage, which reminded me of the fake outrage over such video games as Cuphead and Doom. There was also similar rage against the She-Ra and He-Man reboots. Mainstream fictional outrage against fiction included the Republicans’ rage over Dr. Seuss being “cancelled.” Unfortunately, fictional outrage can lead to real consequences, such as death threats, doxing, swatting, and harassment. In politics, fictional outrage is weaponized for political gain, widens the political divide between Americans, and escalates emotions. In short, fictional outrage at fiction makes reality worse.

I call this fictional outrage at fiction for two reasons. The first is that the outrage is fictional: it is manufactured and based on untruths. The second is that the outrage is at works of fiction, such as games, TV shows, movies, and books. Since Thought Slime, Innuendo Studios, Shaun, and others have ably gone through examples in detail, I will focus on some of the rhetorical and fallacious methods used in fictional outrage at fiction. These methods apply to non-fiction targets as well, but I am mainly interested in fiction here. Part of my motivation is to show that some people put energy into enraging others about make-believe things like games and TV shows. While fiction is subject to moral evaluation, it should be remembered that it is fiction. Our good dead friend Plato would, of course, take issue with my view.

While someone can generate fictional outrage by complete lies, this is usually less effective than using some residue of truth. Hyperbole is an effective tool for this task. Hyperbole is usually distinguished from outright lying because hyperbole is an exaggeration rather than a complete fabrication. For example, if someone says they caught a huge fish, they would simply be lying if they caught nothing but would be using hyperbole if they caught a small fish. There can be debate over what is hyperbole and what is simply a lie. For example, when the Dr. Seuss estate decided to stop publishing six books, the Republicans and their allies claimed Dr. Seuss had been cancelled by the left. While it was true that six books would not be published, it can be argued whether saying the left cancelled them is hyperbole or simply a lie. Either way, of course, the claim is not true.

Even if the target audience knows hyperbole is being used, it can still influence their emotions, especially if they want to believe. So, even if someone recognizes that the “wrongdoing” of a games journalist was absurdly exaggerated, they might still go along with the outrage. A person who is particularly energetic and dramatic in their hyperbole can also use their showmanship to augment its impact.

The defense against hyperbole is, obviously, to determine the truth of the matter. One should always be suspicious of claims that seem extreme or exaggerated, although they should not be automatically dismissed, as extreme claims can be true, especially since we live in a time of extremes.

A common fallacy used in fictional outrage is the Straw Man. This fallacy is committed when someone ignores an actual position, claim or action and substitutes a distorted, exaggerated, or misrepresented version of it. This fallacy often involves hyperbole. This sort of “reasoning” has the following pattern:

 

  1. Person A has position X/makes claim X/did X.
  2. Person B presents Y (which is a distorted version of X).
  3. Person B attacks Y.
  4. Therefore, X is false/incorrect/flawed/wrong.

 

This sort of “reasoning” is fallacious because attacking a distorted version of something does not constitute an attack on the thing itself. One might as well expect an attack on a drawing of a person to physically harm the person. To illustrate the way the fallacy is often used, consider what happened to start the “outrage” over Cuphead. A writer played an early version of the game badly, noted that they were doing badly, and was generally positive about the game. All this was ignored by those wanting to manufacture rage: they presented it as a game journalist condemning the game for being too hard because they were bad at games. And it escalated from there.

The Straw Man fallacy is an excellent way to manufacture rage; one can simply create whatever custom villain they wish by distorting reality. As with hyperbole, there is the question of what distinguishes a straw man from a complete fabrication; the difference is that the Straw Man fallacy starts with some truth and then distorts it. To use the Cuphead example, if a person had never even played Cuphead or said anything about it, saying that they hated the game because they are incompetent would be a complete fabrication rather than a straw man.

Straw Man attacks tend to work because people generally do not bother to investigate the accuracy of claims they want to believe; and even if they are not already invested in the claim, checking a claim takes some effort. It is easier to just believe (or not) without checking. People also often expect others to be truthful, which is increasingly unwise.

The defense against a Straw Man is to check the facts. Ideally this would involve going to the original source or at least using a credible and objective source.

A third common fallacy used in fictional outrage is Hasty Generalization. This fallacy is committed when a person draws a conclusion about a population based on a sample that is not large enough. It has the following form:

 

Premise 1: Sample S, which is too small, is taken from Population P.

Conclusion: Claim C is drawn about Population P based on S.

 

The person committing the fallacy is misusing the following type of reasoning, variously known as Inductive Generalization, Generalization, or Statistical Generalization:

 

Premise 1: X% of all observed A’s are B’s.

Conclusion: X% of all A’s are B’s.

 

The fallacy is committed when not enough A’s are observed to warrant the conclusion. If enough A’s are observed, then the reasoning is not fallacious. Since Hasty Generalization is committed when the sample (the observed instances) is too small, it is important to have samples that are large enough when making a generalization.
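
To make the role of sample size concrete, here is a minimal sketch in Python. It is my own illustration rather than part of the original fallacy discussion, and the population proportion (50% of A’s being B’s) is an assumption made for the example. It shows how a tiny sample routinely produces wildly wrong generalizations while a large one does not:

```python
# A minimal sketch: small samples routinely misrepresent a population.
import random

random.seed(1)
TRUE_PROPORTION = 0.5  # assumed share of A's that are actually B's

def observed_share(sample_size):
    """Draw a random sample and return the observed share of B's."""
    draws = [random.random() < TRUE_PROPORTION for _ in range(sample_size)]
    return sum(draws) / sample_size

for n in (5, 50, 5000):
    print(f"sample of {n:>4}: observed {observed_share(n):.0%} B's (true value 50%)")
```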

This fallacy is useful in creating fictional outrage because it enables a person to (fallaciously) claim that something is widespread based on a small sample. If the sample is extremely small and it is a matter of an anecdote, then a similar fallacy, Anecdotal Evidence, can be committed. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation of Hasty Generalization.  It has the following forms:

 

Form One

Premise 1: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2: Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

People often fall victim to this fallacy because stories and anecdotes have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people usually accept this fallacy because they prefer that what is true in the anecdote be true in general. For example, if one game journalist is critical of a game because it has sexist content, then one might generate outrage by claiming that all game journalists are attacking all games for sexist content.

A person can also combine rhetorical tools and fallacies. For example, an outrage merchant could use hyperbole to create a straw man of an author who wrote a piece about whether video game characters should be more diverse and less stereotypical. The straw man could be something like this author wants to eliminate white men from video games and replace them with women and minorities. This straw man could then be used in the fallacy of Anecdotal Evidence to “support” the claim that “the left” wants to eliminate white men from video games and replace them with women and minorities.

The defense against Hasty Generalization and Anecdotal Evidence is to check to see if the sample size warrants the conclusion being drawn. One way that people try to protect their claims from such scrutiny is to use an anonymous enemy. This is done by not identifying their sample’s members but instead referring to a vague group such as “those people”, “the left”, “SJWs”, “soy boys”, “the woke mob”, or whatever. If pressed for specific examples that can be checked, a common tactic is to refer to someone who has been targeted by a straw man fallacy and just use Anecdotal Evidence again. Another common “defense” is to respond with anger and simply insist that there are many examples, while never providing them. Another tactic used here is Headlining.

In this context, Headlining occurs when someone looks at the headline of an article and then speculates or lies about the content. These misused headlines are often used as “evidence”, especially to “support” straw man claims. For example, an article might be entitled “Diversity and Inclusion in Video Games: A Noble Goal.” The article could be a reasoned and balanced piece on the pros and cons of diversity and inclusion in video games. But the person who “headlines” it (perhaps by linking to it in a video or including just a screenshot) could say that the piece is a hateful screed about eliminating white men from video games. This can be effective for the same reason that the standard Straw Man is effective; few people will bother to read the article. Those who already feel the outrage will almost certainly not bother to check; they will simply assume the content is as claimed (or perhaps not care).

There are many other ways to create fictional outrage at fiction, but I hope this is useful in increasing your defense against such tactics.

In the last pandemic, Americans were caught up in a political battle over masks. Those who opposed mask mandates tended to be on the right; those who accepted mask mandates (and wearing masks) tended to be on the left. One interesting approach to the mask debate by some on the right was to draw an analogy between mask mandates and the restrictive voting laws that the Republicans have passed. The gist is that if the left opposed the voting laws, then they should have opposed mask mandates. Before getting into the details of the argument, let us look at the general form of the analogical argument.

An analogy will typically have two premises and a conclusion. The first premise establishes the analogy by showing that the things (X and Y) in question are similar in certain respects (properties P, Q, R, etc.).  The second premise establishes that X has an additional quality, Z. The conclusion asserts that Y has property or feature Z as well. The form of the argument looks like this:

 

Premise 1: X and Y have properties P, Q, R.

Premise 2: X has property Z.

Conclusion: Y has property Z.

 

X and Y are variables that stand for whatever is being compared, such as chimpanzees and humans or apples and oranges. P, Q, and R are also variables, but they stand for properties or features that X and Y are known to possess, such as having a heart. Z is also a variable, and it stands for the property or feature that X is known to possess. The use of P, Q, and R is just for the sake of illustration; the things being compared might have more properties in common.

One simplified way to present the anti-mask mandate analogy is as follows:

 

Premise 1: Mask mandates and restrictive voting laws are similar in many ways.

Premise 2: Restrictive voting laws are opposed by the left.

Conclusion: Mask mandates should also be opposed by the left.

 

While this analogy seems appealing to many anti-mask mandate folks, we must see if it is a strong argument. The strength of an analogical argument depends on three standards; to the extent that an argument meets them, it is strong.

First, the more properties X and Y have in common, the better the argument. This standard is based on the notion that the more two things are alike in other ways, the more likely it is that they will be alike in some other way. Second, the more relevant the shared properties are to property Z, the stronger the argument. A specific property, for example P, is relevant to property Z if the presence or absence of P affects the likelihood that Z will be present. Third, it must be determined whether X and Y have relevant dissimilarities as well as similarities. The more dissimilarities and the more relevant they are, the weaker the argument. So, is the analogy between the restrictive voter laws and mask mandates strong? To avoid begging the question by making a straw man, I will endeavor to make the best analogy I can—within the limits of truth.

On the face of it, both the mask mandate and the restrictive voting laws are aimed at preventing harm. In the case of the mask mandate, the coercive power of the state was to be used to protect people from the pandemic. In the case of the restrictive voting laws, the coercive power of the state was supposed to protect people from voter fraud. This seems an essential and compelling similarity if the state has the right to use its coercive power to protect citizens from harm. As such, if the use of mask mandates was warranted to protect people from the danger of COVID, then the use of restrictive voting laws would be warranted to protect people from the danger of voter fraud. However, further consideration reveals that the analogy fails.

One critical relevant difference is that the number of COVID cases (and deaths) in the United States is huge, while the number of cases of voter (or election) fraud is vanishingly small. The last pandemic killed 1,193,165 people in the United States. In contrast, there were about 16 charged cases of voter fraud in the 2020 Presidential election. In some cases, these charges stem from absurd efforts, such as a person who voted and asked if he could vote for his son. When told he could not, he returned and attempted to impersonate his son. One could attempt to address this by providing evidence that fraud is widespread and pervasive enough to warrant the comparison to COVID. Alternatively, one could argue that few people actually died of COVID, so the number is small enough to warrant the comparison to voter fraud. But there is no evidence for either: COVID killed well over a million people in the United States and voter fraud barely exists.

The level of risk is also important in determining if the use of coercive power of the state is warranted. If an illness arose that only infected 16 people, then it would be absurd for the state to use its power to impose mask mandates. The level of threat would not warrant such an imposition. Naturally, if there were millions of cases of voter fraud, then the state would be warranted in acting as that would be a serious threat.

Another important difference is the severity of the harm. COVID killed well over a million people in the United States (and is still killing people). Voter fraud had no meaningful impact on the 2020 election, and all efforts on the part of Trump and his supporters have failed repeatedly to reveal evidence to the contrary. Mike Lindell, the My Pillow Guy, exemplifies the bizarre absurdity of the conspiracy theory about voter/election fraud. One could attempt to address this by providing evidence that voter/election fraud is harmful enough to warrant the comparison to COVID. Alternatively, one could argue that the danger of COVID was grossly exaggerated and thus warrant the comparison to voter fraud. But there is no evidence for either: COVID killed over a million Americans and the 16 charged cases of voter fraud had no effect on the outcome of the election.

Another relevant difference is the effectiveness of the measures. Justifying the use of the coercive power of the state requires showing that this use is effective in addressing the harm. To compel people and produce no benefit would be a failure on the part of the state. While masks were not magical armor against COVID, they do provide a meaningful level of protection when used correctly. In contrast, the restrictive laws the Republicans passed do not seem to have any relevance to preventing fraud (which barely exists). For example, even Lindsey Graham admitted that the law preventing voters from receiving food and water while waiting in line does not make ‘a whole lot of sense.’ One could attempt to address this by providing evidence that the laws are effective at preventing fraud and so warrant the comparison to mask mandates. Alternatively, one could argue that the effectiveness of masks was grossly exaggerated and thus warrant the comparison. But there is no evidence for either: masks seemed moderately effective while the voter restrictions were not. Given this evaluation, the analogy is weak. Interestingly, the analogy would be a problem for people who are anti-mandate but pro voter restriction.

There are other relevant differences as well: voting is a foundational right that should be protected rather than restricted, being allowed to freely infect people during a pandemic is not a right, and masks were barely an inconvenience.

If we do accept that mask mandates and voter restriction laws are analogous and it is assumed that the laws are warranted because they are aimed at addressing a harm that barely exists, then it would follow that mask mandates were justified. After all, the mandates were aimed at addressing a disease that killed well over a million Americans. Alternatively, if one accepts that the mask mandate is unwarranted despite the harms of illness and death, then it would follow (by analogy) that the restrictive voting laws are utterly unwarranted since they impose great restrictions to (allegedly) address a harm that barely exists. But this assessment will have little impact: the comparison seemed to be made mostly in bad faith for rhetorical purposes rather than being a well-considered argument based on a theory of when the state should coerce its citizens.

While Republican politicians in my adopted state of Florida profess to love freedom, they have been busy passing laws to restrict freedom. During the last pandemic, Governor DeSantis opposed mask mandates and vaccine passports on the professed grounds of fighting “medical authoritarianism.” He also engaged in the usual Republican attacks on cancel culture, claiming to be a supporter of free speech. However, the Governor and the Republican-dominated state legislature banned ‘critical race theory’ from public schools, mandated a survey of the political beliefs of faculty and students, and engaged in book banning. On the face of it, the freedom-loving Republicans appear to be waging war on freedom. One could spend hours presenting examples of the apparent inconsistencies between Republican value claims and their actions, but my focus here is on value vagueness.

In my ethics class, I teach a section on moral methods which are argument templates for ethical reasoning. One method, which is useful beyond ethics, is Logical Consistency. Two claims are logically consistent with each other when both can be true at the same time. For example, the claim “restricting freedom is sometimes acceptable” is consistent with the claim “restricting freedom is sometimes unacceptable” since they can both be true.  Two claims are inconsistent when both cannot be true at the same time (but both could be false). For example, the claim “people should be free from government control” would seem to be inconsistent with the claim “the government should ban the teaching of critical race theory.”  This is because while these claims cannot both be true at the same time, they could both be false.
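
Since inconsistency is just the impossibility of all the claims being true together, it can be checked mechanically. The following Python sketch is my own illustration, not part of the course method: the atomic propositions and the bridging claim connecting them are assumptions added for the example. It brute-forces every truth assignment and reports whether any assignment makes all the claims true at once.

```python
# A brute-force consistency check: a set of claims is consistent if at least one
# assignment of truth values to the atomic propositions makes every claim true.
from itertools import product

# Assumed atoms for the example:
#   F = "people should be free from government control"
#   B = "the government should ban the teaching of critical race theory"
ATOMS = ["F", "B"]

# Each claim is a function from a truth assignment to True/False.
claims = {
    "freedom_from_control": lambda v: v["F"],
    "ban_crt_teaching": lambda v: v["B"],
    # Bridging assumption: banning a kind of teaching is a form of government control.
    "ban_is_control": lambda v: (not v["B"]) or (not v["F"]),
}

def consistent(claim_funcs):
    """Return True if some truth assignment satisfies every claim."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(check(v) for check in claim_funcs.values()):
            return True
    return False

print(consistent(claims))  # False: the three claims cannot all be true together
```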

If someone makes inconsistent claims, then at least one of their claims must be false. The fact that two (or more) claims are inconsistent does not, however, show which is false. But you can know that a set of claims contains inconsistent claims without knowing which ones are false. Since logically inconsistent claims cannot be true at the same time, it is irrational to accept such claims when their inconsistency is known. But there is a way to respond, rationally, to a seemingly reasonable charge of inconsistency.

In some cases, it is possible to respond to the charge of inconsistency by dissolving it. This is done by arguing that the inconsistency is merely apparent: despite appearances, the claims can both be true.

In the case of value claims, such as claims about political or moral matters, an inconsistency can seem to occur because of how the person making the charge defines a term or phrase. Their definition can be different from that used by the person making the claim. In some cases, this difference can be the result of bad faith, but people can disagree about definitions in good faith.  The concept of freedom is an excellent example of this: people have different definitions of this concept, and the definition is relevant to sorting out a charge of inconsistency about freedom.

Those who read my work know that I often accuse Republicans of being inconsistent. But they could be defended by showing that under their conception of freedom, they are consistent. For example, the same Republicans who rage against “cancel culture” and lost their minds over Dr. Seuss and Mr. Potato Head passed laws cancelling the teaching of (what they claim is) critical race theory. They are also the people who professed outrage when an athlete protested police violence during the national anthem. On the face of it, they seem to make inconsistent claims: people should be free to express their views, but people should be forbidden from teaching critical race theory and condemned for protesting police violence during the national anthem. But there is an easy way to respond to this charge in a sensible manner.

The concept of freedom is vague and saying one supports freedom is to make a vague claim. Outside of a philosophical analysis of “freedom” this is a normal and sensible thing to do: if you spent the time precisely defining your concept of freedom during a speech or conversation, your audience would fall asleep if they could not flee. When a person is pressed on their view, then that is the time to be more precise about their concept. For example, I also say that I am for freedom of expression. But if I were asked if I thought Ted Cruz should be free to shout death threats at Mike Pence, I would say that he should not do that. If someone attacked me for this seeming inconsistency, I would contend that my account of freedom of expression does not make freedom absolute and there are limits to freedom. I would, as always, use the stock argument about the role of harm in limiting freedoms and point out Hobbes’ realization that a right to everything amounts to a right to nothing.

Republicans can do the same and argue that while people should be free to decline masks and vaccinations during a pandemic, they should not be free to discuss critical race theory in class or protest police violence during the national anthem. They would, however, need to show how these are consistent under their theory of freedom. On the face of it, this would be difficult. Take, for example, the usual use of the principle of harm: while this allows me to be against Cruz making death threats against Mike Pence, it would not seem to warrant the freedom to go unvaccinated during a pandemic. Yet it would seem to allow people to teach critical race theory and protest. So, they would need some other means of justifying the different applications. A plausible approach is to use the principle of relevant difference.

If there are relevant differences between the cases, then this warrants a difference in application of the concept of freedom. Common differences include who is taking the action, the nature of the action, and the consequences of the action. In my Cruz example, above, I can appeal to a relevant difference in terms of the harmful consequences of allowing people to make such threats. Republicans could contend that who is acting is relevant; when Republicans are accused of being racist and sexist, the charge is that they treat freedom as something that belongs to white men. They could also focus on the action: kneeling in protest is different from going without a mask during a pandemic (the difference could be that one energizes their base and the other enrages their base). The challenge is showing how this is a relevant difference that warrants the difference in freedom.

It is also worth noting that while value concepts are vague (until clarified), this vagueness can be exploited for rhetorical purposes. The general strategy is to use a value term (or phrase) vaguely to make the target audience feel positive (or negative). Since audience members will generally use their own definition for a concept, this can be very effective: the audience will often assume that they all have the same view of the concept.

Value concepts that are seen as positive can be very effective in this role. “Freedom” is very popular in the United States, so politicians talk endlessly about it. It is a vague concept, so it can be applied broadly and inconsistently. So, for example, Republicans in Florida talked about fighting mask mandates because they love freedom. They also passed laws restricting freedoms, counting on the fact that “freedom” is vague. The defense against being swayed by this rhetoric is to determine what the concept really seems to mean (if anything) to the person using it. In the case of Republicans in Florida, their conception of freedom has very strict limitations that seem to be defined by such factors as race, class, gender, and religion. But, of course, this need not be a problem for them: if their base has a similar conception, they can seem to be speaking virtuously about freedom while acting against freedom.

As a philosopher, I annoy people in many ways. One is that I almost always qualify the claims I make. This is not to weasel (weakening a claim to protect it from criticism) but because I am aware of my epistemic limitations: as Socrates said, I know that I know nothing. People often prefer claims made with certainty and see expressions of doubt as signs of weakness. Another way I annoy people is by presenting alternatives to my views and providing reasons as to why they might be right. This has a downside of complicating things and can be confusing. Because of these, people often ask me “what do you really believe!?!” I then annoy the person more by noting what I think is probably true but also insisting I can always be wrong. This is for the obvious reason that I can always be wrong. I also annoy people by adjusting my views based on credible changes in available evidence. This really annoys people: one is supposed to stick to one view and adjust the evidence to suit the belief. The origin story of COVID-19 provides an excellent example for discussing this sort of thing.

When COVID first appeared in China, speculation about its origin began, and people often combined distinct claims without considering that they need not be combined. One set of claims concerns the origin of the virus: either it is naturally occurring or it was engineered in a lab. At this point, the best explanation is that the virus is naturally occurring. But since humans do engineer viruses, it is possible the virus was engineered. The obvious challenge is to provide proof, and merely asserting it is not enough. So, at this point my annoying position is that the best evidence is that the virus is naturally occurring, but new evidence could change my position.

Other claims are about the origin of the infection. Some claim it entered the human population through a wet market. Some claim it arrived via some other human-bat interactions. There is also the claim that it originated from a lab. All of these are plausible. We know diseases can originate in markets and spread. We know that labs are run by people and people make mistakes and can be sloppy at work. We know humans interact with animals and disease can spread this way.

Back at the start of the last pandemic, I favored the wet market hypothesis because it seemed  best supported by the available evidence. Diseases do jump from livestock to humans, so this claim was plausible. However, the possibility that the virus leaked from the lab has gained credibility. While there is not yet decisive evidence, this hypothesis is credible enough to warrant serious investigation. I do not have a vested interest in backing any particular hypothesis.

There are also claims about whether it was intentional. Some claim it was an accident. Some claim the virus was intentionally introduced, and the nefarious reasons vary between the hypotheses. Accidents are regular occurrences and things are always going wrong. But people intentionally do evil and have various reasons for doing so, ranging from making money, to getting more power, to seeking revenge, to all the other reasons people do bad things. As it now stands, there is little or no evidence that a malign actor intentionally introduced the virus into the population. But evidence could certainly arise. People have done worse things. The malign actor hypothesis is also an umbrella: one must select specific evildoers as the culprit, though there could be many. As always, evidence is needed to support any claims.

It is important to distinguish between the different claims and to keep in mind that evidence that supports one claim might not support another claim often associated with it.

A common mistake is confusing how conjunctions work with how disjunctions work. In logic, a disjunction is an “or” claim which is true when at least one of its disjuncts is true. For example, if I say that I will bring beer or tequila to the party, then my claim is true unless I show up with neither. Showing up with one or the other or both makes the disjunction true.

In the case of a conjunction, both conjuncts must be true for the statement to be true. So, if I say I will bring hot dogs and buns to the party, then I must show up with both for my claim to be true. While it might seem like an odd and obvious mistake, people can treat a conjunction like a disjunction when they want to claim the conjunction is true. In some cases, people will do this intentionally in bad faith. This has been done in the case of COVID.
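
The difference is easy to see in a truth table. Here is a short Python sketch (my own illustration, not part of the original essay) that prints, for the two claims “the virus was leaked” and “the virus was manufactured,” when the “or” claim and the “and” claim come out true:

```python
# A tiny truth table: the disjunction is true whenever at least one part is true,
# while the conjunction is true only when both parts are true.
print(f"{'leaked':>8} {'manufactured':>13} {'leaked or man.':>15} {'leaked and man.':>16}")
for leaked in (True, False):
    for manufactured in (True, False):
        disjunction = leaked or manufactured
        conjunction = leaked and manufactured
        print(f"{str(leaked):>8} {str(manufactured):>13} "
              f"{str(disjunction):>15} {str(conjunction):>16}")
```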

As noted above, the lab leak hypothesis for COVID has gained credibility. Because of this, some might conclude the virus was also manufactured. The person could think that because there is reason to believe the virus leaked from a lab, it is also true that it was manufactured. If it is true that the virus was leaked, then one part of the claim “the virus was manufactured and leaked” would be true, namely that it was leaked. So, someone might be tempted to take the entire claim as true (or make the claim in bad faith). After all, if it were true that the virus was leaked, then it would be true that it was leaked or it was manufactured. But this would be a matter of logic; it would thus also be true that the virus was leaked or unicorns exist. As always, it is important to determine which part of a conjunction is supported by the evidence. If both claims are not supported, then you do not have good reason to accept the conjunction as true. The last annoying thing I will look at is the fact that being right does not mean a person was justified.

Suppose tomorrow brings irrefutable proof the virus was leaked from a lab. Those devoted to this claim would probably take this as proof they were right all along. On the one hand, they would be correct: they were right all along, and other people were wrong. But since at least Plato, philosophers have distinguished between having a true belief and having justification for that belief. After all, one can be right for bad reasons, such as guessing or prejudice. For example, a person who likes horror sci-fi might believe the lab leak hypothesis because they like that narrative. As another example, a racist might accept the lab leak hypothesis because of their prejudices. A nationalist might go with the lab leak because they think China is an inferior country. And so on. But believing on these grounds would not justify the belief; they would have just gotten lucky. As such, their being right would be just a matter of luck—they guessed right based on bad reasons.

One thing people often find confusing about critical thinking and science is that a person can initially be justified in a belief that ultimately turns out false. This is because initial evidence can sometimes warrant belief in claims that are later disproved. In such cases, a person would be wrong but would have all the right reasons to believe. Some of this is because of the problem of induction (with inductive reasoning, the conclusion can always turn out to be false) and some of it is because humans have limited and flawed epistemic abilities.

People who do not understand this will tend to think these good methods are defective because they do not always get the truth immediately and they do not grasp that a person can be reasoning well but still end up being wrong. Such people often embrace methods of belief formation that are incredibly unreliable, such as following authoritarian leaders or unqualified celebrities.  If the evidence does turn out to eventually support these initially unjustified beliefs, they do not seem to get that this is how the process works: false claims, one hopes, eventually get shown to be false and better supported claims replace them. As such, those who rejected the lab hypothesis earlier because of the lack of evidence but are now considering it based on the new evidence are doing things right. They are adjusting based on the evidence. I suspect that some approach belief in claims like they might see belief in religion: you pick one and stick with it and if you luck out, then you win. But that is not how rational belief formation works.

What, then, about someone who believed in the lab hypothesis early on and was rational about it? Well, to the degree they had good evidence for their claim, then they deserve credit. However, if they believed without adequate justification, then their being correct was a matter of chance and not the result of some special clarity of reason. To close, people should keep advancing plausible alternatives as this is an important function in seeking the truth. So those who kept the lab hypothesis going because they rationally considered it a possible explanation do deserve their due credit.

Long ago, when I was a student, student loans were mostly manageable. Over the years, the cost of college has increased dramatically, and student loans have become increasingly burdensome. There is also the issue of predatory for-profit schools. Because of this debt burden, there have been proposals to address the student loan problem. Some have even proposed forgiving or cancelling student loans. This proposal has generated hostile responses, although Roxane Gay has advanced some well-reasoned arguments in its defense. I paid my relatively modest loans long ago, so my concern with it is a matter of ethics rather than self-interest. In this essay and those to follow I will consider the ethics of student loan forgiveness and provide some logical assessment of various relevant arguments.

As Gay noted in the New York Times, Damon Linker tweeted that “I think Dems are wildly underestimating the intensity of anger college loan cancellation is going to provoke. Those with college debt will be thrilled, of course. But lots and lots of people who didn’t go to college or who worked to pay off their debts? Gonna be bad.” Linker was right. Even if there were not genuine grassroots anger at student loan forgiveness, Republicans and the right-wing media generated rage against it. But is there any merit to the anger argument?

Put a bit simply, the anger argument against student loan forgiveness is that because federal student loan forgiveness would make many people angry, it would be wrong to do it. This is obviously the appeal to anger fallacy, a fallacy in which anger is substituted for evidence when making an argument. Formally, this version of the fallacy looks like this:

 

Premise 1: X would make people angry.

Conclusion: X is wrong or incorrect.

 

This is bad logic because the fact that something makes people angry has no connection to whether it is true or correct. People can be angry about claims that are true and enraged about things that are good. They can, of course, also be angry about claims that are false and enraged about things that are evil. But the anger people feel does not prove (or disprove) falseness or wrongness. A silly example illustrates this:

 

Premise 1: The triangle haters get angry when it is claimed that triangles have three sides.

Conclusion: Triangles do not have three sides. 

 

Somewhat less silly examples are as follows:

 

Premise 1: Some people got angry about the American colonies rebelling. 

Conclusion: The colonies were wrong to rebel.  

 

Premise 1: Some people are angry about evolution. 

Conclusion: Evolution does not occur.

 

Premise 1: Atheists would be angry if God exists.

Conclusion: God does not exist.

 

As these examples show, drawing a conclusion about the truth of a claim or the morality of something from people being angry is bad reasoning. As such, the anger people might feel about student loan forgiveness is irrelevant to whether it is the right thing to do. But perhaps there is a way to make a non-fallacious argument from anger. One way to do this is to switch from concerns about truth and morality to pragmatism. That is, perhaps it could be argued that the anger of some people would provide a practical reason to not have student loan forgiveness.

While this greatly oversimplifies things, pragmatic arguments are aimed at establishing what would be the most prudent or advantageous thing. This is an argument from consequences. The idea is that the correct choice is the one that generates the best consequences for those who matter. While people tend to think the correct choice is the one they think is best for them, working out an appeal to consequences requires arguing to establish who matters and how to assess the value of the consequences. Laying aside all these concerns, pragmatic arguments from anger can easily be made.

To illustrate, imagine that a politician sees the polls show that most voters are angry about student loan forgiveness and this anger is strong enough to influence their vote. From a pragmatic standpoint, the anger of their voters does give them a practical reason to oppose forgiveness: if they want to increase their chances of being re-elected, then they should oppose it. While this could be for selfish reasons (the politician might want to stay in office to keep cashing in on insider trading) it could also be for benevolent reasons (the politician might want to stay in office to try to improve the lives of their constituents). From a pragmatic standpoint responding to the anger could be the prudent or advantageous thing to do. While these pragmatic reasons can be strong motivating factors, they do not prove (or disprove) anything about the rightness or wrongness of student loan forgiveness. But there is still an option for using anger in a non-fallacious moral argument.

Utilitarianism, a view argued for by the likes of Bentham and Mill, is the moral view that the morality of an action depends on the consequences for those who are morally relevant. Put in simple terms, an action that creates more good for those who count would be better than an action that creates less good (or causes harm). Since utilitarian arguments deal with consequences, it is often possible to re-tool a pragmatic consequentialist argument into a moral argument. Here is how it could be done.

Suppose that there is good reason to believe that Linker is right and anger at any student loan cancellation is “gonna be bad.” If the harms generated by this anger outweigh the benefit of the loan cancellation when considering all Americans, then the loan cancellation would be wrong. Thus, it would seem that the right sort of appeal to anger can work. But there is an obvious concern about the role of the anger in generating the harms.

If cancelling the loans itself resulted in greater harms than not doing so (such as pulling money from critical social programs), then it would seem right to not cancel them. But the anger argument rests on how people respond to the cancellation, not the harm done by the cancelling itself. That is, the harms in question would arise because of what people do because they are angry in response to the cancellation. This leads to an old ethical debate about how to factor in responses when doing the utilitarian calculation. On the one hand, it does seem reasonable to consider how people will respond when sorting out consequences. On the other hand, there is the obvious problem that people could force a change in the moral calculation by responding in ways that would create harms. That is, they could “rig” the moral argument by threatening to respond with terrible actions.

To use a fictional example, imagine a debate over raising the minimum wage in which businesses said they would kill their minimum wage employees, their pets, and their loved ones if the wage were increased. In terms of consequences, this would make increasing the minimum wage extremely harmful and so it would be wrong to increase it. As an alternative fictional example, imagine the much-feared radical leftists threatened to kill business owners, their pets, and their loved ones if the minimum wage is not increased. This would make not increasing it wrong. But there is clearly a problem with assessing the morality of an action based on what the worst people might do in response to that action, since this would make morality hostage to the worst people. One fix is to consider the action apart from such efforts to prevent the action by intentionally increasing the harms while also, obviously enough, assessing the ethics of these efforts. So, when considering student loan cancellation there is the moral issue of the consequences of the cancellation itself and there is the distinct moral issue of whether the responses to it would be morally appropriate or not. That is, we need to see if the anger against loan cancellation is morally warranted. If it is not, then the anger might have negative consequences but yielding to that anger would be wrong. In the next essay I will consider the fairness argument, free of anger.

 

On the face of it, the notion of skill transference in education sounds reasonable: if a student learns one skill, such as Latin or geometry, that requires logical thinking, then this skill should transfer to other areas involving logical thinking, such as categorical logic. Surprisingly, it seems these skills do not transfer. There have also been ill-fated attempts to find skills that would boost general intelligence, such as the idea that learning to play an instrument or chess would also make you smarter. So far, this has not worked out. While learning to play chess makes a person better at chess, it does not seem to boost general intelligence.

Because of its perceived value, there have been efforts to teach students critical thinking. At my university this is one of the competencies we assess as part of our assessment of the General Education curriculum. There is, as would be imagined, an assumption that various and diverse general education classes can teach the general skill of critical thinking. My Philosophy and Religion program also has critical thinking as a competency we assess as part of our assessment and there is, once again, an assumption that there is a general skill being taught. Interestingly, the national data and the data from my university show that students generally do not transfer critical thinking skills. What is extremely interesting is that these skills do not seem to transfer well even within a specific discipline. For example, one might think that taking Critical Inquiry (a critical thinking class) or Logic would confer general critical thinking skills that would be retained and applied in other philosophy classes. But this is generally not the case.

While it is not surprising that very specific skills would not transfer well (for example, learning about metaphysics might not help a student much in ethics), it does seem odd that general critical thinking skills do not transfer very well. Daniel Willingham provides an excellent analysis of this problem.

Willingham presents two excellent examples. One involves the difficulty people have with transferring an understanding of the law of large numbers in the context of randomness (such as dice) to cases such as judging academic performance. That is, a person who gets that rolling a set of dice twice will not tell you whether they are loaded or not might uncritically accept that a person who gets two bad math exam grades must be bad at math. Both scenarios involve the same sort of reasoning (inductive generalizations), but the skill does not seem to transfer between the different applications. If it did, a person who understood the dice situation would also get that a sample of two math tests is too small to support an inference about math skill.
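
A rough simulation makes the dice point vivid. This Python sketch is my own illustration (not from Willingham), and the specific load on the die is an assumption for the example. With two rolls a fair die and a loaded die often look the same, while a thousand rolls separate them clearly:

```python
# Two rolls cannot distinguish a fair die from a loaded one, but many rolls can.
import random

random.seed(2)

def share_of_sixes(loaded, n_rolls):
    """Roll a die n_rolls times and return the fraction of sixes."""
    # Assumed load: the six is three times as likely as any other face.
    faces = [1, 2, 3, 4, 5, 6, 6, 6] if loaded else [1, 2, 3, 4, 5, 6]
    return sum(random.choice(faces) == 6 for _ in range(n_rolls)) / n_rolls

for n in (2, 1000):
    print(f"{n:>5} rolls: fair die {share_of_sixes(False, n):.2f} sixes, "
          f"loaded die {share_of_sixes(True, n):.2f} sixes")
```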

His second example, a classic experiment, involved analogical reasoning. In this example, subjects were asked how a tumor could be treated with a ray that would cause extensive collateral damage. Before being given this problem, the subjects read a story about rebels attacking a fortress that presented an analogy to the tumor situation. Despite having the solution right in front of them, the subjects could not solve the medical problem. The researchers found that telling the subjects that the story might help solve the problem resulted in almost all the subjects being able to apply the analogy. The researchers concluded that the problem was getting the subjects to use the analogy since the analogy itself was easy to use.

Willingham draws the conclusion that, “The problem is that previous critical thinking successes seem encapsulated in memory. We know that a student has understood an idea like the law of large numbers. But understanding it offers no guarantee that the student will recognize new situations in which that idea will be useful.” So how could this connect to the ability of people to hold to inconsistent beliefs?

As noted in my previous essays on inconsistent beliefs, people are good at believing inconsistent claims. Two claims are inconsistent when they both cannot be true, but both could be false. This is different from two claims being contradictory: if one claim contradicts another, one must be true and the other false. As also noted in previous essays, my inspiration for these essays was seeing social media posts by Trump supporters presenting and professing belief in inconsistent (and sometimes contradictory) claims. To illustrate, Trump supporters tended to believe Trump’s claims that COVID-19 was no worse than the flu and that it was also a hoax. When Bob Woodward released tapes proving that Trump acknowledged the danger of the virus, many Trump supporters accepted Trump’s claim that he wanted to play down the virus to avoid a panic. His supporters defended him, claiming great leaders lie to keep morale up in the face of terrible danger (something Plato might accept, given his noble lie). They also claimed he was right to do this in order to prevent panic in the face of a deadly virus. Laying aside all the moral issues here, there is an obvious logical problem: if Trump was right to lie to play down the virus because it is a terrible danger, then this is inconsistent with the claim that it is like the flu (or a hoax). So, if he had to lie because of the danger, then it is not like the flu (or a hoax). But if it is like the flu (or a hoax), then he did not need to lie about the danger. There was a bit of unpleasant fun had in getting a Trump supporter to profess belief in these inconsistent claims in the space of a short Facebook interaction; but almost anyone can easily be caught holding inconsistent beliefs. The transference problem can help explain some of this.

As Willingham has shown, people are generally bad at transferring critical thinking skills between different situations. Differences in content, as he noted, can prevent people from seeing what can become obvious with the right context. Because of this, a person might be very good at discerning inconsistency in specific cases but fail in other cases. As an example, consider a Trump supporter who is very good at finding inconsistencies in claims made by liberals they disagree with. They are motivated to find such problems and continued practice can make them good at finding inconsistencies in this context. But if the context is switched to their own beliefs, the change can prevent skill transference. That is, they can readily see the inconsistencies of a liberal in one context but are unable to see their own inconsistencies. This is analogous to the subjects in the analogy experiment: they had the answer right in front of them but were blind to it until it was pointed out to them.

Put in general terms, people with strong political views can practice attacking and criticizing views they disagree with and develop critical thinking skills they can apply in very specific contexts. But people rarely subject their own beliefs to intense logical scrutiny. People almost never carefully compare their core beliefs to check for logical inconsistencies and so have little practice doing so. Hence, they will tend to be bad at noticing obvious inconsistencies. This, of course, assumes that people are being honest: that they hold the beliefs they profess and are not lying as a strategy. It is to this that I will turn in my next essay.

Unlike the thinking machines of science fiction, human beings can easily believe inconsistent (even contradictory) claims. Based on experience, I am confident I still have inconsistent beliefs and false beliefs. I do not know which ones are false or inconsistent. If I knew, I would (I hope) stop believing the false ones and sort out the inconsistencies. Writing out my ideas helps in this process because others can see my claims and assess them. If someone can show that two of my beliefs are inconsistent (or contradictory) they are helping me weed the garden of my mind. But not everyone is grateful for this sort of help. Although, to be fair, criticism can arise from cruelty rather than honest concern.

While most people do not write extensively about their beliefs, many people present beliefs on social media, such as Facebook, Bluesky and X. Being a philosopher, I have the annoying trait of checking these claims for logical inconsistency and contradictions. Two claims are inconsistent if they both cannot be true at the same time; but they could both be false. If two claims are contradictory, one must be false and the other true.

As would be suspected, the political beliefs people profess are often inconsistent or even contradictory. I have, and perhaps so have you, seen posts making inconsistent or even contradictory claims. As a classic example, it was jarring to see a post mock people who took the COVID pandemic “hoax” seriously, assert that the “China Virus” is a dangerous bioweapon, and then conclude by praising Trump’s great handling of the pandemic and accusing the Democrats of trying to steal credit for the great vaccine that Trump created. It got even stranger when 5G and QAnon were thrown into the posts. Pointing out such inconsistencies usually causes people to angrily double down or make threats. I invite readers to provide examples of how “the libs” also hold inconsistent sets of beliefs. But keep in mind that inconsistency is a matter of logic, and a set of false claims can be consistent with each other. So how do people believe such sets of clearly inconsistent beliefs? Perhaps the concept of choice blindness can shed some light on the matter.

Back in 2005, Swedish researchers developed the concept of choice blindness after conducting an experiment involving choosing between two photos of faces. Each participant was asked which photo they found more attractive, and then the researcher used sleight of hand to make the participant think they had been handed back the photo they picked. But the researcher gave them the photo they had not picked. While one would expect the subject to notice the switch, they generally did not and accepted the switched photo as the one they had picked. They even offered reasons as to why they had picked that photo in the first place (though they had actually rejected it). Follow-up experiments yielded the same results for the taste of jam, financial choices, and eyewitness reports.

These results could be explained away in terms of weak preferences and other factors. For example, if a person is asked to pick between two photos and, at that moment, they slightly prefer one, then it would not be surprising that they would easily change their mind. But one might think that political beliefs would be different, especially in these highly polarized times. Yet people seem to suffer from choice blindness here as well.

In 2018 an experiment was conducted in which participants were given a survey about political questions. The researchers gave the subjects false feedback and found that their beliefs tended to shift accordingly. This effect lasted up to a week and, interestingly, lasted even longer when the researchers asked the participants to defend “their choices.” For example, a person who originally favored raising taxes would be asked by the researchers about “their” view that taxes should not be raised. This person would then tend to believe that taxes should not be raised. The researchers’ explanation is a reasonable one: if a person thinks a belief is their belief, they will be free of many  factors that would have caused them to defend their original belief. This makes sense: if someone believes they believe something, then they will tend to believe and defend it. Roughly put, people believe what they believe they believe—even when they previously did not believe it. So how can this help explain the ability to believe inconsistent or even contradictory claims?

Based on the above, a person can initially believe one claim and then be easily switched to believing (and defending) a claim inconsistent with their original belief.  For example, a person who initially believes that a carbon tax would reduce emissions could have their belief switched by this method to believing (and defending) that carbon taxes would not do that.  These two claims are inconsistent, but a person can easily be switched from one to the other without apparently even noticing.

Now consider a person who believes inconsistent claims. When they make one claim, this would be analogous to their professing their original belief in the choice blindness experiment. When they profess an inconsistent claim, this would be analogous to them professing belief in the claim they were switched to believing by the researchers. In the case of holding inconsistent beliefs, a person would be switching themselves when they switched from professing one belief to professing belief in a claim inconsistent with the first belief. As such, a person would believe the first belief and then seamlessly switch to the inconsistent belief without noticing the inconsistency. Given that the experiment shows that people can be switched to opposite beliefs without noticing, it would be easy for people to hold to inconsistent beliefs without noticing the inconsistency. They believe one belief because they believe it; they believe an inconsistent belief because they believe that as well. That is, people believe what they think they believe and simply ignore or forget any inconsistencies.  While this is certainly not the whole story, choice blindness does shed some light on the ability people have to profess inconsistent beliefs.

Power holders in the United States tend to be white, male, straight, and (profess to be) Christian. Membership in these groups also seems to confer a degree of advantage relative to people outside of them. Yet, as has been noted in previous essays, some claim that the people in these groups are now the “real victims.” In this essay I will look at how a version of the fallacy of anecdotal evidence can be used to “argue” about who is “the real victim.”

The fallacy of anecdotal evidence is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is sometimes taken to be a version of the hasty generalization fallacy (drawing a conclusion from a sample that is too small to adequately support that conclusion). The main difference between the two is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.

Here is the form of the anecdotal evidence fallacy often used to “argue” that an advantaged group is not advantaged:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: A member of Group A was disadvantaged relative to a member of Group B.

Conclusion: Group A is not advantaged relative to Group B (or Group B is not disadvantaged relative to Group A).

 

 

To illustrate:

 

Premise 1: It is claimed that statistical evidence shows that white Americans are advantaged relative to black Americans.

Premise 2: Chad, a white American, was unable to get into his first choice of colleges because affirmative action allowed Anthony, a black American, to displace him.

Conclusion: White Americans are not advantaged relative to black Americans.

 

The problem with this logic is that an anecdote does not suffice to establish a general claim; an adequately large sample is needed to make a strong generalization. But one must also be on guard against another sort of fallacy:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: Member M of Group A is disadvantaged relative to Member N of Group B.

Conclusion: The disadvantage of M is morally acceptable, or M is not really disadvantaged.

 

To illustrate:

 

Premise 1: It is claimed that statistical evidence shows that men are advantaged relative to women.

Premise 2: Andy was disadvantaged relative to his boss Sally when she used her position to sexually harass him.

Conclusion: The disadvantage of Andy is morally acceptable, or Andy was not really disadvantaged.

 

 

While individual cases do not disprove a body of statistical evidence, they should not be ignored. As in the illustration given above, while men generally have a workplace advantage over women, this does not entail that individual men are never at a disadvantage relative to individual women. It also does not entail that, for example, men cannot be the victims of sexual harassment by women. As another illustration, while white men dominate academia, business, and politics, this does not entail that there are no injustices against specific white men in such things as admission, hiring, and promotion. These sorts of situations can lead to moral debates about harm.
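To make this concrete, the following is a minimal sketch in Python (with every number invented purely for illustration) of how a group can be advantaged on average while many of its individual members are still worse off than individual members of the other group. True anecdotes about disadvantaged individuals are fully compatible with the group-level statistics.

```python
# Minimal sketch: a group advantaged on average can still contain many
# individuals who are worse off than individuals in the other group.
# All numbers are made up purely for illustration.
import random

random.seed(0)

# Hypothetical outcome scores (say, income in arbitrary units):
# Group A has a higher mean, but the two distributions overlap heavily.
group_a = [random.gauss(55, 15) for _ in range(100_000)]
group_b = [random.gauss(50, 15) for _ in range(100_000)]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# Fraction of random A-vs-B pairings in which the A member does worse.
a_worse = sum(a < b for a, b in zip(group_a, group_b)) / len(group_a)

print(f"Average for Group A: {mean_a:.1f}")
print(f"Average for Group B: {mean_b:.1f}")
print(f"Share of pairings where the Group A member is worse off: {a_worse:.0%}")
# Typical result: Group A averages higher, yet roughly 40% of individual
# comparisons favor the Group B member -- plenty of true anecdotes that
# do nothing to refute the group-level statistics.
```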

One excellent example is the debate over affirmative action. An oversimplified account of the justification is that groups that have been historically disadvantaged are given a degree of preference in the selection process. For example, a minority woman might be given preference over a white woman in college admissions. The usual moral counter is that the white woman is wronged by this: if she is better qualified, then she should be admitted, even if this entails that the college population will remain almost entirely white.

The usual counter to this is that the white woman is likely to appear better qualified because she has enjoyed the advantages conferred by being white. For example, her ancestors might have built wealth by owning the ancestors of the black woman who was admitted over her, and this inherited wealth meant that her family has been attending college for generations, that she was able to attend excellent schools, and that her family could pay for tutoring and test preparation.

This can be countered by other arguments, such as the argument that the woman did not own slaves herself, so it is unfair to deny her admission despite the “merit” arising from these generational advantages. One can, of course, consider scenarios in which the black woman is from a wealthy family while the white woman is from a poor family. Such cases can be considered in terms of economic class, and one could argue that class should also be a factor. This all leads to the moral issue of whether it is acceptable to inflict some harm on specific members of advantaged groups to address systematic disadvantages, which goes well beyond the scope of this essay.

Fortunately, I do not need to settle this issue here. This is because even if such anecdotes are examples of morally wrong actions, they do not disprove the general statistical claims about relative advantages and disadvantages between groups. For example, even if a few white students are wronged by affirmative action when they cannot attend their first pick of schools, these anecdotes do not disprove the statistical evidence of the relative advantage conferred by being white in America. After all, the claim of advantage is not that each white person is always advantaged over everyone else on an individual-by-individual basis. Rather it is about the overall advantages that appear in statistics such as wealth and treatment by the police. As such, using anecdotes to “refute” statistical data is, as always, a fallacy. But what about cases in which members of an advantaged group do suffer a statistically meaningful disadvantage in one or more areas?

While falling victim to the fallacy of anecdotal evidence is bad logic, it is not an error to consider that members of an advantaged group might face a significant disadvantage (or harm) because of their membership in that group. As would be expected, any example used here will be controversial. I will use the Fathers’ Rights movement as the example. The central claim behind this movement is that fathers are systematically disadvantaged relative to mothers. While there are liberal and conservative versions, the general claim is that fathers and mothers should have parity in the legal system on this matter. Critics, as would be expected, claim that men tend to already enjoy a relative advantage here. But if the Fathers’ Rights movement is correct about fathers being systematically disadvantaged relative to mothers, then this would not be mere anecdotal reasoning. That is, it would not just be a few cases in which individual fathers were disadvantaged relative to a few individual mothers; it would be systematic injustice. But would this area of relative disadvantage disprove the claim of general advantage? Let us look at the reasoning:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: But Group A is disadvantaged relative to Group B in specific area C.

Conclusion: Group A is not advantaged relative to Group B.

 

As presented, this would be an error in reasoning because Group A being disadvantaged in one area would not prove that the group is not advantaged relative to Group B when all areas are considered. To use an analogy, the fact that Team B outscored Team A in the fifth inning of a baseball game does not entail that B is leading. It must be noted that a similar argument with multiple premises like Premise 2 could show that Group A is not advantaged relative to Group B; after all, adequate statistical evidence across enough areas would suffice. There are, of course, questions about how to determine relative advantage, and these can be debated in good faith. One obvious point of dispute is the matter of weighting. For example, if fathers are disadvantaged relative to mothers, how would this count relative to the pay gap between men and women? And so on for all areas of comparison. This does show the need to consider each area as well as a need to assess its value, but this is not unique to the situation at hand, and one could, as is often done, assign crude dollar values to do the math.
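To illustrate the sort of crude bookkeeping this would involve, here is a minimal Python sketch of weighting areas of advantage and disadvantage with assigned dollar values. Every area and figure is a made-up placeholder; the sketch only shows the method, and the weighting itself remains the contestable part.

```python
# Minimal sketch of the "crude dollar value" bookkeeping mentioned above.
# Every area and figure is invented purely for illustration.

# Positive values mean Group A is advantaged in that area relative to
# Group B; negative values mean Group A is disadvantaged there.
areas = {
    "annual pay gap": +8_000,        # hypothetical advantage for A
    "custody outcomes": -3_000,      # hypothetical disadvantage for A
    "hiring and promotion": +2_500,  # hypothetical
    "health outcomes": -500,         # hypothetical
}

net = sum(areas.values())

for area, value in areas.items():
    print(f"{area:>22}: {value:+,}")
print(f"{'net advantage for A':>22}: {net:+,}")

# A positive net total would support the claim that Group A is advantaged
# overall despite being disadvantaged in specific areas. The hard (and
# contestable) part is how each area gets its value, which is exactly the
# weighting dispute described above.
```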

In closing, while individual wrongs and wrongs done to members of advantaged groups as members of that group can occur, they do not automatically disprove the statistical data. 

As the death toll from COVID-19 rose, people on social media started asking if anyone personally knew someone who had gotten COVID or died from it. I first thought they were curious or concerned but then I noticed a correlation: people who asked this question tended to be COVID doubters. For them, the question was not a sincere inquiry but a rhetorical tactic and an attempt to lure people into fallacious reasoning. In this essay I will look at this sort of question as a rhetorical tool.

This question can be raised about things other than COVID, so the generic question is “do you personally know anyone who X?” Used as rhetoric, its purpose is to garner either a “no” response or no response at all. If this succeeds, it can create the impression that X is rare or does not occur. It can also create the impression that X is not serious. In the case of COVID, one goal was to create the impression that COVID is rare. Another goal was to create the impression that it is not that bad. Future pandemics will see the tactic used again.

Rhetoric is logically neutral in that it neither counts for nor against the truth of a claim. Its purpose is to influence feelings, and this is often aimed at making it easier to get people to accept or reject a claim. To use an analogy, rhetoric is like the flavoring or presentation of food: it makes it more (or less) appealing but has no effect on nutritional value. As flavoring and presentation are compatible with serving nutritious food, rhetoric is compatible with serving plausible claims and good arguments. Rhetoric can be used to influence an audience to accept a true claim. For example, a person who wants to protect sharks might address worries about shark attacks by asking the audience if anyone has been attacked by a shark. They are hoping that no one will say “yes” and plan on using that to make the audience receptive to their boring statistics showing that shark attacks are incredibly rare.

There is an obvious risk in using this rhetorical device: it can backfire if someone says “yes”, especially if they tell a vivid story. Psychologically, people are influenced more by anecdotes (especially vivid ones) than by dull statistics. This underlies the fallacies of anecdotal evidence (rejecting statistical data in favor of a story) and misleading vividness (estimating likelihood based on how vivid an event is rather than based on how often it occurs). In the case of the shark example, if someone stands up and says a shark bit their arm off, then this will probably outweigh the statistical data about shark attacks in the minds of the audience. As such, this method can be risky to use.

If this tactic backfires and you are making a true claim, you can try to get the audience to accept the statistical data while honestly acknowledging that rare events can occur. If this tactic backfires and you are trying to deceive the audience, then there are various rhetorical tactics and fallacies that can be used. One tactic is to launch an ad hominem attack on the person who says “yes” and the usual approach is to accuse them of lying. If the attack is successful, this can make the rhetoric even more effective as those who fall for it will tend to reject anyone else who says “yes.” This is, of course, unethical.

It must also be noted that this sort of rhetoric can also be aimed at getting a “yes” response, though this is less common than the one aimed at getting “no.” The same general principles apply to this version.

If you want to be a critical thinker, you should recognize that this rhetorical device proves nothing. Its use also disproves nothing: it would be an error to reject a person’s claim merely because they use this (or any) rhetoric. While rhetoric is neutral, fallacies are always bad, and this sort of question can be seen as fallacy bait. That is, it is aimed at getting people to use or fall for fallacious reasoning.

One possibility is that the question is aimed at getting the audience to engage in the fallacy of anecdotal evidence. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. It has the following forms:

 

Form One

Premise 1: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is inferred about Population P based on Anecdote A.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2: Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

It can also be used to lure people into accepting or committing the hasty generalization fallacy. This fallacy is committed when a person draws a conclusion about a population based on a sample that is not large enough. It has the following form:

 

Premise 1: Sample S, which is too small, is taken from population P.

Conclusion: Claim C is drawn about Population P based on S.

 

The person committing the fallacy is misusing the following type of reasoning, which is known variously as Inductive Generalization, Generalization, and Statistical Generalization:

 

Premise 1: X% of all observed A’s are B’s.

Conclusion: Therefore X% of all A’s are B’s.

 

The fallacy is committed when not enough A’s are observed to warrant the conclusion. If enough A’s are observed, then the reasoning would not commit the hasty generalization fallacy. As you might have noticed, anecdotal evidence and hasty generalization are similar: both involve drawing a general conclusion based on a sample that is too small.
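To see why a handful of responses warrants so little, here is a minimal Python sketch using the standard 95% margin of error for a proportion. It assumes, generously, that the responses form a simple random sample, which replies to a social media question certainly are not.

```python
# Minimal sketch of why sample size matters, assuming a simple random
# sample. Uses the standard 95% margin of error for a proportion:
# roughly 1.96 * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for an observed proportion p from n responses."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

observed_p = 0.5  # worst case for the margin of error
for n in (5, 30, 400, 1500):
    moe = margin_of_error(observed_p, n)
    print(f"n = {n:>5}: {observed_p:.0%} +/- {moe:.0%}")

# Typical output:
#   n =     5: 50% +/- 44%   (tells you essentially nothing)
#   n =    30: 50% +/- 18%
#   n =   400: 50% +/- 5%
#   n =  1500: 50% +/- 3%
# And even a large n is worthless if the sample is biased, as replies to a
# partisan account's question certainly are.
```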

The “do you personally know anyone who X?” question can be used to lure people into making or accepting these fallacies in the following ways. If a few people respond “no,” then these responses can be taken as anecdotes that “prove” that X does not happen often (or is not serious). These “no” responses could also be taken as “disproving” a claim that is based on good statistical evidence. They could also be used as the basis of a hasty generalization; for example, inferring that because a few people said “no” to a question on Twitter, the same holds true for the general population. A lack of responses could also be used as “evidence” in a hasty generalization. For example, someone might reason like this: no one responded “yes” to a question on Facebook, so the answer must be “no” for the general population.

While I have been focused on people raising the question in contexts in which they can get an answer, the tactic can be used in one-way communication as well (such as a YouTube video or televised speech). A person can ask this sort of question in the hope that their target audience will be influenced. For example, a politician might ask “do you personally know anyone who has died of COVID?” in the hopes of getting the audience to believe that the COVID death toll presented by credible media sources is exaggerated.

It must be noted that the same fallacies can be committed with “yes” answers. To illustrate, if a few people respond with “yes” to a Twitter question, it would also be an error to generalize to the entire population. It must also be noted that if the question is being asked in a properly conducted survey that has a large and unbiased sample, then this would probably not be intended to lure people into a fallacy. The conclusion of such a strong generalization would be reasonable to believe. Of course, the conclusion might be that many people believe something that is untrue, but it would be reasonable to believe that many people (mistakenly) believe that untrue claim.

The tactic of using this rhetorical question to bait people into fallacies is most effective when X is statistically uncommon, so there is a good chance that an individual would not personally know someone who X. If X is common or the truth about X is well accepted, then this tactic will usually fail. For example, asking “do you personally know anyone who has heart disease?” would not be an effective way to get people to engage in fallacious reasoning about heart disease. This is because many people know people who have heart disease, and it is well known that it is common. As such, this tactic usually requires an X that is not too common and whose facts are not widely known, though it is possible to first undermine accepted beliefs and then make the tactic work.

This tactic can be effective in situations in which an occurrence is significant or serious, yet uncommon enough that many people will not personally know someone who has been affected. Take, for example, COVID-19. Back during the early days of the pandemic, I had 826 friends on Facebook. At that time, I personally knew two people who had been infected and did not (yet) personally know anyone who had died. As such, it might have seemed almost reasonable to infer that COVID-19 was not a big deal. However, I also do not personally know anyone who was killed on 9/11. Although I personally know several people who are active duty or veterans, I do not personally know anyone who was killed in action. I could go through lists of causes of death or serious injuries and illnesses and note that I do not personally know anyone who died or was otherwise harmed by them. But it should be obvious that it would be an error to infer that such things do not happen or that they are not serious. In the case of COVID, it is not surprising that I did not personally know someone who died in the early days of the pandemic. Given the small number of people I personally know, it was statistically unlikely that a person who died of COVID would be within that group. But it does not follow that the death toll from COVID presented by reputable media sources was untrue, nor does it follow that COVID was not serious. After all, few would conclude that 9/11 did not occur or was not serious because they did not personally know someone who died that day.
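A rough back-of-the-envelope calculation, sketched in Python below, shows why not knowing a victim was exactly what the numbers predicted. The figures are assumptions for illustration only: suppose roughly 100,000 U.S. deaths at that point, a population of about 330 million, and my 826 contacts treated (unrealistically) as a random sample of that population.

```python
# Rough back-of-the-envelope sketch. All figures are assumptions for
# illustration: ~100,000 U.S. COVID deaths at that point, ~330 million
# people, and 826 contacts treated as a random sample of the population.

deaths = 100_000
population = 330_000_000
contacts = 826

p_death = deaths / population                # chance any one person had died
expected_known = contacts * p_death          # expected deaths among my contacts
p_know_at_least_one = 1 - (1 - p_death) ** contacts

print(f"Chance a given person had died: {p_death:.4%}")
print(f"Expected deaths among {contacts} contacts: {expected_known:.2f}")
print(f"Chance of personally knowing at least one victim: {p_know_at_least_one:.0%}")

# Roughly a 0.25 expected count and only about a 22% chance of knowing even
# one victim -- so "I don't personally know anyone who died" is what the
# statistics predict, not evidence against them.
```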

In closing, my main point is to be on guard against being misled by questions like “do you personally know anyone who died of COVID?” While they might be asked sincerely, they can be a rhetorical tactic aimed at baiting you into a fallacy. When the next pandemic arrives, we can expect to see this tactic deployed again.

Suppose you saw a headline saying, “President admits activity was criminal in nature.” If you loathe the president, you might infer he did something criminal and rush to post the article on Facebook or tweet it. If you support the president, you will probably interpret it in a way favorable to him. You might assume the activity was by some enemy of the president or perhaps someone in the administration who betrayed the president with their misdeeds. You might even conclude that it is fake news. But if you are a critical thinker, you would read the article and assess its credibility before drawing an inference about the activity. This is a misleading headline because very different articles could sit beneath it.

Saying “the president admits” would tend to lead people to think the president is involved in the activity; either that he committed the act, or someone connected to him did. But the article could be different from what it seems to imply. For example, the article might state that the president is agreeing that an act of violence done by one of his supporters was a crime. As another example, the headline could be extremely misleading, and the president might have made a quick remark about something completely unrelated to him that he agreed is a crime.

For the sake of this essay, I will adopt the general term “headlining” to cover three aspects of misleading headlines. The first is the intentional creation of a misleading headline as a rhetorical technique. A misleading headline is not a complete fabrication, as that would simply be lying. A misleading headline has some connection to the truth but is crafted to deceive the audience. This can be done in a variety of ways, such as using hyperbole (extravagant exaggeration), downplaying (casting something as less serious or less important), using vague or ambiguous wording, or employing other rhetorical techniques.

There are many reasons to create misleading headlines, and more than one can apply. One common reason is to create a clickbait headline to generate ad revenue; the idea is that an honest headline would not be as interesting. I am not saying that headlines should be written in a dull manner, and a headline that might seem misleading could be defended if it was intended to be interesting rather than to mislead. While there will be unclear cases, we can sort the intentionally misleading headlines from those with an honest intent to be interesting. It is also worth noting that writers can create misleading headlines unintentionally due to a failure of skill rather than a failure of honesty.

Another reason for a misleading headline is as a tool to influence the audience without using outright falsehood. Many biased sites and organizations have two seemingly conflicting goals. The first is to push a narrative and shape beliefs. The second is to retain some credibility as a source of information. Misleading headlines sitting atop factually correct stories allow a site to achieve both goals: the headlines allow them to mislead while the stories allow them to claim they are doing truthful reporting. The writers and editors might even have moral qualms about lying outright but be willing to mislead without technically lying.

The second aspect of headlining is when a reader is influenced to believe what the misleading headline is intended to imply. That is, they have been tricked into believing an untrue interpretation. For example, a person seeing the headline “President admits activity was criminal in nature” used by a site hostile to the president might interpret it as “president admits he committed a crime” and rush to Facebook to post about it. In truth, the president might have just agreed when asked if some crime done by a foreign leader was a crime. In this case the person is a victim of deceit: they believed the news source but have been misled by the headline. This is different from believing an outright lie as a misleading headline is not a complete fabrication and it often sits atop content that is not entirely untrue.

In such a case, the person is making three mistakes. The first is interpreting the headline in a misleading way without considering other plausible interpretations. The second is not reading the article. The third is not being critical of the claim and assessing it. The defense against falling for misleading headlines involves avoiding these three errors.

The third aspect of headlining is the intentional misuse of misleading headlines. This occurs when a person is aware that the headline is misleading but uses it for their own purposes, often by posting the article on social media with their preferred interpretation of the headline. For example, a person who loathes the president might know that the “President admits activity was criminal in nature” headline is about the president agreeing that a foreign leader committed a crime. But they might post a link to the article while making some claim about the president’s guilt in the hope that others will be misled.

A person might even go so far as to create an entire YouTube video based on intentionally misinterpreting headlines. Such people might be called out for this by someone else on YouTube. People can, of course, also just lie about the content of an article and use that to build a straw man argument.

A defense against this tactic has three parts. The first is to question the interpretation and consider other plausible interpretations. The second is to read the actual article to see its content. The third is to be critical of the claims made and apply rational methods of claim assessment. So, always go beyond the headlines.