While fake news presumably dates to the origin of news, the 2016 United States presidential election saw a huge surge in the volume of fakery. While some of it arose from partisan maneuvering, most of it seems to have been driven by the profit motive: fake news drives revenue-generating clicks. While the motive might have been money, there has been serious speculation that fake news (especially on Facebook) helped Trump win the 2016 election. While those who backed Trump would presumably be pleased by this outcome, the plague of fake news should worry anyone who values the truth, regardless of their political ideology. After all, fake news could presumably be just as helpful to the left as to the right. That said, the right lies while the mainstream left remains silent. In any case, fake news is damaging and worth combating.

While it is often claimed that most people do not have the time to be informed about the world, if someone has the time to read fake news, then they have the time to think critically about it. This critical thinking should, of course, go beyond fake news and extend to all important information. Fortunately, thinking critically about claims is surprisingly quick and easy.

I have been teaching students to be critical about claims in general and the news in particular for over two decades, and what follows is based on what I teach in class (drawn, in part, from the text I have used: Critical Thinking by Moore & Parker). I would recommend this book for general readers if it were not, like most textbooks, absurdly expensive. But, on to the critical thinking process that should be applied to claims in general and the news in particular.

While many claims are not worth the effort to check, others are important enough to subject to scrutiny. When applying critical thinking to a claim, the goal is to determine whether you should rationally accept it as true, reject it as false or suspend judgment. There can be varying degrees of acceptance and rejection, so it is also worth considering how confident you should be in your judgment.

The first step in assessing a claim is to match it against your own observations, should you have relevant observations. While observations are not infallible, if a claim goes against what you have directly observed, then that is a strike against accepting the claim. This standard is not commonly used in the case of fake news because most of what is reported is not something that would be observed directly by the typical person. That said, sometimes this does apply. For example, if a news story claims that a major riot occurred near where you live and you saw nothing happen there, then that would indicate the story is in error.

The second step in assessment is to judge the claim against your background information, which consists of all your relevant beliefs and knowledge about the matter. The application is straightforward and just involves asking yourself whether the claim seems plausible when you give it some thought. For example, if a news story claims that Joe Biden plans to start an armed rebellion against Trump, then this should be regarded as wildly implausible by anyone with accurate background knowledge about Biden.

There are, of course, some obvious problems with using background information as a test. One is that the quality of background information varies and depends on the person’s experiences and education (this is not limited to formal education). Roughly put, being a good judge of claims requires already having accurate information stored away in your mind. All of us have many beliefs that are false; the problem is that we generally do not know they are false. If we did, then we would no longer believe them. Probably.

A second point of concern is the influence of wishful thinking. This is a fallacy (an error in reasoning) in which a person concludes that a claim is true because they want it to be true. Alternatively, a person can fallaciously infer that a claim is false because they want it to be false. This is poor reasoning because wanting a claim to be true or false does not make it so. Psychologically, people tend to disengage their critical faculties when they really want something to be true (or false).

For example, someone who really hates Trump would want to believe that negative claims about him are true, so they would tend to accept them uncritically. As another example, someone who really likes Trump would want positive claims about him to be true, so they would accept them without thought.

The defense against wishful thinking of this sort is to be on guard against yourself by being aware of your biases. If you really want something to be true (or false), ask yourself if you have any reason to believe it beyond just wanting it to be true (or false). For example, I am not a fan of Trump and thus would tend to want negative claims about him to be true. So, I must consider that when assessing such claims. Unfortunately for America, much of what Trump claims is objectively untrue.

A third point of concern is related to wishful thinking and could be called the fallacy of fearful or hateful thinking. While people tend to believe what they want to believe, they also tend to believe claims that match their hates and fears. That is, they engage in the apparent paradox of believing what they do not want to believe. Fear and hate impact people in a very predictable way: they make people stupid when it comes to assessing claims.

For example, there are Americans who hate and fear migrants and are thus receptive to claims that migrants will eat cats and dogs. While they would presumably wish such claims were false, they will often believe them because the claims correspond with their hate and fear. Ironically, their great desire for it not to be true motivates them to feel that it is true, even when it is not.

The defense against this is to consider how a claim makes you feel. If you feel hatred or fear, you should be very careful in assessing the claim. If a news claim seems tailored to push your buttons, then there is a decent chance that it is fake news. This is not to say that it must be fake, just that it is important to be extra vigilant about claims that are extremely appealing to your hates and fears. This is a very hard thing to do since it is easy to be ruled by hate and fear.

The third step involves assessing the source of the claim. While the source of a claim does not guarantee the claim is true (or false), reliable sources are obviously more likely to get things right than unreliable sources. When you believe a claim based on its source, you are making use of what philosophers call an argument from authority. The gist of this reasoning is that the claim being made is true because the source is a legitimate authority on the matter. While people tend to regard as credible sources those that match their own ideology, the rational way to assess a source involves considering the following factors.

First, the source needs to have sufficient expertise in the subject matter in question. One rather obvious challenge here is being able to judge whether the specific author or news source has sufficient expertise. Broadly, the question is whether the person (or the organization) has the relevant qualities, and these are assessed in terms of such factors as education, experience, reputation, accomplishments, and positions. In general, professional news agencies have such experts. While people tend to dismiss Fox, CNN, or MSNBC depending on their own ideology, their actual news (as opposed to editorial pieces or opinion masquerading as news) tends to be factually accurate. Unknown sources tend to be lacking in these areas. It is also wise to be on guard against fake news sources pretending to be real ones. This can be countered by checking the site address against the official, confirmed address of the professional news source.

Second, the claim made needs to be within the source’s area(s) of expertise. While a person might be very capable in one area, expertise is not universal. So, for example, a businessperson talking about her own business would be an expert, but if she were regarded as a reliable source for political or scientific claims, that would be an error (unless she also has expertise in those areas).

Third, the claim should be consistent with the views of the majority of qualified experts in the field. In the case of news, using this standard involves checking multiple reliable sources to confirm the claim. While people tend to pick their news sources based on their ideology, the basic facts of major and significant events would be quickly picked up and reported by all professional news agencies such as Fox News, NPR and CNN. If a seemingly major story does not show up in the professional news sources, there is a good chance it is fake news.

It is also useful to check with fact checkers and debunkers, such as PolitiFact and Snopes. While no source is perfect, they do a good job assessing claims, something that does not make liars very happy. If a claim is flagged by these reliable sources, there is an excellent chance it is not true.

Fourth, the source must not be significantly biased. Bias can include such factors as having a very strong ideological slant (such as MSNBC and Fox News) as well as having a financial interest in the matter. Fake news is typically crafted to feed into ideological biases, so if an alleged news story seems to fit an ideology too well, there is a decent chance that it is fake. However, this is not a guarantee that a story is fake. Reality sometimes matches ideological slants. This sort of bias can lead real news sources to present fake news; you should be critical even of professional sources, especially when they match your ideology.

While these methods are not flawless, they are very useful in sorting out the fake from the true. While I have said this before, it is worth repeating that we should be even more critical of news that matches our views. This is because when we want to believe, we tend to do so too easily.

While I was required to take Epistemology in graduate school, I was not interested in the study of knowledge until I started teaching it. While remaining professionally neutral in the classroom, I now include a section on the ethics of belief in my epistemology class and discuss, in general terms, such things as tribal epistemology. Outside of the classroom I am free to discuss my own views on epistemology in the context of politics, and it is a fascinating subject. My younger self from graduate school would be surprised at the words “epistemology” and “fascinating” used together.

While COVID-19 was a nightmare for the world, the professed beliefs of Trump supporters about the pandemic provide an excellent case study in belief. As anyone familiar with these beliefs knows, they form a strange set of inconsistent and even contradictory claims. I am not claiming that every Trump supporter believes all these claims, and I am not claiming that only Trump supporters believe them; but these are all claims professed by those who support Trump.

At the start of the pandemic Trump placed the blame on China and referred to it as “the China virus.” His supporters generally accepted this view. The role of China varies depending on which explanation is offered. Some make the true claim that it originated in China. Others make the unsupported claim that it escaped (or was released intentionally) from a lab. On this view, the virus is generally presented as something bad. After all, it makes no sense to blame China unless the virus is a real problem.

There are also other conspiracy theories about the pandemic. One infamous theory is that the pandemic was real but caused by 5G. This would be inconsistent with the China virus theory, but one could preserve the China link by claiming that 5G technology is made in China.

Trump also advanced the idea that the pandemic did not exist, that it was a hoax. This was echoed by his supporters, although some also advanced the theory that the Democrats infected Trump with the virus. The hoax idea was presented in various ways. For example, on some accounts the virus does exist but is no worse than the flu. This view led to an active anti-mask movement and death threats against public health experts. The anti-mask views make sense if one thinks the virus was a hoax but make less sense if one thinks that the virus was bad enough to warrant making China pay. If it was a hoax perpetrated by the Democrats, then it makes no sense to hold China accountable. And if the virus did real damage and China should pay, then it makes no sense to claim it is a hoax. To be fair, these could be combined into the claim that China and the Democrats ran a worldwide hoax with the cooperation of all governments to harm Trump. Reconciling the 5G theory with the hoax theory would be challenging: if 5G was the cause of the pandemic, then it was not a hoax. And if it was a hoax, there was no pandemic for 5G to cause.

While Trump supporters profess to believe the pandemic was a hoax, over 80% of Republicans claimed to believe that Trump had done a great job with the pandemic. His supporters claimed that he took rapid action (he did not) and that his response was very effective (it was not). Trump also attempted to take credit for the forthcoming vaccines and claimed, without evidence, that the FDA and Democrats stalled them. If the pandemic was a hoax, then it makes no sense to claim that Trump acted rapidly and effectively to counter it, because there would be no pandemic to counter. It could be claimed that Trump acted to counter the hoax, but this would be hard to reconcile with Trump’s claims about the vaccine. If the pandemic was a hoax, then there was no need for a vaccine, and taking credit for a useless vaccine would be silly. A Trump supporter could take the view that the pandemic was no worse than the flu and then credit Trump with addressing something no worse than the flu and developing the equivalent of a flu vaccine. But to the degree that Trump downplayed (lied about) the pandemic, this would undercut claims about how significant his alleged success should be considered.

As I noted earlier, I am not claiming that every Trump supporter believes all these claims. For example, the 5G pandemic theory was not universally embraced by Trump supporters (and is surely held by some who do not support him). However, Trump supporters generally seem to profess belief in many of these claims even though they are not consistent, and some would seem to lead to contradictions.

In logic, two claims are inconsistent when both could be false, but both cannot be true. To use my usual example, the claim that my water bottle contains only vodka and the claim that it contains only water are inconsistent with each other. If the bottle contains only vodka, then it does not contain only water and vice versa. But both could be false: the bottle could be empty. Or it could contain tequila. Many of the claims Trump supporters profess to believe about the pandemic seem inconsistent. For example, the claim that the pandemic was caused by 5G is not consistent with the claim that it is a hoax.

In logic, two claims contradict one another when they must have opposite truth values: if one is true, the other must be false, and vice versa. A contradiction is a claim that must be false because of its logical structure. The stock example in logic is the conjunction P & -P. Since a conjunction is true when the two claims being conjoined are both true and false otherwise, this claim is always false, at least on the assumption that any claim is true or false (but not both). So, if P is true, then -P must be false (and vice versa). Some of the claims Trump supporters profess to believe would seem to entail contradictory claims. For example, if it is claimed that the pandemic was caused by 5G, then this would entail that the pandemic is not a hoax, which would contradict the claim that it is a hoax. Naturally, one could argue that the pandemic was caused by 5G and is also a hoax, provided that the nature of the hoax is defined in a way that allows it to be caused by 5G. As another example, the conspiracy theory that the pandemic was caused by a bioweapon released (intentionally or not) by China (or someone else) would entail that it was not a hoax, which would contradict the claim that it is a hoax. Again, one could try to craft the hoax claims in a way that makes the pandemic both a hoax and caused by a bioweapon. Claiming that it is a hoax about a bioweapon would not do this, since a hoax about a bioweapon is not a bioweapon; it is just a hoax.
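These definitions can be checked mechanically. The following Python sketch encodes the essay's own examples as a toy model (the function names are mine): the vodka/water claims are inconsistent (never both true, though both can be false), while P & -P is a contradiction (false under every truth assignment).

```python
# Toy model of the water-bottle example: the bottle's contents
# can be any of these possibilities.
contents_options = ["vodka", "water", "tequila", "empty"]

def only_vodka(contents):
    return contents == "vodka"

def only_water(contents):
    return contents == "water"

# Inconsistent: no possible contents make both claims true...
assert not any(only_vodka(c) and only_water(c) for c in contents_options)

# ...but both claims can be false together (tequila, or an empty bottle).
assert any(not only_vodka(c) and not only_water(c) for c in contents_options)

# Contradiction: P & -P comes out false under every truth assignment of P.
assert all((p and not p) is False for p in (True, False))

print("inconsistency and contradiction checks passed")
```

The difference shows up clearly: the inconsistent pair has rows where both claims are false, but the contradiction has no row where it is true at all.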

From the standpoint of truth-functional logic (a logic in which the truth of a claim depends on the truth of the parts), the claims made by Trump supporters about the pandemic cannot all be true. In science-fiction, a robot or computer that attempted to accept all these claims would suffer some sci-fi logic failure, perhaps exploding. In reality, mapping out the logical relations between these claims would show that they cannot all be true and there would be no explosions (one hopes). But there is the interesting question of how people can hold to beliefs that cannot all be true and some of which lead to contradictions.

In philosophy, epistemologists (and others) often speak of beliefs as having intentionality. That is, beliefs have aboutness. When a person believes something about their world, they take their belief to correspond to reality. But while a belief has aboutness, it need not be about reality. As an example, if Ted believes in unicorns, his belief is about unicorns (although philosophers disagree about beliefs about things that are not real) but not about real unicorns, because there are no unicorns.

People can also believe that all the claims in a set are true, even though it is not possible for them all to be true. That is, the set contains beliefs that are inconsistent with each other (or even contradictory). A person can even believe that a contradiction is true. Unlike in truth-functional logic, the truth of the claim “Person A believes claim C” does not depend on the truth of the parts, only on the truth of the claim about A believing C. A crude way to look at the matter is to see belief as like a Word file in which one can type any sentence, rather than like a computer program or circuit design that would fail if it contained logical inconsistencies or contradictions. So, saying that a person believes something is like saying it is in their Word file.

Humans are clearly able to believe sets of inconsistent claims and even act on those beliefs, which raises many interesting questions about belief formation and how belief impacts action. As a closing point, people can certainly reconcile apparently inconsistent beliefs by not really believing some or all of them, professing that a claim is true when one believes it is not. That is, lying.

After each eruption of gun violence, there is a corresponding eruption in the debates over gun issues. As with all highly charged issues, people are primarily driven by their emotions rather than by reason. Being a philosopher, I like to delude myself with the thought that it is possible to approach an issue rationally. Like many other philosophers, I am irritated when people say things like “I feel that there should be more gun control” or “I feel that gun rights are important.” Because of this, when I read student papers I strike through all “inappropriate” uses of “feel” and replace them with “think.” This is, of course, done with a subconscious sense of smug superiority. Or so it was before I started reflecting on emotions in the context of gun issues. In this essay I will attempt a journey through the treacherous landscape of feeling and thinking in relation to gun issues. I’ll begin with arguments.

As any philosopher can tell you, an argument consists of a claim, the conclusion, that is supposed to be supported by the evidence or reasons, the premises, that are given. In the context of logic, as opposed to that of persuasion, there are two standards for assessing an argument. The first is an assessment of the quality of the logic: determining how well the premises support the conclusion. The second is an assessment of the plausibility of the premises: determining the quality of the evidence.

On the face of it, assessing the quality of the logic should be an objective matter. For deductive arguments (arguments whose premises are supposed to guarantee the truth of the conclusion), this is the case. Deductive arguments can be checked for validity using such things as Venn diagrams, truth tables and proofs. If a person knows what she is doing, she can confirm beyond all doubt whether a deductive argument is valid or not. A valid argument is one such that if its premises were true, then its conclusion must be true. While a person might stubbornly refuse to accept a valid argument as valid, this would be as foolish as stubbornly refusing to accept that 2+2=4 or that triangles have three sides. As an example, consider the following valid argument:

 

Premise 1: If an assault weapon ban would reduce gun violence, then congress should pass an assault weapon ban.

Premise 2: An assault weapon ban would reduce gun violence.

Conclusion: Congress should pass an assault weapon ban.

 

This argument is valid; in fact, it is an example of the classic deductive argument known as modus ponens or affirming the antecedent. As such, questioning the logic of the argument would just reveal one’s ignorance of logic. Before anyone gets outraged, it is important to note that an argument being valid does not entail that any of its content is true. While this endlessly confuses students, a valid argument that has all true premises must have a true conclusion, yet a valid argument need not have true premises or a true conclusion. Because of this, while the validity of the above argument is beyond question, one could take issue with the premises. They could, along with the conclusion, be false even though the argument is unquestionably a valid deductive argument. For those who might be interested, an argument that is valid and has all true premises is a sound argument. An argument that does not meet these conditions is unsound.
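The validity of modus ponens can be verified by brute force: enumerate every truth assignment and check that no row makes all the premises true while the conclusion is false. A minimal Python sketch (the helper names are my own invention):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    """An argument form is valid if no assignment of truth values
    makes every premise true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample row
    return True

# Modus ponens: "If P then Q; P; therefore Q" -- no counterexample exists.
assert is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                lambda p, q: q)

# Contrast: affirming the consequent ("If P then Q; Q; therefore P")
# is invalid -- the row P=False, Q=True is a counterexample.
assert not is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                    lambda p, q: p)

print("modus ponens is valid; affirming the consequent is not")
```

This mirrors the truth-table method mentioned above: the check is purely mechanical, which is exactly why the validity of a deductive form is an objective matter.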

Unfortunately, there is usually no perfect, objective test for the truth of a premise. In general, premises are assessed in terms of how well they match observations, background information and credible claims from credible sources (which leads to concerns about determining credibility). As should be expected, people tend to prefer premises that match their feelings. This is true for everyone, be that person the head of the NRA or a latte-sipping liberal academic who trembles at the thought of even seeing a gun. Because of this, a person who wants to fairly and justly assess the premises of any argument must be willing to understand their own feelings and work out how they influence their judgment. Since people, as John Locke noted in his classic essay on enthusiasm, tend to evaluate claims based on the strength of their feelings, doing this is difficult. People think they are right because they feel strongly about something and are least likely to engage in critical assessment when they feel strongly.

While deductive logic allows for perfectly objective assessment, it is not the logic commonly used in debates over political issues or in general. The most commonly used logic is inductive logic.

Inductive arguments are arguments, so an inductive argument will have one or more premises that are supposed to support a conclusion. Unlike deductive arguments, inductive arguments do not offer certainty and instead deal in likelihood. A logically good inductive argument is called a strong argument: one whose premises, if true, would probably make the conclusion true. A bad inductive argument is a weak one. Unlike the case of validity, the strength of an inductive argument is judged by applying the standards specific to that sort of inductive argument to the argument in question. Consider, as an example, the following argument:

 

Premise 1: Tens of thousands of people die each year as a result of automobiles.

Premise 2: Tens of thousands of people die each year as a result of guns.

Premise 3: The tens of thousands of deaths by automobiles are morally acceptable.

Conclusion: The tens of thousands of deaths by gun are also morally acceptable.

 

This is a simple argument by analogy in which it is argued that since cars and guns are alike, if we accept automobile fatalities then we should also accept gun fatalities. Being an inductive argument, there is no perfect, objective test to determine whether it is strong or not. Rather, the argument is assessed in terms of how well it meets the standards of an argument by analogy. The gist of these standards is that the more alike the two things (guns and cars) are, the stronger the argument. Likewise, the less alike they are, the weaker the argument.

While the standards are reasonably objective, their application admits considerable subjectivity. In the case of guns and cars, people will differ in how they see the similarities and differences. As would be suspected, the lenses through which people see this matter will be deeply colored by their emotions and psychological backstory. As such, rationally assessing inductive arguments is especially challenging: a person must sort through the influence of emotions and psychology on her evaluation of both the premises and the reasoning. Since arguments about guns are generally inductive, it is no wonder the debate is messy, even on the rare occasions when people are sincerely trying to be rational and objective.

The lesson here is that a person needs to think about how she feels before she can think about what she thinks. Since this also applies to me, my next essay will be about exploring my psychological backstory in regard to guns.

Back in 2016 my husky, Isis, and I had slowed down since we teamed up in 2004, because pulling so many years will slow down both man and dog. When Isis faced a crisis, most likely due to the wear of time on her spine, the steroids she was prescribed helped address the pain and inflammation, and for a while she was tail up and bright eyed once more.

In my previous essay I looked at using causal reasoning on a small scale by applying the methods of difference and agreement. In this essay I will look at thinking critically about experiments and studies.

The gold standard in science is the controlled cause to effect experiment. The objective of an experiment is to determine the effect of a cause. As such, the question is “I wonder what this does?” While conducting such an experiment can be complicated and difficult, the basic idea is simple.

The first step is to have a question about a causal agent. For example, it might be wondered what effect steroids have on arthritis in elderly dogs. The second step is to determine the target population, which might already be taken care of in the first step; in the example, elderly dogs would be the target population. The third step is to pull a random sample from the target population. This sample needs to be representative, which means it needs to be like the target population. For example, a sample from the population of elderly dogs would ideally include all breeds of dogs, male dogs, female dogs, and so on for all relevant qualities of dogs. If a sample is not properly taken, it can be biased. The problem with a biased sample is that the inference will be weak because the sample might not be adequately like the general population. The sample also needs to be large enough. A sample that is too small will also fail to adequately support the inference drawn from the experiment.
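The sampling step can be sketched in a few lines of Python. The dog population below is entirely made up for illustration; the point is that a large enough simple random sample will roughly mirror the population on each relevant trait, which is what "representative" demands:

```python
import random

random.seed(0)  # reproducible toy example

# A hypothetical population of 10,000 elderly dogs with a few traits.
population = [
    {"breed": random.choice(["husky", "lab", "beagle", "mixed"]),
     "sex": random.choice(["male", "female"])}
    for _ in range(10_000)
]

# Step 3: draw a simple random sample from the target population.
sample = random.sample(population, k=500)

def share(group, key, value):
    """Fraction of the group having the given trait."""
    return sum(d[key] == value for d in group) / len(group)

# A representative sample tracks the population on relevant qualities;
# the two fractions printed here should be close.
print(share(population, "breed", "husky"), share(sample, "breed", "husky"))
```

A biased sample would be like sampling only from a husky rescue: the breed shares would no longer track the population, weakening any inference drawn.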

The fourth step involves splitting the sample into the control group and the experimental group. These groups need to be as similar as possible (and can be made of the same individuals). The reason they need to be alike is because in the fifth step the experimenters introduce the cause (such as steroids) to the experimental group and the experiment is run to see what difference this makes between the two groups. The final step is getting the results and determining if the difference is statistically significant. This occurs when the difference between the two groups can be confidently attributed to the presence of the cause (as opposed to chance or other factors). While calculating this can be complicated, when assessing an experiment (such as a clinical trial) it is easy enough to compare the number of individuals in the sample to the difference between the experimental and control groups. This handy table from Critical Thinking makes this easy and also shows the importance of having a large enough sample.

 

Number in Experimental Group          Approximate Figure That the Difference
(with similarly sized control group)  Must Exceed to Be Statistically Significant
                                      (in percentage points)

   10                                 40
   25                                 27
   50                                 19
  100                                 13
  250                                  8
  500                                  6
1,000                                  4
1,500                                  3

 

Many “clinical trials” mentioned in articles and blog posts have very small sample sizes, and this can make their results all but meaningless. This table also shows why anecdotal evidence is fallacious: a sample size of one is useless when it comes to an experiment.
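The table's thresholds can be approximately reproduced with a standard formula: at the 95% confidence level, the difference between two proportions from groups of size n must exceed roughly z * sqrt(2p(1-p)/n), using the worst-case proportion p = 0.5. This is my own reconstruction, not necessarily the textbook's exact method, but it tracks the table closely (for n = 100 it gives about 13.9 points versus the table's 13):

```python
import math

def significance_threshold(n, z=1.96, p=0.5):
    """Approximate percentage-point difference that the experimental and
    (similarly sized) control group must exceed for statistical
    significance at 95% confidence, assuming the worst-case p = 0.5."""
    return 100 * z * math.sqrt(2 * p * (1 - p) / n)

for n in (10, 25, 50, 100, 250, 500, 1000, 1500):
    print(f"{n:>5}: {significance_threshold(n):.1f} percentage points")
```

The formula also makes the table's lesson vivid: the threshold shrinks with the square root of the sample size, so tiny trials can only detect enormous effects.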

The above table also assumes that the experiment is run correctly: the sample was representative, the control group was adequately matched to the experimental group, the experimenters were not biased, and so on for all the relevant factors. As such, when considering the results of an experiment it is important to consider those factors as well. If, for example, you are reading an article about an herbal supplement for arthritic dogs and it mentions a clinical trial, you would want to check on the sample size and the difference between the two groups, and determine whether the experiment was properly conducted. Without this information, you would need to rely entirely on the credibility of the source. If the source is credible and claims that the experiment was conducted properly, then it would be reasonable to trust the results. If the source’s credibility is in question, then trust should be withheld. Assessing credibility is a matter of determining expertise, and the goal is to avoid being a victim of a fallacious appeal to authority. Here is a short checklist for determining whether a person (or source) is an expert:

 

  • The person has sufficient expertise in the subject matter in question.
  • The claim being made by the person is within her area(s) of expertise.
  • There is an adequate degree of agreement among the other experts in the subject in question.
  • The person in question is not significantly biased.
  • The area of expertise is a legitimate area or discipline.
  • The authority in question must be identified.

 

While the experiment is the gold standard, there are times when it cannot be used. In some cases, this is a matter of ethics: exposing people or animals to something potentially dangerous might be deemed morally unacceptable. In other cases, it is a matter of practicality or necessity. In such cases, studies are used.

One type of study is the non-experimental cause to effect study. This is identical to the cause to effect experiment with one critical difference: the experimental group is not exposed to the cause by those running the study. For example, a study might be conducted of dogs who recovered from Lyme disease to see what long-term effects it has on them. It would be cruel to give dogs Lyme disease to study its effects, although researchers often try to justify such cruelty in the name of progress.

The study, as would be expected, runs in the same basic way as the experiment and if there is a statistically significant difference between the two groups (and it has been adequately conducted) then it is reasonable to make the relevant inference about the effect of the cause in question.

While useful, the study is weaker than the experiment. This is because those conducting the study must take what they get as the experimental group is already exposed to the cause and this can create problems in properly sorting out the effect of the cause in question. As such, while a properly run experiment can still get erroneous results, a properly run study is even more likely to have issues.

A second type of study is the effect to cause study. It differs from the cause to effect experiment and study in that the effect is known but the cause is not. Hence, the goal is to infer an unknown cause from the known effect. It also differs from the experiment in that those conducting the study obviously do not introduce the cause.

This study is conducted by comparing the experimental group and the control group (which are, ideally, as similar as possible) to sort out a likely cause by considering the differences between them. As would be expected, this method is less reliable than the others since those doing the study are trying to backtrack from an effect to a cause. If considerable time has passed since the suspected cause, this can make the matter even more difficult to sort out. Those conducting the study also must work with the experimental group they happen to get, and this can introduce complications into the study, making a strong inference problematic.

An example of this would be a study of elderly dogs who suffer from paw knuckling (the paw flips over so the dog is walking on the top of the paw) to determine the cause of this effect. As one might suspect, finding the cause would be challenging as there would be a multitude of potential causes in the history of the dogs ranging from injury to disease. It is also likely that there are many causes in play here, and this would require sorting out the different causes for this same effect. Because of such factors, the effect to cause study is the weakest of the three and supports the lowest level of confidence in its results even when conducted properly. This explains why it can be so difficult for researchers to determine the causes of many problems that, for example, elderly dogs suffer from.

In the case of Isis, the steroids that she was taking were well-studied, so it was quite reasonable for me to believe that they were a causal factor in her remarkable but all too brief recovery. I do not, however, know for sure what caused her knuckling as there are so many potential causes for that effect. However, the important thing is that she was able to walk normally about 90% of the time and her tail was back in the air, showing that she was a happy husky.

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a sprinting blur of fur, she was reduced to sauntering. Still, lesser beasts feared her (and to a husky, all creatures are lesser beasts) and the sun was warm in the backyard, so her life was good even at the end.  

Faced with the challenge of keeping her healthy and happy, I relied a great deal on what I learned as a philosopher. As noted in the preceding essay, my philosophical skills kept me from falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better, although more cases of something bad (like arthritis pain) would be undesirable from other standpoints. The two cases can involve the same individual at different times; the method does not require different individuals, though it works in those cases as well. For example, when sorting out Isis’ knuckling problem the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also investigated other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy and conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be a coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect, especially since she had eaten peanut butter regularly since we became a pack. It is also important to consider that an alleged cause might be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage.
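The comparison at the heart of the method of difference can be sketched in a few lines of code. The factor names here are invented for illustration; real cases rarely come so neatly labeled:

```python
# Hypothetical sketch of Mill's method of difference: each case is a set
# of factors, and the candidates are whatever is present when the effect
# occurs but absent when it does not. Factors are invented examples.
case_with_effect = {"old age", "long walk", "slippery floor", "peanut butter"}
case_without_effect = {"old age", "peanut butter"}

candidate_causes = case_with_effect - case_without_effect
print(candidate_causes)  # the factors that differ between the two cases

# The difference only yields *candidates*: a candidate could still be a
# coincidence, or a joint effect of some hidden third factor.
```

The set difference captures the logic of the method, but not its hard part: deciding which differences are genuinely relevant.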

You must also keep in mind the possibility of reversed causation, which is when the alleged cause is the effect. For example, a person might think that limping is causing knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be easy. For example, if a dog slips and falls, then has trouble walking, the most likely cause is the fall. But it could still be something else. In other cases, sorting out the cause can be difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert, such as a vet. Medical tests, for example, are useful for sorting out the differences and finding a likely cause.

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of soreness, I started giving her senior dog food, glucosamine and extra protein. What followed was an improvement in her mobility and the absence of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I did consider that I could be wrong. Fortunately, I did have good evidence that the steroids Isis was prescribed worked as she made a remarkable improvement after starting them and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids were the cause of her improvement, though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all cases. In this method, the cases exhibiting the effect (such as knuckling) are considered to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
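This two-step approach, agreement first and then difference, can be sketched with simple set operations, again using invented cases and factor names:

```python
# Hypothetical sketch: the method of agreement followed by the method of
# difference, as described above. All cases and factors are invented.
effect_cases = [
    {"old age", "nerve damage", "long walk"},
    {"nerve damage", "arthritis"},
    {"nerve damage", "long walk", "cold weather"},
]
no_effect_cases = [
    {"old age", "long walk"},
    {"arthritis", "cold weather"},
]

# Agreement: factors common to every case in which the effect occurred.
common = set.intersection(*effect_cases)

# Difference: keep only the common factors that are absent from every
# case in which the effect did not occur.
hypotheses = {f for f in common
              if not any(f in case for case in no_effect_cases)}
print(hypotheses)
```

As with the method of difference, this yields hypotheses worth testing, not established causes.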

One of the main weaknesses of these methods is that they tend to have very small sample sizes, sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies, which is the subject of the next essay in this series.

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house. She set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff. More likely, joining me on 16-mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures. One does not walk a husky; one adventures with a husky. Despite her advanced age, she remained active. After one adventure, she seemed slow and sore. She cried once in pain but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian.

The x-rays showed no serious damage, just an indication of the wear and tear of age. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was with her mobility and pain. If she could get about and be happy, then that was what mattered. She was prescribed medications, and a follow-up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways, and her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

For a while the steroids worked and she could go on slow adventures and enjoy basking in the sun while watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being curious and cautious, I researched all the medications. I ran across forums which included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome and was concerned as I did not want my dog to be killed by her medicine. But I also knew that without medication, she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects and its own horror stories on the web. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might not be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs. They cannot.

While some alternatives had been subject to scientific investigation, most discussions involved a mix of miracles and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that the same product killed his dog. Sorting through all these claims, anecdotes and studies turned out to be a lot of work. Fortunately, I had numerous philosophical tools that helped, especially with claims of the sort “I gave my dog X, then he got better (or died), so X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”).  This fallacy has the following form:

 

Premise: A occurs before B.

Conclusion: Therefore, A is the cause of B.

 

This fallacy is committed when it is concluded that one event causes another just because the alleged cause occurred before the alleged effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could be a coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.

This is not to say that such stories should be rejected out of hand, but they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization.  It has the following forms:

 

Form One

Premise: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.

 

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2:  Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

For example, the statistical evidence that glucosamine-chondroitin can treat arthritis is, at best, weak. But a person might tell a story about how their aging husky “was like a new dog” after she started taking the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I did give my husky glucosamine-chondroitin because it is affordable, has no serious side effects and might have some benefit. I am fully aware of the data and do not reject it; I simply gambled that it might have done her some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a credible vet would be a good source). This can be a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials that provide statistical data and only anecdotal evidence is available. One option is, of course, to investigate the anecdotes and try to do your own statistics. So, if most anecdotes indicate something harmful (or something beneficial) then this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care, as a story is not proof.
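A quick simulation shows why anecdotes can mislead even when everyone reports honestly. Here a supplement is assumed to do nothing at all, while lameness resolves on its own some of the time; every number is invented for illustration:

```python
# Hypothetical simulation: a supplement with zero effect still generates
# plenty of "it cured my dog" anecdotes, because the condition often
# resolves on its own. The recovery rate and counts are invented.
import random

random.seed(0)

n_dogs = 1000
recovery_rate = 0.30  # assumed chance lameness resolves on its own

recovered_after_supplement = sum(
    1 for _ in range(n_dogs) if random.random() < recovery_rate
)
print(f"{recovered_after_supplement} of {n_dogs} owners now have a "
      "'the supplement cured my dog' anecdote, despite zero effect.")
```

With roughly three hundred sincere success stories produced by pure chance, it is easy to see how post hoc anecdotes accumulate around useless treatments.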

The United States has settled into a post-shooting ritual. When a horrific shooting makes the news, many people offer some version of this prayer: “Oh God, let the shooter be one of them and not one of us.” Then people speculate about the identity of the shooter. In most cases the next step is that the Republicans offer thoughts and prayers while the Democrats talk about wanting to pass new gun control laws, if only they could win more elections. The final step is forgetting about that shooting when the next one occurs. My focus in this essay is on the speculation phase.

One of the most recent shootings is the attack on a Mormon church in Michigan which resulted in four people dying in the church and the attacker being killed by the police. As soon as the attack made the news, speculation began on the identity and motives of the shooter. Laura Loomer seemed to claim that the shooter was a Muslim acting as part of a broader plan while Donald J. Trump asserted that it appeared to be a targeted attack on Christians. And, of course, social media was awash with speculation. As this is being written, the suspect has been identified as 40-year-old Thomas Jacob Sanford. He is believed to be a military veteran and there is some evidence he held anti-Mormon views. There is currently no evidence that he was Muslim. The investigation is ongoing, but the speculation continues.

In terms of why people speculate so quickly and without much (if any) evidence, there are various psychological reasons. I will leave those to the psychologists. There are also some practical reasons that connect to critical thinking, so I will briefly discuss those.

One practical reason to speculate immediately and even claim to know the identity and motives of the shooter is to generate clicks and hence income. One recent example of this is when 77-year-old Michael Mallinson, a retired banker living in Toronto, was falsely claimed to be Charlie Kirk’s killer by an account pretending to be a Fox News outlet. Whoever was behind it also claimed he was a registered Democrat, which suggests they had some understanding of their targets. This example, and others like it, shows the importance of confirming a source as credible before accepting a claim. While one outlet might scoop a story, if it is credible, then other news outlets will run it as well and thus it is also wise to see if a claim is confirmed by other credible sources. There is, of course, the obvious problem that there has been a longstanding war against credible media outlets and that we are awash in misinformation and disinformation.

While people can speculate in good faith (believing what they claim), there can be bad faith speculation intended to get an ideological narrative out there as soon as possible. This is because what is claimed first can often establish itself as plausible and then resist efforts to debunk it if it turns out to be false.

Such false claims also provide others with “evidence” that they can use later when making their own false claims. For example, I regularly see people posting the false claim that many mass shooters are trans people, despite this being obviously untrue. As “evidence” people often post images of other posts making a false claim about a shooter’s identity. In some cases, people are acting in a form of good faith: they are being duped and wrongly think they are making true claims. For people who want to believe true things, a wise approach is to confirm whether a claim is true by seeking out multiple credible sources. But there is the obvious problem that people are often locked into ideological bubbles and what they see as credible sources are heavily biased or even dedicated spreaders of disinformation. There are also those who act in bad faith, posting claims about the identity and motives of shooters they know are false and using other untruths as “evidence” in order to advance their agenda, even if that is just to troll and trigger.

It is, of course, tempting to speculate about the identity and motives of shooters. While it might seem reasonable to draw inferences from such things as the target of the shooting, such speculation is still just speculation. For example, Trump speculated that the shooting might have been a targeted attack on Christians because the shooter attacked a church. As noted above, there is now some evidence that Trump was somewhat right: the attack might have been motivated by the shooter’s alleged dislike of Mormons. As this is being written, the religious beliefs of the shooter are unknown, but the United States does have a history of Christian Anti-Mormonism. When Mitt Romney was running for President, I (an Episcopalian) had to argue that Mormons are Christians. As such, any inferences about the shooter’s religious beliefs would be drawn from very thin evidence. The shooter could be a Christian who detested Mormons; but this is just speculation.

From a critical thinking and moral standpoint, the rational and ethical thing to do is to not speculate about a shooter’s identity and motives in public (such as posting on social media). Leave the investigation to the professionals and wait for adequate evidence to become available. This applies whether one is a pundit, a president or just a random person like me. People do, of course, have the right to speculate but rights should always be exercised with prudence and moral restraint.

As this is being written, the story of the stalled escalator is making international news. The gist of the tale is that an escalator at the United Nations building came to a sudden stop just as Trump and the First Lady began their journey upwards. The UN claims that a White House videographer accidentally tripped a safety system, stopping the mechanism. Aside from Trump and Melania getting in some unexpected cardio, nothing happened. While this event might seem utterly insignificant, it provides an excellent and absurd example of the state of American politics.

Some on the right rushed to present a narrative of a sinister plot against Trump, suggesting that it was a deliberate attempt to harm Trump or perhaps even set him up for an assassination attempt. While Trump initially seemed to laugh off the escalator incident, he is now calling for arrests in the wake of what some in the media are calling “escalatorgate.” Fox News personality Jesse Watters jokingly (one hopes) suggested blowing up or gassing the U.N. in retaliation. While all this might strike rational people as nonsense, it is philosophically interesting in terms of critical thinking, epistemology and ethics. In this essay I’ll briefly look at some of these aspects.

In causal explanations it is usually wisest to follow the popular conception of Occam’s Razor and go with the simplest explanation. In the case of the escalator, the simplest explanation is the stated one: someone tripped a safety mechanism. If someone intended to harm the President, rigging an escalator would be both needlessly complicated and extremely unlikely to cause any meaningful harm. Times being what they are, I am obligated to state unequivocally that I condemn any efforts to harm the President or anyone else with escalator sabotage. But there are reasons why someone might claim something sinister occurred and other reasons why someone might believe it. I make this distinction because people can obviously make claims they do not believe.

While there are various psychological reasons why the claim might be made, there are some “good” practical reasons to claim a sinister plot. One is to create a distraction that will take attention from other topics, such as economic woes and the Epstein files. Trump and his allies have turned this into an international story, and I have been drawn in to do my part. However, my point is that this should not be an important story. The second is to energize the base with an “example” of how “they” are out to get Trump. The third is that it provides a pretext for Trump to go after the U.N. But why would anyone believe that there is something sinister going on?

We humans tend to attribute human motivations or intentions to objects or natural phenomena and this gives rise to what we philosophers call the anthropomorphic fallacy. While Trump and his supporters are not making this mistake about the escalator, they could be committing a similar error: they are inferring without adequate evidence that an accidental event was caused by sinister intentions. This “reasoning” involves rejecting the accident explanation in favor of the sinister intention explanation based on psychological factors rather than evidence. That is, Trump and his supporters probably feel that there is a sinister conspiracy against him, so accidents and coincidences are explained in terms of this conspiracy because the explanation feels right. And if the conspiracy theory is questioned, the questioner is accused of being in on the conspiracy. Other accidents and coincidences are also offered as “evidence” that this specific accident or coincidence is part of the conspiracy. It might be objected that people really have tried to hurt Trump, such as occurred with the two failed assassination attempts (that I also condemn). While those do serve as evidence that those two people wanted to harm Trump, they have no relevance to the escalator incident and evidence in support of the escalator conspiracy in particular would be needed.

Another reason why some people might believe this is based on the claim about the right that “every accusation is a confession.” While there are various ways to explain this, a plausible one in some cases is the false consensus effect cognitive bias. This occurs when people assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population. People who might themselves think of sabotaging an escalator to harm someone they dislike would be inclined to believe other people think like them, just as a liar would tend to think other people are also dishonest. Times being what they are, I must clarify that I condemn using escalators to harm people and I am not accusing anyone on the right of planning to do this. This is but a hypothesis about why some people might believe the escalator was sabotaged. Lastly, I’ll take a brief look at an ethical issue of free expression.

As noted above, Jesse Watters joked about bombing the U.N. in retaliation for the escalator. As I am a consistent advocate of free expression, I believe he has the moral right to say this, although it would also be morally acceptable for him to face proportional consequences for doing so. Times being what they are, I must be clear that I do not condone any attempts to harm Watters or even firing him over this. But his remarks are another example of the apparent moral inconsistency of the right, with Brian Kilmeade’s assertion that we should consider executing mentally ill homeless people being the most extreme example to date. Kilmeade had to apologize but faced no meaningful consequences.

After the brutal murder of Charlie Kirk, many on the right rushed to punish those who spoke ill of Kirk, with Watters himself calling for Matthew Dowd to be fired. There was also the suspension of Jimmy Kimmel after alleged intimidation by Trump’s FCC. Less famous people have also been fired, with Vice President Vance urging people to report criticism of Kirk to get these critics fired. This is but one of many examples showing that folks on the right either do not believe in free expression or define the right of free expression as only allowing what they want to express and hear. While this is moral inconsistency, it can be an effective strategy since it allows them the pretense of ethics without the inconvenience of being ethical.

 

The American right is partially defined by its embrace of debunked conspiracy theories such as the big lie about the 2020 election and those involving all things COVID. While some conspiracy theories are intentionally manufactured by those who know they are untrue (such as the 2020 election conspiracy theories), other theories might start with people simply misreading things. For example, consider the claim that there were microchips in the COVID vaccines because of Bill Gates.

The Verge does a step-by-step analysis of how this conspiracy theory evolved, which is an excellent example of how conspiracy claims arise, mutate, and propagate. The simple version is this: in a chat on Reddit, Gates predicted that people would have a digital “passport” of their health records. Some Americans who attended K-12 public schools have already used a paper version of this.  I have my ancient elementary school health records, which I recently consulted to confirm I had received my measles booster as a kid. As this is being written, measles has returned to my adopted state of Florida. The idea of using tattoos to mark people when they are vaccinated has also been suggested as a solution to the problem of medical records in places where record keeping is spotty or non-existent.

Bill Gates’s prediction was picked up by a Swedish website focused on biohacking, which proposed using an implanted chip to store this information. This is not a new idea for biohackers or science fiction, but it was not Gates’s idea. However, the site used the untrue headline, “Bill Gates will use microchip implants to fight coronavirus.” As should surprise no one, the family tree of the conspiracy leads next to my adopted state of Florida.

Pastor Adam Fannin of Jacksonville read the post and uploaded a video to YouTube. The title is “Bill Gates – Microchip Vaccine Implants to fight Coronavirus,” which is an additional untruth on top of the untrue headline from the Swedish site. This idea spread quickly until it reached Roger Stone. The New York Post ran the headline “Roger Stone: Bill Gates may have created coronavirus to microchip people.”

Those familiar with the game of telephone might see this as a dangerous version of that game, as each person changes the claim until it has almost no resemblance to the original. Just as in games of telephone, it is worth considering that people intentionally made changes. In a game of telephone, the intent is to make the final version funny; in the case of conspiracy theories, the goal is to distort the original into the desired straw man. In the case of Bill Gates, it started with the innocuous idea that people would have a digital copy of their health records and ended with the claim that Bill Gates might have created the virus to put chips in people. In addition to showing how conspiracy claims can devolve from innocuous claims, this also provides an excellent example of how conspiracy theories sometimes get it right that we should be angry at someone or something but get the reasons why we should be angry wrong.

While there is no good evidence for the conspiracy theories about Gates and microchips, it is true that we should be angry at Bill Gates’s COVID wrongdoings. Specifically, Gates used his foundation to impede access to COVID vaccines. This was not a crazy supervillain plan; it was “monopoly medicine.” As such, you should certainly loathe Bill Gates for his immoral actions, but not because of the false conspiracy theories. As an aside, it is absurd that when there are so many real problems and real misdeeds to confront, conspiracy theorists spend so much energy generating and propagating imaginary ones. Obviously, these imaginary misdeeds often serve some people very well by distracting attention from the real problems. But back to the origin of conspiracy theories.

While, as noted above, people do intentionally make false claims to give birth to conspiracy theories, it also makes sense that unintentional misreading can be a factor. Having been a professor for decades, I know that people often unintentionally misread or misinterpret content.

For the most part, when professors are teaching basic and noncontroversial content, they endeavor to provide the students with a clear and correct reading or interpretation. Naturally, there can be competing interpretations and murky content in academics, but I am focusing on the clear, simple material where there is general agreement, little or no opposition, and no one with anything to gain from advancing another interpretation. Even in such cases, students can badly misinterpret things. To illustrate, consider this passage from the Apology:


Socrates: And now, Meletus, I will ask you another question—by Zeus I will:  Which is better, to live among bad citizens, or among good ones?  Answer, friend, I say; the question is one which may be easily answered.  Do not the good do their neighbors good, and the bad do them evil?


Meletus: Certainly.


Socrates: And is there anyone who would rather be injured than benefited by those who live with him?  Answer, my good friend, the law requires you to answer— does any one like to be injured?


Meletus: Certainly not.


Socrates: And when you accuse me of corrupting and deteriorating the youth, do you allege that I corrupt them intentionally or unintentionally?


Meletus: Intentionally, I say.


Socrates: But you have just admitted that the good do their neighbors good, and the evil do them evil.  Now, is that a truth which your superior wisdom has recognized thus early in life, and am I, at my age, in such darkness and ignorance as not to know that if a man with whom I have to live is corrupted by me, I am very likely to be harmed by him; and yet I corrupt him, and intentionally, too—so you say, although neither I nor any other human being is ever likely to be convinced by you.  But either I do not corrupt them, or I corrupt them unintentionally; and on either view of the case you lie.  If my offence is unintentional, the law has no cognizance of unintentional offences: you ought to have taken me privately, and warned and admonished me; for if I had been better advised, I should have left off doing what I only did unintentionally—no doubt I should; but you would have nothing to say to me and refused to teach me.  And now you bring me up in this court, which is a place not of instruction, but of punishment.


Socrates’ argument is quite clear and, of course, I go through it carefully because this argument is part of the paper for my Introduction to Philosophy class. Despite this, every class has a few students who read Socrates’ argument as asserting that he did not corrupt the youth intentionally because they did not harm him. But Socrates does not make that claim; central to his argument is the claim that if he corrupted them, then they would probably harm him. Since he does not want to be harmed, he either did not corrupt them or did so unintentionally. This is, of course, an easy misinterpretation to make by reading into the argument something that is not there but seems like it perhaps should or at least could be. Students are even more inclined to read Socrates as claiming that the youth will certainly harm him if he corrupts them and then build an argument around this erroneous reading. Socrates only claims that the youth would be very likely to harm him if he corrupted them, so he was aware that he might not be harmed.

My point is that even when the text is clear, even when someone is actively providing the facts, even when there is no controversy, and even when there is nothing to gain by misinterpreting the text, misinterpretation still occurs. And if this can occur in ideal conditions (a clear, uncontroversial text in a class), then it should be clear how easy it is for misinterpretations to arise in “the wild.” As such, a person can easily misinterpret text or content and sincerely believe they have it right—thus leading to a false claim that can give rise to a conspiracy theory. Things are much worse when a person intends to deceive. Fortunately, there is an easy defense against such mistakes: read more carefully and take the time to confirm that your interpretation is the most plausible. Unfortunately, this requires some effort and the willingness to consider that one might be wrong, which is why misinterpretations occur so easily. It is much easier to go with the first reading (or skimming) and more pleasant to simply assume one is right.


Innuendo Studios presents an excellent and approachable analysis of the infamous GamerGate and its role in later digital radicalization. This video inspired me to think about manufactured outrage, which reminded me of the fake outrage over such video games as Cuphead and Doom. There was also similar rage against the She-Ra and He-Man reboots. A mainstream example of fictional outrage against fiction was the Republicans’ rage over Dr. Seuss being “cancelled.” Unfortunately, fictional outrage can lead to real consequences, such as death threats, doxing, swatting, and harassment. In politics, fictional outrage is weaponized for political gain, widens the political divide between Americans, and escalates emotions. In short, fictional outrage at fiction makes reality worse.

I call this fictional outrage at fiction for two reasons. The first is that the outrage is fictional: it is manufactured and based on untruths. The second is that the outrage is at works of fiction, such as games, TV shows, movies, and books. Since Thought Slime, Innuendo Studios, Shaun, and others have ably gone through examples in detail, I will focus on some of the rhetorical and fallacious methods used in fictional outrage at fiction. These methods apply to non-fiction targets as well, but I am mainly interested in fiction here. Part of my motivation is to show that some people put energy into enraging others about make-believe things like games and TV shows. While fiction is subject to moral evaluation, it should be remembered that it is fiction, although our good dead friend Plato would certainly take issue with my view.

While someone can generate fictional outrage with complete lies, this is usually less effective than using some residue of truth. Hyperbole is an effective tool for this task. Hyperbole is usually distinguished from outright lying because hyperbole is an exaggeration rather than a complete fabrication. For example, if someone says they caught a huge fish, they would be simply lying if they caught nothing but would be using hyperbole if they caught a small fish. There can be debate over what is hyperbole and what is simply a lie. For example, when the Dr. Seuss estate decided to stop publishing six books, the Republicans and their allies claimed Dr. Seuss had been cancelled by the left. While it was true that six books would no longer be published, it can be debated whether saying the left cancelled them is hyperbole or simply a lie. Either way, of course, the claim is not true.

Even if the target audience knows hyperbole is being used, it can still influence their emotions, especially if they want to believe. So, even if someone recognizes that the “wrongdoing” of a games journalist was absurdly exaggerated, they might still go along with the outrage. A person who is particularly energetic and dramatic in their hyperbole can also use their showmanship to augment its impact.

The defense against hyperbole is, obviously, to determine the truth of the matter. One should always be suspicious of claims that seem extreme or exaggerated, although they should not be automatically dismissed, as extreme claims can be true, especially since we live in a time of extremes.

A common fallacy used in fictional outrage is the Straw Man. This fallacy is committed when someone ignores an actual position, claim or action and substitutes a distorted, exaggerated, or misrepresented version of it. This fallacy often involves hyperbole. This sort of “reasoning” has the following pattern:


  1. Person A has position X/makes claim X/did X.
  2. Person B presents Y (which is a distorted version of X).
  3. Person B attacks Y.
  4. Therefore, X is false/incorrect/flawed/wrong.


This sort of “reasoning” is fallacious because attacking a distorted version of something does not constitute an attack on the thing itself. One might as well expect an attack on a drawing of a person to physically harm the person. To illustrate the way the fallacy is often used, consider what happened to start the “outrage” over Cuphead. A writer played an early version of the game badly, noted that they were doing badly, and was generally positive about the game. All this was ignored by those wanting to manufacture rage: they presented it as a game journalist condemning the game as too hard because they were bad at games. And it escalated from there.

The Straw Man fallacy is an excellent way to manufacture rage; one can simply create whatever custom villain they wish by distorting reality. As with hyperbole, there is the question of what distinguishes a straw man from a complete fabrication; the difference is that the Straw Man fallacy starts with some truth and then distorts it. To use the Cuphead example, if a person had never even played Cuphead or said anything about it, saying that they hated the game because they are incompetent would be a complete fabrication rather than a straw man.

Straw Man attacks tend to work because people generally do not bother to investigate the accuracy of claims they want to believe; and even if they are not already invested in the claim, checking a claim takes some effort. It is easier to just believe (or not) without checking. People also often expect others to be truthful, which is increasingly unwise.

The defense against a Straw Man is to check the facts. Ideally this would involve going to the original source or at least using a credible and objective source.

A third common fallacy used in fictional outrage is Hasty Generalization. This fallacy is committed when a person draws a conclusion about a population based on a sample that is not large enough. It has the following form:


Premise 1: Sample S, which is too small, is taken from Population P.

Conclusion: Claim C is drawn about Population P based on S.


The person committing the fallacy is misusing the following type of reasoning, which is variously known as Inductive Generalization, Generalization, or Statistical Generalization:


Premise 1: X% of all observed A’s are B’s.

Conclusion: X% of all A’s are B’s.


The fallacy is committed when not enough A’s are observed to warrant the conclusion. If enough A’s are observed, then the reasoning is not fallacious. Since Hasty Generalization is committed when the sample (the observed instances) is too small, it is important to have samples that are large enough when making a generalization.
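To make the statistical point concrete, here is a rough simulation (the population, its 30% rate, and the sample sizes are all made-up numbers for illustration) of why tiny samples invite Hasty Generalization while larger samples support a reasonable generalization:

```python
import random

random.seed(42)  # make the simulation repeatable

# Hypothetical population: 30% of 10,000 members have property B.
population = [True] * 3_000 + [False] * 7_000

def observed_rate(sample_size: int) -> float:
    """Estimate the rate of B from one random sample of the population."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# Ten tiny samples: the estimates swing wildly around the true 30%.
small_estimates = [observed_rate(5) for _ in range(10)]

# Ten large samples: the estimates cluster tightly near the true 30%.
large_estimates = [observed_rate(2_000) for _ in range(10)]

print("small samples:", small_estimates)
print("large samples:", [round(e, 3) for e in large_estimates])
```

Generalizing from one of the tiny samples, which is the form of the fallacy, could easily yield the claim that 0% or 60% of the population has the property; the same inference from one of the large samples would land close to the truth.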

This fallacy is useful in creating fictional outrage because it enables a person to (fallaciously) claim that something is widespread based on a small sample. If the sample is extremely small and it is a matter of an anecdote, then a similar fallacy, Anecdotal Evidence, can be committed. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation of Hasty Generalization.  It has the following forms:


Form One

Premise 1: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.


Form Two

  1. Reasonable statistical evidence S exists for general claim C.
  2. Anecdote A is presented that is an exception to or goes against general claim C.
  3. Conclusion: General claim C is rejected.


People often fall victim to this fallacy because stories and anecdotes have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people usually accept this fallacy because they prefer that what is true in the anecdote be true in general. For example, if one game journalist is critical of a game because it has sexist content, then one might generate outrage by claiming that all game journalists are attacking all games for sexist content.

A person can also combine rhetorical tools and fallacies. For example, an outrage merchant could use hyperbole to create a straw man of an author who wrote a piece about whether video game characters should be more diverse and less stereotypical. The straw man could be the claim that this author wants to eliminate white men from video games and replace them with women and minorities. This straw man could then be used in the fallacy of Anecdotal Evidence to “support” the claim that “the left” wants to eliminate white men from video games and replace them with women and minorities.

The defense against Hasty Generalization and Anecdotal Evidence is to check to see if the sample size warrants the conclusion being drawn. One way that people try to protect their claims from such scrutiny is to use an anonymous enemy. This is done by not identifying their sample’s members but referring to a vague group such as “those people”, “the left”, “SJWs”, “soy boys”, “the woke mob”, or whatever. If pressed for specific examples that can be checked, a common tactic is to refer to someone who has been targeted by a straw man fallacy and just use Anecdotal Evidence again. Another common “defense” is to respond with anger and simply insist that there are many examples, while never providing them. Another tactic used here is Headlining.

In this context, Headlining occurs when someone looks at the headline of an article and then speculates or lies about the content. These misused headlines are often used as “evidence”, especially to “support” straw man claims. For example, an article might be titled “Diversity and Inclusion in Video Games: A Noble Goal.” The article could be a reasoned and balanced piece on the pros and cons of diversity and inclusion in video games. But the person who “headlines” it (perhaps by linking to it in a video or including just a screenshot) could say that the piece is a hateful screed about eliminating white men from video games. This can be effective for the same reason that the standard Straw Man is effective; few people will bother to read the article. Those who already feel the outrage will almost certainly not bother to check; they will simply assume the content is as claimed (or perhaps not care).

There are many other ways to create fictional outrage at fiction, but I hope this is useful in increasing your defense against such tactics.