When a mass shooting occurs, Republican politicians blame mental illness or video games. Placing the blame on video games makes use of an argument that dates back at least to Plato. In the Republic, Plato argues that exposure to art can have a corrupting effect, making people more likely to engage in bad behavior. While Plato focused mainly on the corrupting influence of tragedy (which could cause people to fall victim to inappropriate sadness), he also discussed the corrupting influence of fictional violence. As he saw it, exposure to fictional violence could incline people to real violence. Plato’s solution to the threat was to ban such art from his ideal city.

This argument has some appeal. People are influenced by their experiences, and repeated exposure to fictional violence could affect how a person feels and thinks. Exposure to non-fiction, such as hateful speech, writings, and tweets, could influence a person in negative ways. The critical question is whether the influence of video games can be a causal factor in violence, especially a mass shooting.

Determining whether video games are a causal factor in mass shootings involves assessing causation in a population. The challenge is showing whether there would be more mass shootings in a population if everyone played video games than if no one did. If there is a statistically significant difference, then video games would have a causal influence on violent behavior. So, let us consider this matter.

If video games were a statistically significant causal factor for mass shootings, then we would expect to see the number of mass shootings varying with the number of video game players in a country. While the United States is a leader in both video game revenues and mass shootings, other countries also have large populations of gamers, yet do not have a corresponding level of mass shootings. As such, video games would not seem to be a significant causal factor in mass shootings.

This does not prove that video games are not a factor at all. Perhaps video games combined with other factors do cause mass shootings. So, we need to look at the differences between the United States and other countries to see what factors combine with video games to cause mass shootings. Now, if video games play a causal role in mass shootings, the question is the extent to which they have this effect.

About 67% of Americans play video games of one form or another. But the concern is not with video games but with violent video games like Call of Duty and Fortnite. While most Americans do not play these games, millions of Americans do. The overwhelming majority of people who play violent video games never become mass shooters. As such, if violent video games do have causal influence, it must be extremely limited, otherwise mass shootings would be more common.

Some politicians have tried to make use of the method of difference to argue that video games are causing mass shootings. This method involves comparing cases in which an effect has occurred to similar cases in which the effect did not occur and finding a plausible difference that could be the cause. This method is a good one but must be used with care to avoid falling into error. The gist of the argument made by these politicians is to conclude that violent video games cause mass shootings because mass shootings increased when violent video games were created. Because of the difference between before and after, video games are taken to be a causal factor.

While it is true that the number of mass shootings correlates with the number of violent video games available (both have increased over the years), correlation is not causation. After all, the number of tech startups has also increased, yet it would be absurd to conclude that they are causing mass shootings. To simply assert that violent video games cause mass shootings because mass shootings increased as more violent video games appeared would be to commit the cum hoc fallacy: concluding that because two things correlate, there must be a causal connection. This does not entail that violent video games do not play a role, but more is needed than mere correlation. As argued above, there seems to be no significant causal connection between violent video games and mass shootings; they merely happen to correlate, as do many other things.
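To make the cum hoc problem concrete, here is a minimal sketch in Python using made-up illustrative numbers (not real data): any two quantities that both trend upward over time will correlate strongly, whether or not either causes the other.

```python
# Minimal sketch with made-up numbers: two series that merely trend upward
# over time will correlate strongly even if neither causes the other.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical yearly counts (purely illustrative, not real statistics).
violent_games_released = [10, 25, 60, 120, 200, 310, 450]
tech_startups_founded = [200, 450, 900, 1600, 2500, 3700, 5200]
mass_shootings = [2, 3, 5, 7, 9, 12, 15]

print(pearson(violent_games_released, mass_shootings))  # high correlation
print(pearson(tech_startups_founded, mass_shootings))   # also high correlation
```

Both correlations come out near 1, which is exactly the point: the numbers alone cannot tell us which, if either, is a cause.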

While blaming video games has political value, it does nothing to address the problem of mass shootings, since there seems to be no meaningful causal connection between real violence and video games.

The foundation of legitimate political authority has been explored by political philosophers like Hobbes and Locke. When the thirteen American colonies revolted, they sought a foundation for political authority. While there are many views, the founders of the United States adopted a philosophy shaped by John Locke: legitimate political authority requires the consent of the governed and the majority should rule. Being aware of what Mill later called the tyranny of the majority, the founders put in place constitutional protections against oppressive incursions by the majority (and the state).

While these ideas appeal to me psychologically because of my upbringing, they also stand up well to philosophical scrutiny. As such, I accept that political legitimacy stems from the consent of the governed and that majority rule with proper protection against the tyranny of the majority is a good idea. For the sake of this essay, I will assume that these two basic principles are correct while acknowledging that they could be refuted.

Since the legitimacy of the government depends on the consent of the governed, it is essential that the governed can provide or withhold consent. As a practical matter, voting is fundamental to this consent. A citizen can also provide consent by not voting, if they are free to vote and decide against doing so. If a citizen is unjustly denied their right to vote, then their consent is not obtained. This weakens a government’s legitimacy, since it would be extending its authority beyond the provided consent. To avoid a charge of absurdity, I must make it clear that I am not claiming that disenfranchising a single citizen destroys the legitimacy of the state. Rather, each unjustly disenfranchised citizen reduces the legitimacy of the state to that degree. I cannot specify the number of disenfranchised voters that would destroy the legitimacy of a state, but to require this would be to fall victim to the line drawing fallacy. But if most citizens were unjustly disenfranchised, that would be an indisputable case in which the state lost legitimacy. At levels less than this, the legitimacy of the state would be reduced proportionally to the degree of unjust disenfranchisement. Simply put, the more unjustly disenfranchised citizens, the less legitimate the state. Individual citizens who are unjustly disenfranchised can make a reasonable case that they owe little or no obedience to the state that has disenfranchised them. One can appeal to the principle of no taxation without representation.

While we praise the right to vote, the United States has a long and persistent history of unjust disenfranchisement. While the past is of interest, what is of practical concern is the present unjust disenfranchisement of citizens.

One means of unjust disenfranchisement is to use the specter of voter fraud to “justify” measures denying citizens their right to vote. While voter fraud does exist, all the evidence shows that it is incredibly rare. To use an analogy, the obsession with voter fraud seems like a confused person who thinks Americans face a dangerous epidemic of excessive exercise and that a lack of health insurance is not a serious problem. This confused person would work hard to impose restrictions and limits on exercise while expressing no concern about insurance. While athletic overtraining does occur, it is not a general problem and the focus should be on the lack of health insurance. Likewise for voter fraud and voter suppression: voter fraud does occur, but the real problem is voter suppression.

There is also the fact that the methods used are often ineffective against the sort of fraud that does occur. These methods are more effective at disenfranchising voters, especially narrowly targeted voters. One example is the Republicans’ voter ID law in North Dakota that requires voters to have an ID that shows a street address. Many Native American voters live in rural areas and have PO boxes rather than street addresses, and they are now trying to get new IDs that meet the requirement of the law. In terms of why the law exists, it is not because there was an epidemic of fraudulent voting by people using government IDs that lack street addresses. Rather, it is because Democratic Senator Heidi Heitkamp won her election by less than 3,000 votes in 2012. 80% of majority-Native counties voted for her, so suppressing their votes could have resulted in a Republican victory. This law will also impact other citizens.

Another example of voter suppression is disenfranchising felons. While felon disenfranchisement impacts Republican and Democratic voters (Trump is a convicted felon), it is seen as impacting Democrats more, which explains why Republicans tend to favor it.

There are other ways in which citizens are unjustly disenfranchised, most of which are the result of strategies of the Republican party. It might be countered that I and the Democrats are only concerned about voter suppression because the voters being targeted are more likely to vote for Democrats. One might go beyond this and claim that I and the Democrats would be fine with the suppression of Republican voters. One might point to how Democrats engage in gerrymandering and other political trickery, perhaps even their own version of voter suppression.

My reply is that I cannot speak for other Democrats; but I can speak for myself. My view is that voter suppression is wrong regardless of who is being unjustly suppressed. As such, if the Democrats engage in voter suppression, I condemn that as strongly as I condemn voter suppression by Republicans. Or anyone, for that matter. While I would generally prefer that a Democrat win (if only from the pure self-interested fact that Democrats tend to be much friendlier to education and more pro-environment than Republicans), I would rather lose an election fairly than win through voter suppression. This is because, as noted above, voter suppression reduces the legitimacy of the state by robbing citizens unjustly of their opportunity to consent. In a nation that professes to be a democracy (yes, I know that it has a republican system at most levels) to rob citizens unjustly of their right to vote is a crime of the highest order. This is because it denies the foundational right of the citizens of a democracy and damages democracy itself. As such, voter suppression is treason, plain and simple.

 

In the previous essay in this series, I presented the argument by elimination and ended with a promise to address how to assess the competition between explanations. The overall method of elimination in this context can be presented in the following form:

 

Premise 1: There are X (some number) explanations for Y (some phenomenon).

Premise 2: E (an explanation) is the best of the X explanations.

Conclusion: E is (probably) correct.

 

Sorting out the second premise involves “scoring” each explanation and then comparing these scores to see which one does the best. As noted in the previous essay, to the degree that there are reasonable doubts that all plausible explanations have been considered, there are reasonable doubts that the correct explanation has been found. But the focus of this essay is on the competition.

While the scoring metaphor is useful, scoring explanations is not exact and admits of some subjectivity. As such, reasonable people can reasonably disagree about the relative ranking of explanations. That said, there are objective standards used in assessing explanations.

Conspiracy theorists often use this method and argue their theory best explains the facts. However, problems often arise when all the standards for assessing explanations are applied.

There are obvious defects that any explanation needs to avoid to be a good explanation. At a minimum, an explanation needs to avoid being too vague (lacking adequate precision), ambiguous (having two or more meanings when it is not clear what is intended), and circular (merely restating what is to be explained). These are minimal standards because an explanation that cannot meet them is not worth considering. For example, if an explanation is too vague, one does not even know what it is saying. There are other standards as well.

One standard is that an explanation needs to be consistent with established fact and theory. As would be imagined, conspiracy theories will almost by definition fail to meet this standard. For example, the conspiracy theory that NASA faked the moon landing goes against established fact. As another example, the view that the earth is flat goes against established fact and theory.

When faced with this standard, conspiracy theorists will often point out that some now established facts and theories were once inconsistent with the facts and theories of an earlier time. On the one hand, they are right to point out that old facts and theories have been overturned and thus this standard is not decisive. On the other hand, the fact that it has occurred in other cases does not show that a specific conspiracy theory is correct. To use an analogy, while it is true that some criminal convictions have been overturned, this does not entail that a specific person is thus innocent. Overturning established fact and theory requires showing that they have defects serious enough to warrant their overthrow—merely pointing out that it has happened does not show that it will happen in any particular case. When explanations compete, the explanation that better matches established fact and theory is better—unless compelling reasons can be given to overturn them.

A second standard is that an explanation needs to keep it simple. This involves avoiding unnecessary assumptions and needless complexities. The famous Occam’s Razor falls under this standard, with its injunction not to multiply entities beyond necessity. For example, explaining the phenomenon of night terrors in physiological terms as opposed to invoking demons or witches is simpler and hence better. As another example, those who favor evolution over creation contend that the theory of evolution explains everything that the creation explanation explains but has the advantage of not postulating God. As a third example, faking the moon landing in the 1960s would have required far more advanced technology than was available at the time as well as a global conspiracy between competing nations. The simpler explanation is that the landings took place. As the examples illustrate, explanations compete in terms of simplicity: all other things being equal, the simpler explanation is better.

Explanations can become more complicated as they deal with problems or objections. This need not be a fatal problem if the increased complication is warranted. In other cases, the increased complexity is ad hoc and serves primarily to try to save the explanation from criticism in an unprincipled way. This typically involves presenting more explanations to account for the problems that arise for the original explanation. For example, when experiments show that the earth is not flat, flat-earthers try to explain these failures by using some new factor(s) such as a previously unknown type of energy that affects gyroscopes. When challenged, they can say that this is an accepted method in science: almost all explanations are modified as complications arise. The challenge, then, is sorting out what is a legitimate modification in the face of a complication and what is an ad hoc attempt to save the explanation by bringing in new entities or complexities. This leads to what might be the most important standard, that of testability.

If an explanation gets it right, then it should yield predictions that turn out to be true. These predictions need to be testable, otherwise there is no way to know whether the explanation is correct. As such, if an explanation produces predictions that cannot be tested, then that is a problem for establishing its correctness: it might be correct, but we cannot know. If an explanation yields predictions that are tested and turn out to be false, then that is a problem for the theory—but this need not be fatal. As noted above, an explanation can be modified in the face of failure to account for that failure. This should yield a new prediction that can be tested. If the prediction turns out to be true, that is a plus for the explanation. As would be suspected, explanations compete in terms of explanatory power: all other things being equal, the explanation that yields better predictions is better. If the new prediction turns out to be false, then the explanation can be modified again to yield another prediction for testing. For example, if a new type of energy is postulated to explain how gyroscopes work, then predictions need to be made and tested for this energy. If it is claimed that the prediction is that gyroscopes would work the way they do and thus the energy has been shown to be real, then this would seem to be reasoning in a circle. As would be suspected, this is where conspiracy theories often hit the rocks: they advance explanations that yield false predictions and then modify the explanations, which then yield false predictions. They then modify the explanations, which yield more false predictions, and so on. The problem does not lie with the basic method: as noted above, modifying explanations in a principled way in the face of findings is a legitimate method. The problem is that the proponent of the explanation simply refuses to accept testability—nothing can refute their explanation because they will simply modify it to respond to every failure.
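To make the scoring metaphor concrete, here is a rough sketch in Python; the criteria, weights, and ratings are my own illustrative assumptions, and in practice such assessments are qualitative and open to reasonable dispute.

```python
# Rough sketch of "scoring" competing explanations against the standards
# discussed above. Criteria, weights, and ratings are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "fits_established_fact_and_theory": 3,
    "simplicity": 2,
    "testability": 3,
    "successful_predictions": 3,
}

# Hypothetical ratings on a 0-5 scale.
explanations = {
    "moon landings happened": {
        "fits_established_fact_and_theory": 5,
        "simplicity": 5,
        "testability": 5,
        "successful_predictions": 5,
    },
    "moon landings were faked": {
        "fits_established_fact_and_theory": 1,
        "simplicity": 1,
        "testability": 2,
        "successful_predictions": 0,
    },
}

def score(ratings):
    # Weighted sum of the ratings for each standard.
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

for name, ratings in explanations.items():
    print(f"{name}: {score(ratings)}")
print("best explanation:", max(explanations, key=lambda name: score(explanations[name])))
```

The sketch only formalizes what the prose already says: each explanation is rated against the same standards and the ratings are compared.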

It might be objected that such persistence is a good thing—that if the great thinkers of the past gave up at the first failure of their predictions, we would not be where we are today. While there is much to be said of persistence, there is a point at which the proponent is simply refusing to accept testability—nothing can ever refute their explanation. But this makes the theory meaningless as it becomes useless as an explanation. To use a silly analogy, consider the invisible unicorn.

When I was in grade school, a kid told us they had a unicorn. Being kids, we doubted this but really wanted to see it. The unicorn kid claimed that we could not see it because it was invisible. A smart kid pointed out that we should be able to hear it. But unicorn kid said their unicorn was silent. Then someone said that we should be able to touch it or see its prints on the ground. To which unicorn kid said it was too quick and was flying. And so on, for every test that would prove (or disprove) the unicorn. While we might not be able to draw an exact line at which an explanation starts becoming an invisible unicorn, once it reaches that zone the game is over.  As would be guessed, conspiracy theories often end up in the land of invisible unicorns.


As discussed in the previous essays on this subject, conspiracy theorists often use the methods of critical thinking to support and defend their theories. One method, which is a core component of scientific reasoning, is the inference to the best explanation. As the name suggests, this reasoning aims at finding the best explanation and this typically involves pitting competing explanations against each other until the best emerges.

This reasoning can be seen as a version of the argument by elimination. This argument has two basic forms. One version is the extermination method in which the goal is to show that something cannot be the case. The idea is to present all possible options, refute each of them, and then conclude that none can succeed. As an example, Kant used this method to argue that the existence of God cannot be proven (and that it could not be disproven). His reasoning was as follows:

 

Premise 1: There are only three possible ways to prove the existence of God: the teleological argument, the ontological argument and the cosmological argument.

Premise 2: None of these arguments can succeed in proving the existence of God.

Conclusion: There is no way to prove the existence of God.

 

While this is a valid deductive argument (if all the premises are true, then the conclusion must be true), showing that it is also sound (valid plus all true premises) is the real challenge. Doing so requires showing that there are only three ways to prove God’s existence and that they all must fail.

Since this method aims at total elimination, it is only useful in this context when trying to argue that no explanation is possible.

The second version is like a marathon: the competition runs until one victor emerges from the pack. In its simplest form (which has but two options), it can be presented as a disjunctive syllogism:

 

Premise 1:  P or Q

Premise 2: Not Q

Conclusion: P

 

It can also be expanded to include potentially infinite options:

 

Premise 1: P or Q or R or …

Premise 2: Not Q and Not R and Not…

Conclusion: P

 

This sort of reasoning is often used in mystery/crime stories: if there are only five possible suspects and one of them did it, then eliminating four of them will reveal the culprit. This presentation can be misleading, however. While the logic is valid, to avoid committing the fallacy of false dilemma it must be the case that the two (or more) options that are presented are the only viable options. To the degree that other options remain a possibility, the truth of the first premise remains in doubt.
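Here is a toy sketch of the marathon version using the mystery-story case (the suspects and the findings are invented for illustration). It also shows where the false dilemma lurks: elimination only identifies the culprit if the starting list really contains every viable option.

```python
# Toy sketch of argument by elimination: it works only if the initial list
# really contains all viable options (otherwise: false dilemma).

suspects = {"butler", "gardener", "cook", "heir", "chauffeur"}  # invented names

# Suppose the evidence rules these out (hypothetical findings).
eliminated = {"butler", "gardener", "cook", "chauffeur"}

remaining = suspects - eliminated
if len(remaining) == 1:
    print("Culprit by elimination:", remaining.pop())
else:
    print("Elimination is inconclusive; remaining options:", remaining)

# If the real culprit was never on the list, the "conclusion" above is wrong
# even though the elimination step itself is logically valid.
```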

Conspiracy theorists (and many others) sometimes make the mistake of falling into a false dilemma when they claim that their refutation of their main competitor(s) proves their theory. For example, a flat-earther might reason like this:

 

Premise 1: The earth is flat or the earth is a sphere.

Premise 2: The earth is not a sphere.

Conclusion: The earth is flat.

 

The obvious problem is that while the best-known earth shapes are spherical and flat, this does not entail that those are the only options. There are, after all, many other shapes in geometry and the flat option only wins by elimination when all those shapes have also been eliminated.

It is at this point that a skeptic can argue that one can never be sure that all the options have been considered, so one can never know that the right explanation has been found. After all, the skeptic can say, the right explanation might not even be in the competition. This fact is sometimes used by conspiracy theorists to cast doubts on an accepted explanation: it might be the best among the known explanations, they argue, yet still not be the true explanation. Unfortunately for the conspiracy theorist, this same doubt also applies to their conspiracy theory, so they need more than skepticism to support their own explanation.

While the skeptic might be right about the impossibility of certainty, it is still possible to hold the competition between the known explanations while keeping in mind that alternatives might have been missed. But the mere fact that there could be missed alternatives does not itself show that a good explanation has not been found. To use an analogy, think of a career. While there might be a better match for a person out there, this does not entail that their current career is not a good one (or even the best). After all, the career can be assessed by various standards and against the known alternatives. The same holds for explanations. So, while the possibility of unknown explanations should be kept in mind, their mere possibility should not be taken as refuting an explanation.

 The second challenge is that of establishing the second premise—eliminating the competition.

To the degree that the elimination of the other explanations is in doubt, the truth of the second premise remains in doubt. This leads to the matter of how explanations compete, which is the subject of the next essay in this series.

 

As noted in the previous essays in the series, people who believe in conspiracy theories can use good methods of argumentation to establish their claims. As such, it would be an error to simply dismiss such folks as automatically being irrational or illogical. In this essay I will briefly look at how the argument by example can be used to support a conspiracy theory and how to assess such reasoning to avoid accepting fallacies.

An argument by example is, obviously enough, when one tries to support a conclusion by presenting examples. It has the following form, although people generally present it informally:

 

           Premise 1: Example 1 is an example that supports claim P.

           Premise n: Example n is an example that supports claim P.

           Conclusion: Claim P is true.

 

In this case n is a variable standing for the number of the premise in question and P is a variable standing for the claim under consideration. To use a non-conspiracy example, a politician might argue that they are competent in foreign policy by giving examples of their success in this area.

There are many ways this argument can be used in conspiracy theories. One is to argue for the existence of conspiracies in general by providing examples that purport to show that conspiracies do occur. For example, a Flat Earther might try to prove that it is reasonable to believe that supposedly proven science can be a hoax or conspiracy by giving examples of such occurrences (such as the Piltdown Man hoax).

While this approach is a legitimate use of the argument, it is bad logic to move from the general claim that conspiracies have occurred to the conclusion that a specific conspiracy theory is true. To use an analogy, consider counterfeit art. It is easy to find many examples of counterfeit art, and this supports the conclusion that art has been counterfeited. But it would not follow that a specific work of art, such as the Mona Lisa, was a counterfeit.

 The second method is to argue for a specific conspiracy theory by presenting examples that support the theory. For example, someone who believes the Illuminati run the world could present examples of what they think is the Illuminati in action and conclude their theory is correct. The question is whether the examples adequately support the conclusion, and this concern leads to the standards used to assess this argument.

First, the more examples, the stronger the argument. Second, the more relevant the examples, the stronger the argument. Using the Illuminati example as an illustration, the question would be whether the examples provide evidence of the Illuminati. As should be suspected, this is where the main dispute would occur. The person arguing that the Illuminati is real would be seen by their critics as seeing things that are not there, while the proponent of the theory would think their critics blind.

Third, the examples must be specific and clearly identified. Vague and unidentified examples provide little support. Conspiracy theories are often supported by vague and unidentified examples, but sometimes they are precise and clearly identified. For example, the Illuminati theorist might point to a detailed and documented account of UN activities they see as an example of Illuminati influence.

Fourth, counterexamples must be considered. A counterexample is an example that counts against the claim. One way to look at a counterexample is that it is an example that supports the denial of the conclusion being argued for. The more counterexamples and the more relevant they are, the weaker the argument. In the case of the Illuminati example, counterexamples could be cases that would tell against Illuminati control. One common failing of conspiracy theories is that counterexamples are ignored or downplayed. As would be imagined, this can lead to a battle over whether the supposed counterexample is really a counterexample. For example, if someone claims the world is ruled by a super-competent Illuminati, counterexamples would include all the things that arise from poor decision making and ignorance. But, of course, a clever theorist can try to explain away these supposed counterexamples. For example, chaos, wars and economic disasters are not evidence against a global Illuminati, but proof it exists, because the Illuminati are brilliantly causing people to make the bad decisions that lead to such disasters.

As such, conspiracy theorists who use the argument by example are not being irrational or illogical. They are using a basic inductive tool. The problem lies in how they assess their examples and their failure to give due weight to counterexamples. That said, the battles over the relevance of examples, over whether a counterexample really is a counterexample, and over the weight given to examples can become very complicated. As such, theorists who are willing to apply the standards and consider criticism should not be simply dismissed. As with the other types of reasoning, conspiracy theorists are using good tools badly.

This essay continues the discussion of the logic of conspiracy theories. Conspiracy theorists use the same logical tools as everyone else, but they use them in different ways. In the previous essay I discussed how conspiracy theorists use the argument from authority.  I will now look at the analogical argument.

In an analogical argument you conclude that two things are alike in a certain respect because they are alike in other respects. An analogical argument usually has three premises and a conclusion. The first two premises establish the analogy by showing that the things (X and Y) being compared are similar in certain respects (properties P, Q, R, etc.).  The third premise establishes that X has an additional property, Z. The conclusion asserts that Y has property Z as well. The form looks like this:

 

           Premise 1: X has properties P, Q, and R.

           Premise 2: Y has properties P, Q, and R.

           Premise 3: X has property Z.

           Conclusion: Y has property Z.

 

While one might wonder how reasoning by analogy could lead to accepting a conspiracy theory, it works very well in this role. If property Z is a feature of a conspiracy theory, such as the government harming citizens, then all that is needed to make the argument is something else with that property. Then it is easy to draw the analogy. 

For example, consider an anti-vaxxer who thinks there is a conspiracy to convince people that the unsafe vaccines are safe. They could make an analogical argument comparing vaccines to what happened during the opioid epidemic. This epidemic was caused by pharmaceutical companies lying about the danger of opioids, doctors being bribed to prescribe them, pharmacies going along with the prescriptions, and the state allowing it all to happen. Looked at this way, concluding that what was true of opioids is also true of vaccines can seem reasonable. Yet, the conspiracy theory about vaccines is mistaken. So, how does one assess this reasoning and what mistakes do conspiracy theorists make? The answer is that there are three standards for assessing the analogical argument and conspiracy theorists don’t apply them correctly.

First, the more properties X and Y have in common, the better the argument. The more two things are alike in other ways, the more likely it is that they will be alike in a specific way. In the case of vaccines and opioids, there are many shared similarities; for example, both involve companies, doctors, pharmacies and the state.

Second, the more relevant the shared properties are to property Z, the stronger the argument. A specific property, for example P, is relevant to property Z if the presence or absence of P affects the likelihood that Z will be present.  Third, it must be determined whether X and Y have relevant dissimilarities as well as similarities. The more dissimilarities and the more relevant they are, the weaker the argument.

In the case of inferring an unreal conspiracy to sell dangerous vaccines from the very real opioid conspiracy, one must weigh the similarities and differences. While there are clearly relevant similarities, there are some crucial differences. Most importantly, vaccines have been extensively tested and are known to be safe. In contrast, all the scientific evidence supports common sense: opioids are addictive and potentially dangerous. While people want to make money off both, this does not entail that vaccines are not safe, even though opioids are dangerous. While the analogy between the opioid conspiracy and the vaccine conspiracy breaks down, there is nothing wrong with reasoning by analogy. If the standards are applied and relevant differences are considered, this method of reasoning is quite useful.
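As a rough illustration of how the three standards interact in this example, consider the following sketch; the property lists and relevance weights are my own assumptions, made up for illustration rather than drawn from any formal method.

```python
# Rough sketch of weighing an analogy: shared relevant properties strengthen it,
# relevant dissimilarities weaken it. Property lists are illustrative assumptions.

opioids = {
    "made by pharma companies": True,
    "prescribed by doctors": True,
    "dispensed by pharmacies": True,
    "regulated by the state": True,
    "evidence shows the product is harmful": True,
}
vaccines = {
    "made by pharma companies": True,
    "prescribed by doctors": True,
    "dispensed by pharmacies": True,
    "regulated by the state": True,
    "evidence shows the product is harmful": False,  # the crucial dissimilarity
}

# How relevant is each property to the target claim "the product is unsafe"?
relevance = {
    "made by pharma companies": 1,
    "prescribed by doctors": 1,
    "dispensed by pharmacies": 1,
    "regulated by the state": 1,
    "evidence shows the product is harmful": 5,  # far more relevant than the rest
}

shared = sum(relevance[p] for p in opioids if opioids[p] == vaccines[p])
different = sum(relevance[p] for p in opioids if opioids[p] != vaccines[p])
print("support for the analogy:", shared - different)  # negative in this sketch
```

On this toy accounting, the one highly relevant dissimilarity outweighs the several superficial similarities, which is exactly why the analogy breaks down.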

It is rational for conspiracy theorists to consider real cases of wrongdoing. For example, we know that governments do engage in false flag operations or lie to “justify” wars and violence. But this fact does not prove, by itself, that any specific event is a false flag or a lie. As such, the mistake made by conspiracy theorists is not arguing by analogy, but not being careful enough in applying the standards. So they commit the fallacy of false analogy.

While details of each conspiracy theory vary, they often attribute great power and influence to a small group engaging in nefarious activities. A classic example is the idea that NASA faked the moon landings. There are also numerous “false flag” conspiracy theories ranging from the idea that the Bush administration was behind 9/11 to the idea that school shootings are faked by anti-gun Democrats. There are also various medical conspiracy theories, such as those fueling the anti-vaccination movement.

There has been considerable research into why people believe in conspiracy theories. A plausible explanation is that anxiety and feeling a loss of control lead to accepting such theories. Ironically, people who embrace conspiracy theories seem less inclined to act against the conspiracy, perhaps because they feel helpless in the face of such imagined power. But there are some exceptions, such as when the conspiracy theory about Hillary Clinton running a slavery operation in a pizzeria led to a concerned citizen shooting up the place.

It is tempting to embrace a stereotype of the conspiracy theorist: someone immune to logic, oblivious to opposing evidence and perhaps suffering from mental illness. To broadly dismiss conspiracy theorists using this stereotype would be an error, though it does apply in some cases. Interestingly, some conspiracy theorists use the same tools of logic and reasoning employed by critical thinkers and I will endeavor to illustrate this in a series of essays.

Since the world is a complicated place and is beyond the understanding of any one person, we often turn to experts. For example, most of us lack the time and resources to investigate immigration, so we must rely on experts. Accepting such claims based on the (alleged) expertise of the person making the claim is to use an argument from authority. This argument has the following form:

 

Premise 1: Person A is (claimed to be) an authority on subject S.

Premise 2: Person A makes claim C about subject S.

Conclusion: Therefore, C is true.

 

This reasoning is inductive (the premises provide a degree of support for the conclusion that is less than complete) and its strength depends on the quality of the authority making the claim. If the authority is qualified to make reliable claims in the subject area, then the argument would be a good one. For example, believing my account of what an argument from authority is because of my expertise as a philosophy professor who has taught critical thinking since 1989 would be good reasoning. If the alleged authority is not qualified to make reliable claims in the subject area, then the argument would be a fallacious appeal to authority because the premises would not adequately support the conclusion. For example, if you believed what I said about quantum theory because of my alleged expertise, then you would fall victim to this fallacy because my expertise in philosophy does not confer expertise in quantum theory.

Most people who rationally believe any theory believe it based on an argument from authority; the exceptions are the experts themselves. For example, most of us believe in the theory of relativity because of Einstein, not because we have done scientific research. In the case of conspiracy theories, believers often use an argument from authority: they believe the theory because an (alleged) expert told them it is true. For example, those who accept the anti-vaccination theory often refer to the debunked paper claiming a causal link between vaccines and autism, or they believe because a celebrity tells them vaccines are dangerous. As such, for almost everyone the reasoned belief in a theory is the result of an argument from authority. So, then, what is the difference between the conspiracy theorist who believes that vaccines are dangerous because of what a celebrity says and a person who accepts relativity because of what Einstein said?

The difference, in general, is that conspiracy theorists fall for fallacious arguments from authority as opposed to accepting good arguments from authority. For example, believing that vaccines cause autism because of a debunked paper or because of what an actor says would be to fall for this fallacy. After all, unless the actor is also a medical expert on vaccines what they say about vaccines has no logical weight.

Resisting fallacious arguments from authority can be challenging, especially when the alleged authority is appealing, or the view being presented is what one wants to believe. However, there are standards by which to assess an argument from authority. To be a good argument, it must be such that:

 

  1. The person has sufficient expertise in the subject matter in question.
  2. The claim being made by the person is within their area(s) of expertise.
  3. There is an adequate degree of agreement among the other experts in the subject in question.
  4. The person in question is not significantly biased.
  5. The area of expertise is a legitimate area or discipline.
  6. The authority in question must be identified.

 

If all these conditions are met, then the conclusion is probably true. However, since the argument from authority is inductive it suffers from the classic problem of induction: even if all the premises are true, the conclusion could still turn out to be false. So, conspiracy theorists who believe “experts” are using the same argument as good reasoners; they are just using the fallacious version.
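As a minimal sketch of applying these conditions (the condition names paraphrase the list above, and the example assessments are assumptions made purely for illustration):

```python
# Minimal sketch: an argument from authority is only as good as the authority.
# The example assessments below are illustrative assumptions.

CONDITIONS = [
    "sufficient expertise in the subject",
    "claim falls within their area of expertise",
    "adequate agreement among other experts",
    "not significantly biased",
    "field is a legitimate discipline",
    "authority is identified",
]

def assess(authority, checks):
    failed = [c for c in CONDITIONS if not checks.get(c, False)]
    if failed:
        print(f"{authority}: fallacious appeal to authority; fails -> {failed}")
    else:
        print(f"{authority}: reasonable argument from authority")

assess("immunologist on vaccine safety", {c: True for c in CONDITIONS})
assess("celebrity on vaccine safety", {
    "authority is identified": True,
    "field is a legitimate discipline": True,
    # expertise, relevance, consensus, and bias conditions all fail
})
```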

The question “why lie if the truth would suffice” can be interpreted in at least three ways. One is as an inquiry about the motivation and asks for an explanation. A second is as an inquiry about weighing the advantages and disadvantages of lying. The third way is as a rhetorical question that states, under the guise of inquiry, that one should not lie if the truth would suffice.

Since a general discussion of this question would be rather abstract, I will focus on a specific example and use it as the basis for the discussion. Readers should, of course, construct their own examples using their favorite lie from those they disagree with. I will use Trump’s response to the Democrats’ Green New Deal as my example. While this is something of a flashback to his first term, Trump recently signed an executive order targeting the old Green New Deal.

In 2019 the Democrats proposed a Green New Deal aimed at addressing climate change and economic issues. As with any proposal, rational criticisms can be raised against it. In his first term, Trump claimed the Democrats intend “to permanently eliminate all Planes, Cars, Cows, Oil, Gas & the Military – even if no other country would do the same.”  While there are some Democrats who would do these things, the Democratic Party favors none of that. Looked at rationally, it seems to make no sense to lie about the Green New Deal. If it is bad enough to reject on its own defects, lies would not be needed. If one must lie to attack it, this suggests a lack of arguments against it. To use an analogy, if a prosecutor lies to convict a person, this suggests they have no case—otherwise they would rely on evidence. So, why would Trump lie if the truth would suffice to show the Green New Deal is a terrible plan?

The question of why Trump (or anyone else) lies when the truth would suffice is a matter for psychology, not philosophy. So, I will leave that question to others. This leaves me with the question about the advantages and disadvantages of lying along with the rhetorical question.

The lie about the Green New Deal is a good example of hyperbole and a straw man. Trump himself claims to use the tactic of “truthful hyperbole”. Hyperbole is a rhetorical device in which one makes use of extravagant overstatement, such as claiming that the Democrats plan to eliminate cows. The reason hyperbole is not just called lying is that it is a specific type of untruth and must have a foundation in truth. Hyperbole involves inflating or exaggerating something true rather than presenting a complete fiction. The Green New Deal is aimed at making America carbon neutral and this would impact cars, cows, planes, oil, gas and the military. The extravagant exaggeration is that the proposal would eliminate all of them permanently. This would be as if someone proposed cutting back on dessert at family dinners and was accused of wanting to eliminate meals permanently. Since hyperbole is rhetoric without logic, it has no logical force and does not prove (or disprove) anything. But it can have considerable psychological force in influencing people to believe a claim.

Hyperbole is often used in conjunction with the Straw Man fallacy. This fallacy is committed when a person’s actual position is ignored and a distorted, exaggerated or misrepresented version of that position is criticized in its place. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A has position X.

Premise 2: Person B presents position Y (a distorted version of X).

Premise 3: Person B attacks position Y.

Conclusion: Therefore, X is false or bad.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a position is not a criticism of the actual position. One might as well expect an attack on a poor drawing of a person to hurt the person.

Like hyperbole, the Straw Man fallacy is not based on a simple lie: it involves an exaggeration or distortion of something true. In the case of Trump and the Green New Deal, his “reasoning” is that the Green New Deal should be rejected because his hyperbolic straw man version of it is terrible. Since this is a fallacy, his “reasons” do not support his claim. It is, as always, important to note that Trump could be right about the Green New Deal being a bad idea, but not for the “reasons” he gives. To infer that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy).

While hyperbole has no logical force and a straw man is a fallacy, there are advantages to using them. One advantage is that they are much easier than coming up with good reasons. Criticizing the Green New Deal for what it is requires knowing what it is and considering possible defects, which takes time and effort. Tweeting out a straw man takes seconds.

The second advantage is that hyperbole and straw men often work, often much better than the truth. In the case of complex matters, people rarely do their homework and do not know that a straw man is a straw man. I have interacted with people who honestly think Democrats plan to eliminate planes and cars. Since this is a bad idea, they reject it, not realizing that it is not the Green New Deal. An obvious defense against hyperbole and straw men is to know the truth. While this can take time and effort, someone who has the time to post on Facebook or Twitter has the time to do basic fact checking. If not, their ignorance should command them to remain silent, though they have the right to express their unsupported views.

As far as working better than the truth, hyperbole and straw men appeal to the target’s fears, anger or hope. The target is thus motivated to believe in ways that the truth cannot match. People generally find rational argumentation dull and unmoving, especially about complicated issues. If Trump honestly presented real problems with the Green New Deal, complete with supporting data and graphs, he would bore most people and lose his audience. By using a straw man, he better achieves his goal. This does allow for a pragmatic argument for lying because the truth will not suffice.

If telling the truth would not suffice to convince people, then there is the pragmatic argument that if lying would do the job, then it should be used. For example, if going into an honest assessment of the Green New Deal would bore people and lying would get the job done, then Trump should lie if he wants to achieve his goal. This does, however, raise moral concerns.

If the reason the truth would not suffice is that it does not logically support the claim, then it would be immoral to lie. To use a non-political example, if you would not invest in my new fake company iScam if you knew it was a scam, getting you to invest in it by lying would be wrong. So, if the Green New Deal would not be refuted by the truth, Trump’s lies about it would be immoral.

But what about cases in which the truth would logically support a claim, yet the truth would not persuade people to accept that claim? Going back to the Green New Deal example, suppose it is terrible but explaining its defects would bore people and they would remain unpersuaded, while a straw man version of the Green New Deal would persuade many people to reject this hypothetically terrible plan. From a utilitarian standpoint, the lie could be morally justified; if the good of lying outweighed the harm, then it would be the right thing to do. To use an analogy, suppose you were trying to convince a friend to not start a dangerous diet. You have scientific data and good arguments, but you know your friend is bored by data and is largely immune to logic. So, telling them the truth would mean that they would go on the diet and harm themselves. But, if you exaggerate the harm dramatically, your friend will abandon the diet. In such a case, the straw man argument would seem to be morally justified as you are using it to protect your friend.

While this might seem to justify the general use of hyperbole and the straw man, it only justifies their use when the truth does suffice logically but does not suffice in terms of persuasive power. That is, the fallacy is only justified as a persuasive device when there are non-fallacious arguments that would properly establish the same conclusion.

Reasoning is like a chainsaw: useful when used properly, but when used badly it can create a bloody mess. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.
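A small sketch makes the problem of tiny samples vivid. Using the standard normal approximation, the margin of error on an observed proportion shrinks roughly with the square root of the sample size, so a sample of one supports almost nothing.

```python
# Sketch: approximate 95% margin of error for an observed proportion,
# showing why generalizing from a tiny sample is hasty.
import math

def margin_of_error(p_hat, n, z=1.96):
    # Normal-approximation margin of error for a proportion (rough for small n).
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p_hat = 0.5  # observed proportion (worst case for the margin)
for n in (1, 10, 100, 1000, 10000):
    print(f"n={n:>5}: +/- {margin_of_error(p_hat, n):.3f}")
# n=1 gives roughly +/- 0.98: compatible with almost any true proportion.
```

The approximation is rough for very small samples, but the trend is the message: going from one observation to ten thousand shrinks the uncertainty roughly a hundredfold.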

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is not new: a fearful generalization is a hasty generalization or a biased generalization. On the other hand, the hallmark of a fearful generalization (that it is fueled by fear) makes it worth considering, especially since addressing the fueling fear seems to be key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commits crimes needs to include the entire population, not a sample consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun-owners commit crimes needs to include the general population, not just those arrested.
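A short simulation sketch (all rates invented for illustration) shows the structural problem: if you sample only from arrest records, the rate of crime in the sample tells you little about the rate in the whole group.

```python
# Sketch of a biased sample: sampling only those arrested guarantees a high
# "crime rate" in the sample, whatever the true population rate is.
# All numbers are invented for illustration.
import random

random.seed(0)
population_size = 100_000
true_crime_rate = 0.01  # hypothetical: 1% of the group commits a crime

population = [random.random() < true_crime_rate for _ in range(population_size)]

# Arrest records mostly contain people who committed crimes (plus a few errors).
arrest_records = [x for x in population if x or random.random() < 0.001]

sample_rate = sum(arrest_records) / len(arrest_records)
population_rate = sum(population) / len(population)
print(f"rate in arrest-record sample: {sample_rate:.2%}")    # far higher than the true rate
print(f"rate in the whole population: {population_rate:.2%}")  # close to the assumed 1%
```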

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased generalization. Second, concluding a fallacy must have a false conclusion is a fallacy (the fallacy fallacy). So, a biased or hasty generalization could have a true conclusion; but it would not be supported by the generalization. Third, a true conclusion does not mean that a fallacy is not a fallacy. For example, a hasty generalization could have a true conclusion—the problem lies in the logic, not the truth of the conclusion. For example, if I see one red squirrel in a forest and infer all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem to be easy to neutralize. Just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be to deal with fear in general, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to try to at least briefly defuse the impact of fear is to try the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes or at least does not fear. This works best when the first premise remains true when the swap is made, otherwise the person can obviously reject the swap. This might have some small impact on the emotional level that will help a person work through the fear—assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy” Chad could try swapping in one of his groups and see if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful method for those who want to check their fears. Self-application involves the same basic process: swapping in your groups or groups you like in place of what you fear to see if your reasoning is good or bad.

As noted in the previous essay, perhaps conservatives have good reasons to not want to be professors or professors have good reasons not to be conservatives. In this essay, I will offer some possible DEI solutions to the dearth of conservatives in higher education.

If highly educated conservatives find academia unattractive because of the lower salaries, then there are two ways to motivate them into becoming professors. One is to argue that capable conservatives should “take one for the team” and become professors. While this would be a financial loss for conservative professors, their sacrifices would benefit the community of conservatives. The challenge is persuading those who see self-interest as a core value to act in a way seemingly contrary to their self-interest.

Another approach, which would probably be more appealing, is for conservatives to offer financial support and rewards for conservatives who become and remain professors. This is already done in some cases, but expanding the support and rewards would help increase the number of conservative professors. One challenge is to ensure that the support and rewards go to actual conservatives. They would need to police ideological purity to keep out clever liberals (or even secret Marxists) who might exploit these opportunities for their own profit. And we would certainly not want anyone profiting from pretending to believe something.

A possible downside to this approach is that these recruited professors could be accused of bias because they are being paid to be conservative professors. I will leave a solution to this problem to any conservatives who might be troubled by it.

A practical worry about supporting conservative students so that they become conservative professors is that their experiences in graduate school and as faculty might turn them away from conservatism. For example, they might start taking rhetorical attacks on experts and science personally as they become experts and scientists. As another example, they might find the hostility of Republicans to higher education a problem as they try to work in a field being attacked so vehemently by their fellows. But what about getting professors to want to be conservative? How could this be done?

One option for conservatives is to change their anti-expert and anti-science rhetoric. Rather than engaging in broad attacks on experts or science, they could confine their attacks to specific targets. Those not being directly attacked might find conservatism more appealing. The Republican party could also change its hostile attitude towards higher education to a more positive approach. They could, for example, return to providing solid funding for research and education. If professors believed that Republicans would act in their interest and in the interest of their students, they would be more inclined to support them. Conservative faculty would probably also be more likely to stay conservative.

Taking such steps would, however, be a problem for the Republican party. After all, the anti-science stance towards climate change and their broad anti-expert stance have yielded great political success. Changing these views would come at a price. Providing support for public higher education would also put Republicans at odds with their views about what should be cut while giving tax breaks for the rich. It would also go against their strategy of monetizing higher education. As such, Republicans would need to weigh the cost of winning over professors against the advantages they gain by the policies that alienate professors.

Oddly enough, some people claim that it is the Democrats and liberals who are more anti-science and anti-intellectual than the Republicans. If this were true, then the Republicans are doing a terrible job of convincing scientists and intellectuals to support them. If they could convince professors that they are the real supporters of the sciences and the Democrats are the real threat, then they should be able to win converts in the academy. The challenge is, of course, proving this claim and getting professors to accept this proof. But this seems unlikely, given that the claim that Republicans are pro-science is absurd on the face of it.