During the COVID-19 pandemic some public figures and social media users attempted to downplay the danger of COVID-19 by comparing the number of deaths caused by the virus to other causes of death. For example, one common comparison noted that 21,297 people died from COVID-19 from 1/2/2020 to 3/25/2020 but that 113,000 people died from the flu during the same period.

Downplaying is a rhetorical technique used to make something seem less important or serious. These comparisons seemed aimed at dismissing claims made by experts that the virus was a serious threat. The comparisons were also often used to persuade people that the response was excessive and unnecessary. While comparing causes of death is useful when judging how to use resources and accurately assess threats, the death comparisons must be done with a critical eye. That was true in the last pandemic, and it will be true in the next one.

Before even considering the comparison between pandemic deaths and other causes of death, it is important to determine the accuracy of the numbers. If the numbers are exaggerated, downplayed or otherwise inaccurate, then this undermines the comparison. Even if the numbers are accurate, the comparison must be critically assessed. The methods I will discuss are those I use in my Critical Inquiry class and are drawn from Moore and Parker’s Critical Thinking text. When an important comparison is made, you should ask four questions:

 

  1. Is important information missing?
  2. Is the same standard of comparison being used? Are the same reporting and recording practices being used?
  3. Are the items comparable?
  4. Is the comparison expressed as an average?

 

While question 4 does not apply, the other three do. One important piece of missing information in such comparisons is that while the other causes of death tend to be stable over time, the deaths caused by COVID-19 grew exponentially. On March 1 the WHO reported 53 deaths that day. 862 deaths were reported on March 16. On March 30 there were 3,215 new deaths. On April 8 the United States alone had 1,997 deaths in a single day, and 14,390 people were believed to have died in the United States since the start of the pandemic. The death toll kept rising. In contrast, while seasonal flu deaths fluctuate, they do not grow in this exponential manner. As such, the comparison is flawed. We can expect similar comparisons to be made in the next pandemic and should be on guard against erroneous comparisons of this sort.
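To see how fast that growth was, here is a minimal Python sketch that estimates the implied doubling time from the daily death counts cited above (the day offsets, counting March 1 as day 0, are my assumption; the figures are the ones quoted in this essay, not official data):

```python
import math

# Daily reported global COVID-19 deaths cited above: March 1, March 16, March 30.
reports = [(0, 53), (15, 862), (29, 3215)]  # (days since March 1, deaths that day)

# Under exponential growth, deaths(t2) = deaths(t1) * 2 ** ((t2 - t1) / doubling_time),
# so the doubling time between two reports can be solved for directly.
for (t1, d1), (t2, d2) in zip(reports, reports[1:]):
    doubling = (t2 - t1) * math.log(2) / math.log(d2 / d1)
    print(f"Days {t1}-{t2}: deaths doubled roughly every {doubling:.1f} days")

# A stable cause of death, such as seasonal flu, has no doubling time at all,
# which is why a same-period comparison with an exponential process misleads.
```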

Another flaw in the comparison is that the flu and many other causes of death are well established. The COVID-19 virus was still spreading when the comparison was made. It would be like comparing a fire that just started with a fire that has been steadily burning and confidently claiming that the new fire would not be as bad as the old fire.

The numbers for the other causes of death are also most likely estimates based on past yearly death tolls. What the numbers reflect is the number of people who probably died of those causes during a few months, based on data from previous years.

While the death toll from COVID-19 was high, COVID-19 deaths were also likely to be underreported. Since testing was limited for quite some time, some people who died from the virus did not have their cause of death properly reported. Even in the early days of the death comparison, the deaths caused by COVID-19 were most likely higher than reported. This leads to two problems with the comparison. One is that if the other causes of death are accurately reported and COVID-19 deaths were not, then the comparison is flawed. The second is that COVID-19 deaths might have been recorded as being caused by something else (such as the flu/pneumonia) and this would also make the comparison less accurate by “increasing” the number of deaths by other causes. 

While the comparison to other causes of death might have seemed persuasive early in the pandemic, the exponential increase in deaths is likely to have robbed the comparison of its persuasive power. In mid-April, COVID-19 was killing more Americans per week than automobile accidents, cancer, heart disease and the flu/pneumonia did in 2018. Somewhat ironically, a comparison of COVID-19 deaths ended up showing the reverse of what the comparison was originally intended to show.

We can expect similar death comparisons in the early days of the next pandemic. While these comparisons can have merit, they are often used as rhetorical devices to downplay the seriousness of a pandemic. As such, we should be on guard against this tactic during the next pandemic.

In the last essay I looked at the inductive generalization and its usefulness in reasoning about pandemics and ended by mentioning that there are various fallacies that can occur when generalizing. The most common are hasty generalization, appeal to anecdotal evidence, and biased generalization. I will look at each of them in terms of pandemics.

A hasty generalization occurs when a person draws a conclusion about a population based on a sample that is not large enough to adequately support the conclusion. It has the following form:

 

Premise 1: Sample S (which is too small) is taken from population P.

Premise 2: In Sample S X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in Population P.

 

In the previous essay I presented a rough guide to sample size, margin of error and confidence level. In that context, this fallacy occurs when the sample is not large enough to warrant confidence in the conclusion. In the case of a pandemic, one important generalization involves sorting out the lethality of the pathogen. The math for this is easy, but the challenge is getting the right information.

During the COVID-19 pandemic, there were large samples of infected people.  As such, inferences from these large samples to the lethality of the virus would not be a hasty generalization. But avoiding this fallacy does not mean that the generalization is a good one as there are other things that can go wrong.

There were also inferences drawn from relatively small samples, such as generalizations from treatments undergoing testing. For example, the initial samples of people treated with hydroxychloroquine for COVID-19 were small, so generalizing from those samples risked committing a hasty generalization. This isn’t to say that small samples are always useless but due care must be taken when generalizing from them.

As a practical guide, when you hear claims about a pandemic (or anything) based on generalizations, you need to consider whether the conclusion is supported by an adequately sized sample. Having a sample that is too small does not entail that the conclusion is false (that inference would itself be fallacious), but it does mean the conclusion is not adequately supported. While a small sample provides weak logical support, an anecdote can have considerable psychological force, which leads to a fallacy similar to the hasty generalization.

An appeal to anecdotal evidence is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or very few cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. There are two forms for this fallacy:

 

Form One

Premise 1:  Anecdote A is told about a member M (or small number of members) of Population P.

Premise 2: Anecdote A says that M is (or is not) C.

Conclusion: Therefore, C is (or is not) true of Population P.

 

Form Two

Premise 1:  Good statistical evidence exists for general claim C.

Premise 2: Anecdote A is an exception to or goes against general claim C.

Conclusion: C is false.

 

This fallacy is like hasty generalization in that an inference is drawn from a sample too small to adequately support the conclusion. One difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild it can be difficult to distinguish a hasty generalization from anecdotal evidence. Fortunately, what is most important is recognizing that a fallacy is occurring. A much clearer difference is that the paradigm form of anecdotal evidence involves rejecting statistical data in favor of the anecdote.

People often fall victim to this fallacy because anecdotes usually have much more psychological force than statistical data. Wanting an anecdote to be true also fuels this fallacy. During the COVID-19 pandemic, there were many anecdotes about alleged means of preventing or curing the disease, and the same will happen during the next pandemic. Even if the anecdotes are not lies, they do not provide an adequate basis for drawing conclusions about the general population. This is because the sample is not large enough to warrant the conclusion. As a concrete example, there were some early positive anecdotes about hydroxychloroquine; wishful thinking and Trump’s claims then caused some people to accept the anecdotes as adequate evidence, but this was bad reasoning.

Appeals to anecdotal evidence often occur in the context of causal reasoning, such as the case of hydroxychloroquine, and this adds additional complexities.

As with any fallacy, it does not follow that the conclusion of an appeal to anecdotal evidence is false. The error is accepting the conclusion based on inadequate evidence, not in making a false claim. It is also worth noting that anecdotal evidence can be useful for possible additional investigation but is not enough to prove a general claim.

As noted earlier, there were large samples of infected people that allowed generalizations to be drawn without committing the fallacy of hasty generalization. But even large samples can be problematic. This is because samples need to be both large enough and representative enough. This takes us to the fallacy of biased generalization.

This fallacy is committed when a person draws a conclusion about a population based on a sample that is biased to a degree or in a way that prevents it from adequately supporting the conclusion.

 

Premise 1: Sample S (which is too biased) is taken from population P.

Premise 2: In Sample S X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in Population P.

 

The problem with a biased sample is that it does not represent the population adequately and so does not adequately support the conclusion. This is because a biased sample can differ from the population in relevant ways that affect the percentage of A’s that are B’s.

In the case of COVID-19 there was a serious problem with biased samples, although the situation improved over time. I will focus on an inductive generalization about the lethality of the virus.

The math for calculating lethality is easy, but the main challenge is sorting out how many people are infected. From the start of the pandemic, the United States had a self-inflicted shortage of test kits. Because of this, many of the available tests were being used on people showing symptoms or who had been exposed to those known to be infected. This sample was large but biased: it contained a disproportionate number of people who were already showing symptoms and missed many people who were infected but asymptomatic.

If we face a similar situation in the next pandemic, the sample will probably show a lethality rate higher than the real lethality rate. To use a simple fictional example: imagine a population of 1,000 people, 200 of whom are infected with a virus. Of the 200 people infected, 20 show symptoms and only they are tested. Of the 20 people tested, 2 die. This sample would show a mortality rate of 10%. But the actual mortality rate would be 2 in 200, which is 1%. This would still be bad, but not as bad as the biased sample would indicate. This shows one of the many reasons why broad testing is important: it is critical to establishing an accurate lethality rate. An accurate lethality rate is essential to making rational decisions about our response to any pandemic.
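The arithmetic of this fictional example is simple enough to check in a few lines of Python (all numbers below are the made-up figures from the example, not real data):

```python
# Made-up numbers from the fictional example above.
population = 1000
infected = 200   # true number of infections in the population
tested = 20      # only the 20 symptomatic people are tested (a biased sample)
deaths = 2       # both deaths occur among those tested

biased_rate = deaths / tested    # lethality as the biased sample presents it
true_rate = deaths / infected    # actual lethality among all the infected

print(f"Lethality in the biased sample: {biased_rate:.0%}")  # 10%
print(f"Actual lethality:               {true_rate:.0%}")    # 1%
```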

As a final point, it is also important to remember that lethality varies between groups in the overall population—we know this based on the death data. But to determine the lethality for each group, the samples used for the calculation must be representative of that group. While overall lethality is important, rational decision making also requires knowing the lethality for various groups. For example, pathogens tend to be more lethal for seniors, who would need more protection in the next pandemic.

As always, stay safe and I will see you in the future.

During a pandemic, like that of COVID-19, it might be wondered how the number of cases and the lethality of a disease are determined. Some might be concerned or skeptical because the numbers often change over time and usually vary across countries, age groups, ethnicities and economic classes. This essay provides a basic overview of a core method of making inferences from samples to entire populations, what philosophers call the inductive generalization.

An inductive generalization is an inductive argument. In philosophy, an argument consists of premises and one conclusion. The premises are the reasons or evidence being offered to support the conclusion, which is the claim being argued for.  Philosophers often divide arguments into inductive and deductive. In philosophy a deductive argument is such that the premises provide (or are supposed to provide) complete support for the conclusion. An inductive argument is an argument such that the premises provide (or are supposed to provide) some degree of support (but less than complete support) for the conclusion.  If the premises of an inductive argument support the conclusion adequately (or better) it is a strong argument. It is such that if the premises are true, the conclusion is likely to be true. If a strong inductive argument has all true premises, it is sometimes referred to as being cogent.

One feature of inductive logic is that a strong inductive argument can have a false conclusion even when all the premises are true. This is because of what is known as the inductive leap: the conclusion always goes beyond the premises. This can also be put in terms of drawing a conclusion from what has been observed to what has not been observed. The now dead David Hume argued back in the 1700s that this meant we could never be certain about inductive reasoning, and later philosophers called this the problem of induction. In practical terms, this means that even if we use perfect inductive reasoning on premises that are certain, our conclusion can still be false. But induction is often the only option, and we use it because we must. So, when the initial numbers about COVID-19 turned out to be wrong, this was exactly what we should have expected. The same must be expected in the next pandemic.

What, then, is an inductive generalization? Roughly put, it is an argument in which a conclusion about an entire population is based on evidence from a sample of observed members of that population. The formal version looks like this:

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The observed Xs would be the sample and all the Xs would be the target population. As an example, if someone wanted to know the mortality rate during a pandemic for males over sixty, the target population would be all males over sixty.

While the argument is simple, sorting out when a generalization is strong can be challenging. Without getting into the statistics and methods used for rigorous generalizations, I will go over the basic method of assessment—so you can make some sense of what experts say about such matters during the next pandemic.

There will be various factors whose presence or absence in the sample can affect the presence or absence of the property the argument is concerned with, so a representative sample will have those factors in proportion to the target population. For example, if we wanted to determine the infection rate for all people, then we would need to try to ensure that our sample included all factors affecting the infection rate and our sample would need to mirror our target population in terms of age, ethnicity, base health, and all other relevant features. Sorting out what factors are relevant can be challenging, especially as a pandemic is unfolding. To the degree that the sample mirrors the target population properly, it would be representative.

A sample is biased relative to a factor to the extent that the factor is not present in the sample in the same proportion as in the population. This sort of sample bias was a problem when trying to generalize about COVID-19. One example of this was trying to draw a conclusion about the lethality of COVID-19. While the math to do this is easy (a simple calculation of the percentage of the infected who die from it), getting the numbers right is hard because we needed to know how many people were infected and how many died from it.

Experts tried to determine the number of people infected by testing and modeling, both of which also involve inductive reasoning. In the United States, most of the testing was of people showing symptoms, and this created a biased sample—to get an unbiased sample, even those without symptoms would need to be tested. There was also the practical matter of the accuracy of the tests and the determination of the cause of death. This will be true in the next pandemic as well.

To use a concrete but made-up example: if 5% of those who tested positive for COVID-19 ended up dying, the generalization from that sample to the whole population would only be as strong as the representativeness of the sample. If only sick people were tested, the sample would not be representative and the conclusion about the lethality of the virus would (probably) be wrong.

There is also the challenge of sorting out the effect of the virus on different populations. While there is an overall infection rate and lethality rate for the whole population, there are different infection rates and lethality rates for different groups within the human population. As an example, the elderly were more likely to die of COVID-19 than younger people.

In addition to representativeness, sample size is important; the larger, the better. This brings us to two more concepts: margin of error and confidence level. A margin of error is a range of percentage points within which the conclusion of an inductive generalization falls; this number is usually presented as plus or minus. The margin of error depends on the sample size and the confidence level of the argument. The confidence level is typically presented as a number and represents the percentage of arguments like the one in question that have a true conclusion.

When generalizing about large (10,000+) populations, a sample will need to have 1,000+ individuals to be representative (assuming the sample is taken properly). This table, from Moore & Parker’s Critical Thinking text, shows the connection between sample size and error margin (at a confidence level of 95%):

 

Sample Size   Error Margin (%)   Corresponding Range (percentage points)
10            +/- 30             60
25            +/- 22             44
50            +/- 14             28
100           +/- 10             20
250           +/- 6              12
500           +/- 4              8
1,000         +/- 3              6
1,500         +/- 2              4
 

The practical takeaway is that sample size is important: a small sample will have a large margin of error that can make it useless. For example, suppose that a group of 50 COVID-19 patients received hydroxychloroquine tablets and 10 of them recovered fully. Laying aside all causal reasoning (which would be a huge mistake), the best we could say is that 20% (+/- 14 percentage points) of patients treated with hydroxychloroquine will recover fully. This is just a simple generalization; a controlled experiment or study would be needed to properly assess a causal claim.
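For readers who want to check such figures themselves, the table’s error margins can be roughly reproduced with the standard worst-case formula for a sample proportion at a 95% confidence level. This formula is a common textbook approximation and is my addition, not part of Moore & Parker’s table; a minimal Python sketch:

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 0.5) margin of error for a sample proportion at 95% confidence."""
    return z * math.sqrt(0.25 / n)

# Approximately reproduces the table's rows (the table's figures are rounded).
for n in (10, 25, 50, 100, 250, 500, 1000, 1500):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.0%}")

# The hydroxychloroquine example above: 10 of 50 patients recovered fully.
n, recovered = 50, 10
print(f"Estimate: {recovered / n:.0%} +/- {margin_of_error(n):.0%}")  # 20% +/- 14%
```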

There are various fallacies (mistakes in reasoning) that can occur with a generalization. I will discuss those in the next essay. Stay safe and I will see you in the future.

During a White House press briefing, President Trump expressed interest in injecting disinfectants as a treatment for COVID-19. In response, medical experts and the manufacturers of Lysol warned the public against attempting this. Trump’s defenders adopted two main strategies. The first was to interpret Trump’s statements in a favorable way; the second was to assert they were “fact checking” the claim that Trump told people to inject disinfectant. Trump eventually claimed that he was being sarcastic to see what the reporters would do. From the standpoint of critical thinking, there is a lot going on here with rhetorical devices and fallacies. I will discuss how critical thinking can sort through this sort of situation, because the next pandemic is likely to see a repeat performance.

When interpreting or reconstructing claims and arguments, philosophers are supposed to apply the principle of charity. Following this principle requires interpreting claims in the best possible light and reconstructing arguments to make them as strong as possible. There are three reasons to follow the principle. The first is that doing so is ethical. The second is that doing so avoids committing the straw person fallacy, which I will talk more about in a bit. The third is that if I am going to criticize a person’s claims or arguments, criticism of the best and strongest versions also takes care of the lesser versions.

The principle of charity must be tempered by the principle of plausibility: claims must be interpreted, and arguments reconstructed in a way that matches what is known about the source and the context. For example, reading quantum physics into the works of our good dead friend Plato would violate this principle.

Getting back to injecting disinfectants, it is important to accurately present Trump’s statements in context and to avoid making a straw person. The Straw Person fallacy is committed when one ignores a person’s actual claim or argument and substitutes a distorted, exaggerated or misrepresented version of it. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A makes claim or argument X.

Premise 2: Person B presents Y (which is a distorted version of X).

Premise 3: Person B attacks Y.

Conclusion: Therefore, X is false/incorrect/flawed.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a claim or argument is not a criticism of the original. This fallacy often uses hyperbole, a rhetorical device in which one makes an exaggerated claim. A Straw Person can be effective because people often do not know the real claim or argument being attacked. The fallacy is especially effective when the straw person matches the audience’s biases or stereotypes as they will feel that the distorted version is the real version.

While this fallacy is usually aimed at an audience, it can be self-inflicted: a person can unwittingly make a Straw Person out of a claim or argument. This can be done entirely in error (perhaps due to ignorance) or due to the influence of prejudices and biases.

The defense against a Straw Person, self-inflicted or not, is to get a person’s claim or argument right and to apply the principle of charity and the principle of plausibility.

Some of Trump’s defenders claimed Trump was the victim of a straw person attack; they set off on a journey of “fact checking” and asserted that Trump did not tell people to drink bleach. Somewhat ironically, they might have engaged in Straw Person attacks when attempting to defend Trump from alleged Straw Person attacks. Warning people to not drink bleach or inject disinfectants is not the same thing as claiming that Trump told people to do these things.

The truth is that Trump did not tell people to drink bleach. His exact words, from the official White House transcript, are: “And then I see the disinfectant, where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning. Because you see it gets in the lungs and it does a tremendous number on the lungs. So it would be interesting to check that. So, that, you’re going to have to use medical doctors with. But it sounds — it sounds interesting to me.”

Trump did not tell people to drink bleach or inject disinfectant. As such, the “Clorox Chewables” and similar memes were a form of visual Straw Person attack against Trump. But they can also be seen as the rhetorical device of mockery. To avoid committing the Straw Person fallacy, we need to use Trump’s actual statements; attacking him for advocating drinking bleach would be an error.

While Trump did not directly tell people to inject disinfectants, he can be seen as engaging in a form of innuendo, a rhetorical technique in which something is suggested or implied without directly saying it. Anyone who understands the basics of how language and influence work would get that Trump’s remarks would cause some people to believe that this was something worth considering. There is evidence for this in the form of calls to New York City poison control centers and similar calls in Maryland and other states. A feature of innuendo is that it allows a person to deny they said what they implied or suggested; after all, they did not directly say it. Holding someone accountable requires having adequate evidence that they intended what their words implied or suggested. Doing this can be challenging since it requires insight into their character and motives. There is also a moral issue here about the responsibility of influential people to take care in what they say, something that goes beyond critical thinking and into ethics. But a president needs to be careful in what they say.

Trump used words indicating he thought medical doctors should test injecting disinfectants into people’s lungs as a possible treatment for COVID-19. Given Trump’s well-established record of dangerous ignorance, interpreting his words as meaning what they state meets the conditions of the principle of charity and the principle of plausibility: these are his exact words, in context and with full consideration of the source.

Some of Trump’s defenders also tried to use what could be called the Steel Person fallacy. The Steel Person fallacy involves ignoring a person’s claim or argument and substituting a better one in its place.  This sort of “reasoning” has the following pattern:

 

Premise 1: Person A makes claim or argument X.

Premise 2: Person B presents Y (a better version of X).

Premise 3: Person B defends Y.

Conclusion: Therefore, X is true/correct/good.

 

This sort of “reasoning” is fallacious because presenting and defending a better version of a claim or argument does not show the original is good. A Steel Person can be effective because people often do not know the real claim or argument being defended. The fallacy is especially effective when the Steel Person matches the audience’s positive biases or stereotypes, as they will feel the improved version is the real version and accept it. The difference between applying the principle of charity and committing a Steel Person fallacy lies in the intention: the principle of charity is aimed at being fair, while the Steel Person fallacy is aimed at making a person’s claim or argument appear much better than it is and so is an attempt at deceit.

While this fallacy is generally aimed at an audience, it can also be self-inflicted: a person can unwittingly make a Steel Person out of a claim or argument. This can be done in error (perhaps due to ignorance) or due to the influence of positive biases. The defense against a Steel Person, self-inflicted or not, is to take care to get a person’s claim or argument right and to apply the principle of plausibility.

In the case of Trump, he was clearly expressing interest in injecting disinfectants into the human body. Some of his defenders created a Steel Person version of his claims, contending that what he was really doing was presenting new information about how light, heat and disinfectant can kill the virus. To conclude that Trump was right because of this better version would be an error in logic. While light, heat and disinfectant will destroy the virus, Trump’s claim was about injecting disinfectant into the human body—which, while not telling people to drink bleach, is a dangerously wrong claim.

Trump himself undermined these defenders by saying “I was asking a question sarcastically to reporters like you just to see what would happen.” If this is true, then his defenders’ claims that he was not talking about injecting disinfectant would be false. He cannot have been both saying something dangerously crazy to troll the press and making a true and rational claim about cleaning surfaces. Trump seems to have used a rhetorical device popular with the right (although anyone can use it). This method could be called the “just kidding” technique and can be put in the meme terms “for the lulz.”

One version of the “just kidding” tactic occurs when a person says something that is racist, bigoted, sexist, or otherwise awful and does not get the positive response they expected. The person’s “defense” is that they did not really mean what they said, they were “just kidding.” As a rhetorical technique, it is an evasive maneuver designed to avoid accountability. The defense against this tactic is to assess whether the person was plausibly kidding. Did they intend to be funny without malicious intent and fail or are they trying to weasel out of accountability? This can be difficult to sort out, since you need to have some insight into the person’s motives, character and so on.

Another version of the “just kidding” tactic is similar to the “I meant to do that” tactic. When someone does something embarrassing or stupid, they will often try to reduce humiliation by claiming they intended to do it. In Trump’s injection case, he claimed he intended to say what he said and that he was being sarcastic. He meant to do it but was just kidding.

If Trump was just kidding, he thought it was a good idea to troll the media during a pandemic—which is a matter for ethics rather than critical thinking. If he was not kidding, then he was attempting to avoid accountability for his claim, which is the point of this tactic. The defense against this tactic is to assess whether the person was plausibly kidding; that is, did they really mean to do it, and was what they meant to do just kidding? This requires having some insight into the person’s character and motives as well as considering the context. In the case of Trump, the video shows him addressing his remarks to the experts rather than the press, and he seems completely serious. There is also the fact that a president engaged in a briefing on a pandemic should be serious rather than sarcastic. As such, he does not seem to be kidding. But Trump put himself in a dilemma of awfulness: he was either seriously suggesting a dangerous idea or trying to troll the press during a briefing on a pandemic that was killing thousands of Americans. Either way, that would be terrible. While this essay focuses on Trump and the COVID-19 pandemic, we should expect something similar, if not worse, should another pandemic arise during Trump’s reign.

During the COVID-19 pandemic, some politicians argued that America should be reopened because the dire predictions about COVID turned out to be wrong. On the face of it, this appears to be good reasoning that we could use in the next pandemic: if things are not as bad as predicted, we can start reopening sooner than predicted. To use an analogy, if a fire was predicted to destroy your house, but it only burned your garage, then it would make sense to move back in and rebuild the garage. While this reasoning is appealing, it also can be a trap. Here is how the trap works.

Some politicians and pundits pointed out that the dire predictions about COVID did not come true. For example, the governor of Florida said that since the hospitals were not overwhelmed as predicted, it was a good idea for them to return to profitable elective surgery. He also, like some other Republican governors, wanted to reopen very quickly. This reasoning seemed sensible: the pandemic was not as bad as predicted, so we could quickly reopen. There were also those who sneered at the dire predictions and were upset at what they saw as excessive precautions. This can also seem sensible: the experts predicted a terrible outcome for COVID-19, but they were wrong. We overreacted and should have rolled back the precautions when the predictions did not come true. But would this be a wise strategy for the next pandemic?

While it is reasonable to consider whether the precautions were excessive, there is a tempting fallacy that needs to be avoided. This is the prediction fallacy. It occurs when someone uncritically rejects a prediction, and the response made to that prediction, because the predicted outcome did not occur. The error in the logic occurs because the person fails to consider what should be obvious: if a prediction is responded to effectively, then the prediction is going to be “wrong.” The form of the fallacy is this:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen, so prediction P is wrong.

Conclusion: We should not have taken Response R (or no longer need to take Response R).

 

To use a concrete example:

 

Premise 1: Experts predicted that the hospitals in Florida would be overwhelmed if we did not respond with social distancing and other precautions.

Premise 2: People responded to this prediction with social distancing and other precautions.

Premise 3: The hospitals in Florida were not overwhelmed, so the prediction was wrong.

Conclusion: The response was excessive, and we no longer need these precautions.

 

 

While it is (obviously) true that a prediction that turns out to be wrong is wrong, the error is uncritically concluding that this proves that the response based on the prediction need not have been taken (or that we no longer need to keep responding in this way). The prediction assumes we do not respond (or do not respond a certain way), and the response is made to address the prediction. If the response is effective, then the predicted outcome will not occur, and that is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed would be a mistake in reasoning. You could be right, but you need to do more than point to the failed prediction.

As a silly, but effective analogy, imagine we are driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we did not turn, then we would have died. And if we turn back too soon, then we die.

The same applied to COVID-19: by responding effectively to dire predictions, we changed the outcome and the predictions turned out to be wrong. But to infer that the responses were excessive or that we should stop now simply because the results were not as dire as predicted would be an error.

This is not to deny what is obviously true: it is possible to overreact to a pandemic. But making decisions based on the prediction fallacy is a bad idea. There is also another version of this fallacy.

A variation of this fallacy involves inferring the prediction was a bad one because it turned out to be wrong:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: The prediction was wrong about X occurring if we did not do R.

 

While the prediction turned out to be wrong in the sense that the predicted outcome did not occur, this does not disprove the claim that X would have occurred without the response. Going back to the car analogy, the prediction that we would go off the cliff and die if we did not turn is not disproven if we turn and then do not die. In fact, that is the result we want.

Getting back to COVID-19, the predictions made about what could occur if we did nothing are not disproven by the fact that they did not come true when we did something. So, to infer that these predictions must have been wrong about what would occur if we did nothing would be an error. We do, of course, rationally assess predictions based on outcomes, but this assessment should not ignore the effect of the response. Sorting out such counterfactual predictions is hard. In complex cases we can probably never prove what would have happened, but good methods can guide us here, which is why we need to go with science and math rather than hunches and feelings.

This fallacy draws considerable force from psychological factors, especially in the case of COVID-19. The response that was taken to the virus came with a high cost and we wanted things to get back to normal—so, ironically, the success of the response made us feel that we could stop quickly or that we did not need such a response.  As always, bad reasoning can lead to bad consequences and in a pandemic, it can hurt and even kill many people.

Stay safe and I will see you in the future.

Years ago, my coverage of medical testing in my critical thinking class was purely theoretical for most of my students. But COVID-19 changed that. One common type of medical test determines whether a person is infected with a disease, such as COVID-19. Another is to determine whether a person has had the infection. While tests are a critical source of information, we need to be aware of the limitations of testing. Since I am not a medical expert, I will not comment on the accuracy of specific methods of testing. Instead, I will look at applying critical thinking to testing.

An ideal medical test would always be accurate and never yield false results. Real medical tests have, for various reasons, less than 100% accuracy, and a good test will usually fall into the 90-99% range. This means that a test can always be wrong. So how do you rationally assess test results?

Intuitively, the chance a person was infected (or not) would seem to be the same as the accuracy of the test. For example, if a COVID-19 test has an accuracy of 90% and you test negative, then it seems that there is a 90% chance you do not have COVID. Or, if you test positive, then you might think there is a 90% chance you have COVID. While this seems sensible, it is not correct and arises from a common mistake about conditional probabilities. I will keep the math to a minimum because math, as Barbie said, is hard.

So, suppose that I test positive for COVID and the test is 90% accurate. If I think there is a 90% chance I have COVID, I am probably wrong, and here is why. The mistake is failing to recognize that the probability of X given Y is distinct from the probability of Y given X. In the case of the test, testing positive is the effect of COVID and obviously not the cause. As such, a 90% accurate test for COVID does not mean that 90% of those who test positive (effect) will have COVID (cause). It means that 90% of those who have COVID (cause) will test positive (effect). So, if I have COVID, then there is a 90% chance the test will detect it. The wrong way of looking at it would be to think that if I test positive, then there is a 90% chance I have COVID. So, what is the true chance I have COVID if I test positive on a test that is 90% accurate? The answer is that I do not know. But I do know how to do the math to sort it out.

To know my chance of having COVID I would also need to know the percentage of false positives that occur with the test and, very importantly, the base rate of the infection. The base rate of the infection is the frequency of the cause. Using my made-up test and some made-up numbers, here is how the math would go.

Suppose that the 90% accurate test has a 10% false positive rate and 1% of the population in question is infected. For every 1,000 people in the population:

 

  • 10 people will have COVID
  • 9 of the people with COVID will test positive.
  • 990 people will not have COVID.
  • 99 of the people without COVID will test positive.

 

While there will be 108 positive test results, only 9 of them will be people who have COVID. So, a person who tests positive has an 8% chance of having COVID, not 90%. In conditional terms and using these made-up numbers: if I have COVID, then there is a 90% chance I will test positive. But if I test positive, then there is only an 8% chance I have COVID.
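This is just Bayes’ theorem at work, and the arithmetic can be checked with a short Python sketch (the 90% accuracy, 10% false positive rate, and 1% base rate are the made-up numbers from the example above):

```python
def prob_covid_given_positive(sensitivity, false_positive_rate, base_rate):
    """P(have COVID | positive test), computed via Bayes' theorem."""
    true_pos = sensitivity * base_rate                  # infected and test positive
    false_pos = false_positive_rate * (1 - base_rate)   # uninfected but test positive
    return true_pos / (true_pos + false_pos)

# Made-up numbers from the example: 90% accurate, 10% false positives, 1% infected.
print(f"{prob_covid_given_positive(0.90, 0.10, 0.01):.0%}")  # ~8%
```

Rerunning the function with a base rate of 0.10 yields the 50% figure derived in the next example.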

At this point it might be tempting to think that testing is useless, but that would be a mistake. Testing is useful in gathering data about infection rates. Positive results are also more likely to be correct in populations with higher rates of COVID infection, but this is a function of statistics rather than of the test itself. To illustrate this, let us run the example again with one change: increasing the rate of infection to 10%. For every 1,000 people in the population:

 

  • 100 people will have COVID
  • 90 of the people with COVID will test positive.
  • 900 people will not have COVID.
  • 90 of the people without COVID will test positive.

 

There will be 180 positive test results and 50% of them will have COVID. So, if I test positive for COVID, then there is a 50% chance I have COVID. Again, this is a matter of statistics, as the test accuracy, by hypothesis, has not changed. Because of this, testing groups that we know have higher infection rates will give better statistical results that can be useful—but much of the use will be in terms of additional statistical analysis. NPR provided an excellent discussion of antibody testing for COVID and even included a calculator that will do the math for you.

In terms of putting your trust in a test, such as an antibody test to determine whether you had COVID or not, it is wise to keep the math in mind. Even if surviving COVID confers some immunity, a positive test might mean only an 8% chance that you had COVID. And until we know the rate of infection, we would essentially be guessing when doing the math. During the pandemic the rational approach might seem odd: you should have acted as if you had COVID (to avoid infecting others) while also assuming that you had not had it (and thus lacked immunity). The same will apply to the next pandemic.

While it would be irrational to reject medical claims of health care experts in favor of those made by a ruler, this happened in the last pandemic and will happen again. Why people do this is mainly a matter of psychology, but the likely errors in reasoning are a matter of philosophy.

While those who accept a ruler as a medical authority are falling victim to a fallacious appeal to authority, it is worth considering the specific version of the fallacy being committed. I am calling this fallacy the argument from authoritarian. The error occurs when a person believes a claim simply because it is made by the authoritarian leader they accept. It has this form:

 

Premise 1: Authoritarian leader L makes claim C.

Conclusion: Claim C is true.

 

The fact that an authoritarian leader makes a claim does not provide evidence that supports the claim. It also does not disprove the claim. Accepting or rejecting a claim because it comes from an authoritarian would both be errors. The authoritarian could be right but, as with any logical fallacy, the error lies in the reasoning.

The use of my usual silly math example illustrates why this is bad logic:

 

Premise 1: The dear leader claims that 2+2=7.

Conclusion: The dear leader is right.

 

At this point, you might be thinking about the consequences someone might suffer from doubting what an authoritarian leader claims. They could be fired, exiled, tortured, or even killed. While that is true, there is a critical distinction between having a rational reason to believe a claim is true and having a pragmatic reason to accept a claim or at least pretend to do so. Fear of retaliation by an authoritarian can provide a practical reason to go along with them but this does not provide evidence. No matter how brutally an authoritarian enforces their view that 2+2=7 and no matter how many people echo his words, 2+2=4. While fear can provide people with motivation to accept an argument from authoritarian, there are other psychological reasons that can drive such bad logic. This takes us to a simplified look at the authoritarian leader type and the authoritarian follower type. The same person can have qualities of both, and everyone has at least some of these traits. The degree to which a person has them is what matters.

An authoritarian leader type is characterized by the belief that they have a special status as a leader. At the extreme, the authoritarian leader believes they are the voice of their followers, and they alone can lead. Or, as Trump put it, “I alone can fix it.” Underlying this is the belief they possess exceptional skills, knowledge and abilities. As Socrates found out, people think they know far more than they do, but the authoritarian leader takes this to extremes and overestimates their abilities. This, as would be expected, leads them to make false claims and mistakes.

Since the authoritarian leader is extremely reluctant to admit their errors and limits, they must be dishonest to the degree they are not delusional and delusional to the degree they are not dishonest.  Because of the need to maintain the lies and delusions about their greatness and success, the authoritarian leader is intolerant of criticism, dissent, and competition. To the extent they can do so, they use coercion against those who would disagree and resort to insults when they cannot intimidate. Because the facts, logic and science would tell against them, they tend to oppose all of these and form many of their beliefs on feelings, biases, and bad logic. They encourage their followers to do the same—in fact, they would not have true followers if no one followed their lead here. 

While an authoritarian leader might have some degree of competence, their excessive overestimation of their abilities and their fear of competent competition (even among those who serve them) will result in regular and often disastrous failures. Maintaining their delusions and lies in the face of failure requires explaining it away. One approach is denial, which is to ignore reality. A second approach is to blame others; the leader is not at fault, because someone else is responsible. One method of doing this is scapegoating, which is finding someone else to bear undeserved blame for the leader’s failings.

For the authoritarian, there is something of a paradox here. They must affirm their greatness and at the same time blame vastly inferior foes who manage to thwart them. These opponents must be both pathetic and exceptionally dangerous, stupid and yet brilliant, incompetent and yet effective and so on for a host of inconsistent qualities.

An authoritarian leader obviously desires followers and, fortunately for them, there are people of the authoritarian follower type. While opportunists often make use of authoritarian leaders and assist them, they are not believers. The authoritarian follower believes that their leader is special, that the leader alone can fix things. Thus, the followers must buy into the leader’s delusions and lies, convincing themselves despite the evidence to the contrary. And this is very dangerous.

Since the leader will tend to fail often, the followers must accept the explanations given to account for these failures. This requires rejecting facts and logic. The followers embrace lies and conspiracy theories, whatever supports the narrative of their leader’s greatness. Those who do not agree with the leader are not merely wrong but are enemies of the leader and thus enemies of the followers. The claims of those who disagree are rejected out of hand, often with hostility and insults. Thus, the followers tend to isolate themselves epistemically, which is a fancy way of saying that nothing that goes against their view of the leader ever gets in. This motivates a range of fallacies, including what I call the accusation of hate.

In the last pandemic, when I tried to discuss COVID-19 with Trump supporters, it almost always ended with them accusing me of hating Trump and rejecting anything I said that did not match Trump’s claims. I think they were sincere. Like everyone, they tend to believe and reject claims based on how they feel about the source. Since they like Trump, they believed him even when the evidence contradicted his claims. Since I disagreed with Trump’s false claims, they concluded I must have hated Trump. Otherwise, I would believe his claims. As they saw it, this also meant that I was wrong. While this makes psychological sense, it is bad logic and can be presented as a fallacy: the accusation of hate. It has this form:

 

Premise 1: Person A rejects Person B’s claim C.

Premise 2: Person A is accused of hating B.

Conclusion: Claim C is true.

 

As my usual silly math example shows, this is bad logic:

 

Premise 1: Dave rejects Adolph’s claim that 2+2=7.

Premise 2: Dave hates Adolph.

Conclusion: So, 2+2=7. 

 

While hating someone would be a biasing factor, this does not disprove the alleged hater’s claim. It can have psychological force since people tend to reject claims made by people they think hate someone they like. This is especially true in the case of authoritarian followers defending their leader.

Since authoritarian leaders are often delusional liars who fail often, deny these failures, and scapegoat others, they are extremely dangerous. The more power they have, the more harm they can do. They are enabled by their followers, which makes them dangerous as well. In a democracy the solution is to vote out the authoritarian and get a leader who does not live in a swamp of lies and delusions. Until then, non-authoritarian leaders must step up to make rational decisions based on truth and good science; otherwise the next pandemic will drive America into ruin while lies and delusions are spun.

During the next pandemic, accurate information will be critical to your wellbeing and even survival. Some sources will mean well but will unintentionally spread misinformation. Malicious sources will be spreading disinformation. While being an expert in a relevant field is the best way to sort out which sources to trust, most of us are not experts in these areas. But we are not helpless. While we cannot become medical experts overnight, we can learn skills for assessing sources.

When you accept a claim based on the (alleged) expertise of a source, you are using an argument from authority. Despite its usefulness, it is a relatively weak argument. Because you do not have direct evidence for the claim, you are relying on the source to be both accurate and honest. Despite this inherent weakness, a true expert is more likely to be right than wrong when making considered claims within their area of expertise. While the argument is usually presented informally, it has the following structure:

 

Premise 1: A is (claimed to be) an authority on subject S.

Premise 2:  A makes claim C about subject S.

Conclusion: Therefore, claim C is true.

 

As an informal example, when you believe what your doctor or HVAC technician claims, you are using an argument from authority. But how do you know when an alleged authority really is an expert? Fortunately, there are standards you can use even if you know little or nothing about the claim. To the degree that the argument meets the standards, it is reasonable to accept the conclusion. If the argument does not meet the standards, it would be a fallacy (a mistake in logic) to accept the conclusion. It would also be a fallacy to reject the conclusion because the appeal to authority was fallacious. This is because poor reasoning can still have a true conclusion, rather like how someone can guess the right answer to a math problem. Here are the standards for assessment.

First, the person must have sufficient expertise in the subject. A person’s expertise is determined by their relevant education (formal and otherwise), experience, accomplishments, reputation, and position. These should be carefully assessed to consider how well they establish expertise. For example, a person might be the head of a government agency because of family connections or political loyalty rather than ability or knowledge. The degree of expertise required also varies with the context. For example, someone who has completed college biology courses could be considered an expert when they claim that a virus replicates in living creatures by hijacking the machinery of their cells. But a few college courses in biology would not make them an expert in epidemiology.

Second, the claim must be in the person’s area of expertise.  Expertise in one area does not automatically confer expertise in another. For example, being a world-renowned physicist does not automatically make a person an expert on morality or politics. Unfortunately, this is often overlooked or intentionally ignored. Actors and musicians, for example, are often accepted as experts in fields beyond their artistic expertise. Billionaires are also often wrongly regarded as experts in many areas based on the mistaken view that being rich entails broad expertise. This does not mean that their claims outside their field are false, just that they lack the expertise to provide a good reason to accept the claim.

Third, there needs to be an adequate degree of agreement among the other experts in the field. If there is no adequate agreement, it would be a fallacy to appeal to the disputing experts. This is because for any claim made by an expert there would be a counterclaim by another qualified expert. In such cases, appealing to the authorities would be futile.

That said, no field has complete agreement, so a certain degree of dispute is acceptable when using this argument. How much is acceptable is a matter of debate, but the majority view of the qualified experts is what it is rational to believe. While they could turn out to be wrong, they are more likely to be right. Even if there is broad consensus, non-experts often make the mistake of picking a dissenting expert they agree with as being right. This is not good reasoning; agreeing with an expert is not a logical reason to believe they are right.

Fourth, the expert must not be significantly biased. Examples of biasing factors include financial gain, political ideology, sexism, and racism. A person’s credibility is reduced to the degree that they are biased. While everyone has biases, it becomes a problem when the bias is likely to unduly influence the person. For example, a doctor who owns a company that produces anti-viral medication could be biased when making claims about the efficacy of the medication. But while bias is a problem, it would also be a mistake to reject a person’s claim solely because of alleged bias. After all, a person could resist their biases and even a biased person can be right. Going with the anti-viral example, rejecting the doctor’s claim that it works because they can gain from its sale would be an ad hominem fallacy. While unbiased experts can be wrong, an unbiased expert is more credible than a biased expert—other factors being equal. 

Fifth, the area of expertise must be a legitimate area or discipline. While there can be debate about what counts as a legitimate area, there are clear cases. For example, if someone claims to be an expert in magical healing crystals and recommends using magic quartz to ward off Ebola, then it would be unwise to accept their claim.  In contrast, epidemiology is a legitimate field.

Sixth, the authority must be identified. If a person says a claim is true based on an anonymous expert, there is no way to tell if that person is a real expert. This does not make the claim false (to think otherwise would be a fallacy), but without the ability to assess the unnamed expert, you have no way of knowing if they are credible. In such cases, suspending judgment can be a rational option. As would be expected, unnamed experts are often cited on social media, and it is wise to be even more wary of anything you see there. It is also wise to be wary of false attributions; for example, someone might circulate false claims and attribute them to a credible expert.

Finally, the expert needs to be honest and trustworthy. While being honest means that a person says what they think is true, it does not follow that they are correct. An honest expert is more credible than a source inclined to dishonesty, but to infer that a dishonest source must be wrong would also be an error. After all, a dishonest source might be right this time, perhaps even while believing they are lying.

While these standards have been presented in terms of assessing individuals, the same standards apply to institutions and groups. As with individuals, you should update your assessments of groups as they change over time. For example, a federal agency that was staffed by experts and headed by an expert would be trustworthy; but if that agency were gutted and had its personnel replaced with political loyalists, then it would now lack authority.
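To pull these standards together, here is a minimal sketch, in Python, of how they can be treated as a checklist. This is only my illustration: the names, fields, and verdict strings are all hypothetical, and a real assessment is a matter of judgment rather than Boolean flags. The first standard (adequate expertise, discussed earlier) is included for completeness.

```python
from dataclasses import dataclass

# Illustrative sketch only: the standards for an appeal to authority
# treated as a checklist. All names here are hypothetical.

@dataclass
class Appeal:
    adequate_expertise: bool      # 1: the person really is an expert
    claim_in_their_field: bool    # 2: the claim falls within that expertise
    adequate_agreement: bool      # 3: experts in the field largely agree
    significantly_biased: bool    # 4: e.g., the expert profits from the claim
    legitimate_discipline: bool   # 5: epidemiology yes, healing crystals no
    expert_identified: bool       # 6: not an anonymous "expert"
    honest_and_trustworthy: bool  # 7: a track record of honesty

def assess(a: Appeal) -> str:
    """Rough verdict on the appeal, not on the claim itself: a failed
    standard weakens the appeal but never proves the claim false."""
    problems = []
    if not a.adequate_expertise:
        problems.append("lacks adequate expertise")
    if not a.claim_in_their_field:
        problems.append("claim is outside their field")
    if not a.adequate_agreement:
        problems.append("experts in the field disagree")
    if a.significantly_biased:
        problems.append("significant bias")
    if not a.legitimate_discipline:
        problems.append("not a legitimate discipline")
    if not a.expert_identified:
        problems.append("expert is unidentified")
    if not a.honest_and_trustworthy:
        problems.append("source is not trustworthy")
    if problems:
        return "weak appeal: " + "; ".join(problems)
    return "strong appeal: the claim is probably true"

# Example: an anonymous "expert" quoted in a social media post.
post = Appeal(True, True, True, False, True, False, True)
print(assess(post))  # -> weak appeal: expert is unidentified
```

The point of the sketch is that the standards work as a conjunction: a single failure, such as anonymity, is enough to weaken the appeal, even though it never shows the claim to be false.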

While assessing the credibility of sources is always important, the next pandemic will make this a matter of life and death. Those of us who are not epidemiologists or medical professionals must rely on others for information. While some people will provide accurate information, there will also be well-meaning people unintentionally spreading unsupported or even untrue claims. There will also be people knowingly spreading disinformation. Your well-being and even survival will depend on being able to determine which sources are credible and which are best avoided.

There are two types of credibility: rational and rhetorical. While a bit oversimplified, rational credibility means that you should logically believe the source and rhetorical credibility means that you feel you should believe the source. The difference between the two rests on the distinction between logical force and psychological force.

Logical force is objective and is a measure of how well the evidence for a claim supports it. In the case of logical arguments, this is assessed in ways ranging from applying the standards for inductive arguments to creating a truth table or working through a deductive proof. To the degree that a source has rational credibility, it is logical to accept the relevant claims from that source.
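As a standard illustration of the truth-table method (a stock textbook example, not one specific to this guide), consider modus ponens: if P then Q; P; therefore Q.

    P   Q   If P then Q
    T   T        T
    T   F        F
    F   T        T
    F   F        T

The only row in which both premises (P and "if P then Q") are true is the first, and in that row the conclusion Q is also true. Since there is no row with true premises and a false conclusion, the argument form is valid, and this holds regardless of who the audience happens to be.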

Psychological force is subjective and is a measure of how much emotional influence something has on a person’s willingness to believe a claim. This is assessed in practical terms: how effective was it in persuading someone to accept the claim? While the logical force of an argument is independent of the audience, psychological force is audience dependent. What might persuade one person to accept a claim might enrage another into rejecting it with extreme prejudice. Political devotion provides an excellent example of the relativity of psychological force. If you present a claim to Democrats and Republicans and attribute it to Trump, you will get different reactions.

Psychological force provides no reason or evidence for a claim.  But it is more effective at persuading people than logical force. To use an analogy, the difference between the two is like the difference between junk food and kale. While junk food is tasty, it lacks nutritional value. While kale is good for you, many people find it unappealing. Because of this distinction, when people ask me how to “win” arguments, I always ask them what they mean by “win.” If they mean “provide proof that my claim is true”, then I say they should use logic. If they mean “get people to feel I am right, whether I am right or not”, then I say they should focus on psychological force. Rhetoric and fallacies (bad logic) have far more psychological force than good logic, which creates no end of problems.

The vulnerability of people to psychological force makes it dangerous during a pandemic. When people assess sources based on how they feel, they are far more likely to accept disinformation and misinformation. This leads to acting on false beliefs, which can get people killed. Health and survival during a pandemic depend on being able to correctly assess sources, and this requires being able to neutralize (or at least reduce) the influence of psychological force. This is a hard thing to do, especially since the fear and desperate hope created by a pandemic make people even more vulnerable to psychological force and less trusting of logic. It is my hope that this guide will provide some small assistance to people in the next pandemic.

One step in weakening psychological force is being aware of factors that are logically irrelevant but psychologically powerful. One set of factors consists of qualities that make people appealing but have no logical relevance to whether their claims are credible. One such factor is the appearance of confidence. A person who makes eye contact, has a firm handshake, is not sweating, and does not laugh nervously seems credible, which is why scammers and liars learn to behave this way. But reflection shows that these are irrelevant to rational credibility. To use my usual silly math example, imagine someone saying, "I used to think 2+2=4, but Billy looked me right in the eye and confidently said 2+2=12. So that must be true." Obviously, there are practical reasons to look confident when making claims, but confidence proves nothing. And lack of confidence disproves nothing: "I used to think 2+2=4, but Billy seemed nervous and unsure when he said that 2+2=4. So, he must be wrong."

Rhetorical credibility also arises from qualities that you might look for in a date or friend. These can include physical qualities such as height, weight, attractiveness and style of dress. These also include age, ethnicity, and gender. But these are all logically irrelevant to rational credibility. To use the silly math example, “Billy is tall, handsome, straight, wearing a suit, and white so when he says that 2+2=12, he must be right!” Anyone should recognize that as bad “logic.”  Yet when a source is appealing, people tend to believe them despite the irrelevance of the appeal. One defense is to ask yourself if you would still believe the claim if it was made by someone unappealing to you.

Rhetorical credibility also arises from good qualities that are irrelevant to rational credibility. These include kindness, niceness, friendliness, sincerity, compassion, generosity and other virtues. While someone who is kind and compassionate will usually not lie, this does not entail that they are a credible source. To use a silly example, “Billy is so nice and kind and he says 2+2=12. I had my doubts at first, but how could someone so nice be wrong?” To use a less silly example, a kind person might be misinformed and unwittingly pass on dangerous disinformation with the best of intentions. A defense is to ask yourself if you would still believe the claim if it was made by someone who has bad qualities. But what about honesty? Surely, we should believe what an honest source says.

While it is tempting to see honesty as the same thing as telling the truth, a more accurate definition is that an honest person says what they think is true. They could be honestly making a false claim. A dishonest person will try to pass off as true what they think is untrue. And even dishonest people do not lie all the time. As such, while honesty does have a positive impact on rational credibility and dishonesty has a negative impact, they are not decisive. But an honest source is preferable to a dishonest one. Sorting out honest and dishonest sources can be challenging.

Group affiliation, ideology, and other values have a huge impact on how people judge rhetorical credibility. If a claim is made by someone on your side or matches your values, you will probably be inclined to believe it. For example, Trump supporters will tend to believe what Trump says because Trump says it. If a claim is made by the "other side" or goes against your values, then you will tend to reject it. For example, anti-Trump folks will usually doubt what Trump says. An excellent historical example of how ideology can provide rhetorical credibility is the case of Stalin and Lysenko: by appealing to ideology, Lysenko made his false views the foundation of Soviet science. This provides a cautionary tale worth considering. While affiliations and values lead people to engage in motivated "reasoning," it is possible to resist their lure and try to assess the rational credibility of a source.

One defense is to use my silly math example as a guide: "Trump says that 2+2=12; Trump is my guy, so he must be right!" Or "Trump says 2+2=4, but I hate him, so he must be wrong." Another defense is to imagine the claim being made by the other side or by someone with different values. For example, a Trump supporter could have imagined Obama or Clinton making the claims about hydroxychloroquine that Trump made. As a reverse example, Trump haters could try the same thing. This is not a perfect defense, but it might help.

While this short guide tries to help people avoid falling victim to rhetorical credibility, standards are also needed to determine when you should probably trust a source: standards for rational credibility. That is the subject of the next essay.

Critical thinking can save your life, especially during a pandemic of pathogens, disinformation, and misinformation. While we are not in a pandemic as this is being written, it is only a question of when the next one will arrive. As our government is likely to be unwilling or unable to help us, we need to prepare to face it on our own. Hence, this series on applying critical thinking to pandemics.

Laying aside academic jargon, critical thinking is the rational assessment of a claim to determine whether you should accept it as probably true, reject it as probably false, or suspend judgment. People often forget they can suspend judgment, but in the face of misinformation and disinformation this is sometimes the best option.

Suppose you saw a Facebook post claiming that drinking alcohol will protect you from a viral disease, you saw a tweet about how gargling with bleach can kill viruses, or you heard President Trump extolling the virtues of hydroxychloroquine as a treatment for COVID. How can you rationally assess these claims if you are not a medical expert? Fortunately, critical thinking can help even those whose medical knowledge is limited to what they saw on Grey's Anatomy.

When a claim is worth assessing, the first step is to check it against your own observations to see if it fits them. If it does not, this is a mark against it. If it does, that is a point in its favor. Take the bleach claim as an example. If you look at a bottle of bleach, you will see the safety warnings. While bleach will probably kill viruses, it is dangerous to gargle it. So, it would be best to reject the claim that you should gargle bleach. While your own observations are a good check on claims, they are not infallible, and it is wise to critically consider their reliability.

The second step, which usually happens automatically, is to test the claim against your background information. Your background information is all the stuff you have learned over the years. When you get a claim, you match it up against your background information to get a rough assessment of its initial plausibility. This is how likely it seems to be true upon first consideration. The plausibility will be adjusted should you investigate more. As an example, consider the claim about alcohol’s effect on viruses. On the one hand, you probably know alcohol can sterilize things and this raises the plausibility of the claim. But you probably have not heard of people protecting themselves successfully from the flu or cold (which are caused by viruses) by drinking alcohol. Also, you probably have in your background information that the alcohol used to sterilize is poisonous and differs from, for example, whiskey. So, it would be unwise to believe that drinking alcohol is a good way to protect oneself from viruses.  
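As a rough illustration of these two steps, here is a toy sketch in Python. It is only my illustration: the thresholds and adjustments are invented, and a real assessment is nowhere near this mechanical.

```python
# Toy sketch of the two-step check: observations first, then background
# information. All numbers here are invented for illustration.

def assess_claim(fits_observations, initial_plausibility):
    """fits_observations: True, False, or None (no relevant observations).
    initial_plausibility: rough 0.0-1.0 fit with background information.
    Returns 'accept', 'reject', or 'suspend judgment'."""
    p = initial_plausibility
    if fits_observations is True:
        p = min(1.0, p + 0.2)  # fitting your observations is a point in its favor
    elif fits_observations is False:
        p = max(0.0, p - 0.4)  # conflicting with them is a mark against it
    if p >= 0.8:
        return "accept"
    if p <= 0.2:
        return "reject"
    return "suspend judgment"  # often forgotten, sometimes the best option

# "Gargle bleach to kill viruses": the safety label is an observation against it.
print(assess_claim(False, 0.3))   # -> reject
# "Drinking alcohol protects you from viruses": implausible given background info.
print(assess_claim(None, 0.15))   # -> reject
```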

One major problem here is that everyone's background information is full of false beliefs. I know, from experience, that I have had many false beliefs and infer that I still do. I do not know which ones are false; if I did, I would stop believing them. Because of our fallibility, this method has a serious flaw: we could accept or reject a claim because of a false belief. This is why it is a good idea to assess our beliefs. We can only be rationally confident in our assessment of a claim to the degree that our background information is likely to be correct. The more you know, the better you will be at making such assessments.

While having false beliefs can cause errors, people are also affected by biases and fallacies. Since there is a multitude of both, I will only briefly discuss a few that are relevant during a pandemic. People tend to be biased in favor of their group, be it their religion, political party, or sports team. Bias inclines people to believe claims made by members of their group, which fuels the groupthink fallacy: believing a claim is true because you are proud of your group and someone in your group made it. This can also be a version of the appeal to belief fallacy, in which one believes a claim is true because their group believes it is true. While pandemics cross party lines, the last pandemic was politicized. Because of this, people with strong partisanship often believe what their side says and disbelieve the other side. But believing based on group membership is bad logic and can get you killed. As such, making rational assessments in a pandemic (or anytime) requires fighting biases and considering claims as objectively as possible. This is a hard thing to do, but it can save your life.

As pandemics are terrifying and people want to have hope, it is wise to be on guard against appeal to fear (scare tactics) and wishful thinking. An appeal to fear occurs when a claim is accepted as true because of fear rather than based on evidence or reasons. For example, if someone believes that migrants are criminals because a news channel made them afraid of migrants, they have fallen for scare tactics.

It needs to be noted that something frightening can also serve as evidence. To illustrate, Ebola is scary because it can kill you, so reasoning that you should avoid it because it is deadly is good logic. While emotions affect belief, they are logically neutral, whether the feeling is fear or hope.

Wishful thinking is a classic fallacy in which a person believes a claim because they want it to be true (or rejects one because they want it to be false). When a pandemic is taking place, it is natural for people to engage in wishful thinking. For example, a person might believe that they will not get sick based on wishful thinking, which can be very dangerous to themselves and others. As another example, someone might believe drinking alcohol will protect them from COVID because they want it to be true, but it is not. The defense against wishful thinking is not to give up all hope but to avoid taking hope as evidence. This can be hard to do, since objectively considering claims during a pandemic can be depressing. But wishful thinking can get you and others killed. In the next essay, I will discuss how to assess experts and alleged experts.