During the COVID-19 pandemic, some politicians argued that America should be reopened because the dire predictions about COVID turned out to be wrong. On the face of it, this appears to be good reasoning that we could use in the next pandemic: if things are not as bad as predicted, we can start reopening sooner than predicted. To use an analogy, if a fire was predicted to destroy your house, but it only burned your garage, then it would make sense to move back in and rebuild the garage. While this reasoning is appealing, it also can be a trap. Here is how the trap works.

Some politicians and pundits pointed out that the dire predictions about COVID did not come true. For example, the governor of Florida said that since the hospitals were not overwhelmed as predicted, it was a good idea for them to return to profitable elective surgery. He also, like some other Republican governors, wanted to reopen very quickly. This reasoning seemed sensible: the pandemic was not as bad as predicted, so we can quickly reopen. There were also those who sneered at the dire predictions and were upset at what they saw as excessive precautions. This can also seem sensible: the experts predicted a terrible outcome for COVID-19, but they were wrong. We overreacted and should have rolled back the precautions when the predictions did not come true. But would this be a wise strategy for the next pandemic?

While it is reasonable to consider whether the precautions were excessive, there is a tempting fallacy that needs to be avoided. This is the prediction fallacy. It occurs when someone uncritically rejects a prediction and the responses to that prediction because the predicted outcome did not occur. The error in the logic occurs because the person fails to consider what should be obvious: if a prediction is responded to effectively, then the prediction is going to be “wrong.” The form of the fallacy is this:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen, so prediction P is wrong.

Conclusion: We should not have taken Response R (or no longer need to take Response R).

 

To use a concrete example:

 

Premise 1: Experts predicted that the hospitals in Florida would be overwhelmed if we did not respond with social distancing and other precautions.

Premise 2: People responded to this prediction with social distancing and other precautions.

Premise 3: The hospitals in Florida were not overwhelmed, so the prediction was wrong.

Conclusion: The response was excessive, and we no longer need these precautions.

 

 

While it is (obviously) true that a prediction that turns out to be wrong is wrong, the error is uncritically concluding that this proves that the response based on the prediction need not have been taken (or that we no longer need to keep responding in this way). The prediction assumes we do not respond (or do not respond a certain way), and the response is made to address the prediction. If the response is effective, then the predicted outcome will not occur; that is the point of responding. To conclude that the “failure” of the prediction shows that the response was mistaken or no longer needed would thus be a mistake in reasoning. You could be right, but you need to do more than point to the failed prediction.

As a silly but effective analogy, imagine we are driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel, or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we had not turned, then we would have died. And if we turn back too soon, then we will die.

The same applies to COVID-19: by responding effectively to dire predictions, we changed the outcome and the predictions turned out to be wrong. But to infer that the responses were excessive or that we should have stopped them simply because the results were not as dire as predicted would be an error.

This is not to deny what is obviously true: it is possible to overreact to a pandemic. But making decisions based on the prediction fallacy is a bad idea. There is also another version of this fallacy.

This second version involves inferring that the prediction itself was a bad one because it turned out to be wrong:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: The prediction was wrong about X occurring if we did not do R.

 

While the prediction turned out to be wrong in the sense that the predicted outcome did not occur, this does not disprove the claim that X would have occurred without the response. Going back to the car analogy, the prediction that we will die if we do not turn is not disproven when we turn and do not die. In fact, that is the result we want.

Getting back to COVID-19, the predictions made about what could occur if we did nothing are not disproven by the fact that they did not come true when we did something. So, to infer that these predictions must have been wrong about what would have occurred if we had done nothing would be an error. We do, of course, rationally assess predictions based on outcomes, but this assessment should not ignore the effect of the response. Sorting out such counterfactual predictions is hard. In complex cases we can probably never prove what would have happened, but good methods can guide us here, which is why we need to go with science and math rather than hunches and feelings.

This fallacy draws considerable force from psychological factors, especially in the case of COVID-19. The response that was taken to the virus came with a high cost and we wanted things to get back to normal—so, ironically, the success of the response made us feel that we could stop quickly or that we did not need such a response.  As always, bad reasoning can lead to bad consequences and in a pandemic, it can hurt and even kill many people.

Stay safe and I will see you in the future.

Years ago, my coverage of medical testing in my critical thinking class was purely theoretical for most of my students. But COVID-19 changed that. One common type of medical test determines whether a person is infected with a disease, such as COVID-19. Another is to determine whether a person has had the infection. While tests are a critical source of information, we need to be aware of the limitations of testing. Since I am not a medical expert, I will not comment on the accuracy of specific methods of testing. Instead, I will look at applying critical thinking to testing.

An ideal medical test would always be accurate and never yield false results. Real medical tests have, for various reasons, less than 100% accuracy, and a good test will usually fall into the 90-99% range. This means that any test can be wrong. So how do you rationally assess test results?

Intuitively, the chance a person was infected (or not) would seem to be the same as the accuracy of the test. For example, if a COVID-19 test has an accuracy of 90% and you test negative, then it seems that there is a 90% chance you do not have COVID. Or, if you test positive, then you might think there is a 90% chance you have COVID. While this seems sensible, it is not correct and arises from a common mistake about conditional probabilities. I will keep math to a minimum because math, as Barbie said, is hard.

So, suppose that I test positive for COVID and the test is 90% accurate. If I think there is a 90% chance I have COVID, I am probably wrong, and here is why. The mistake is failing to recognize that the probability of X given Y is distinct from the probability of Y given X. In the case of the test, testing positive is the effect of COVID and obviously not the cause. As such, a 90% accurate test for COVID does not mean that 90% of those who test positive (effect) will have COVID (cause). It means that 90% of those who have COVID (cause) will test positive (effect). So, if I have COVID, then there is a 90% chance the test will detect it. The wrong way of looking at it would be to think that if I test positive, then there is a 90% chance I have COVID. So, what is the true chance I have COVID if I test positive on a test that is 90% accurate? The answer is that I do not know. But I do know how to do the math to sort it out.

To know my chance of having COVID I would also need to know the percentage of false positives that occur with the test and, very importantly, the base rate of the infection. The base rate is the frequency of the cause, that is, how common the infection is in the population being tested. Using my made-up test and some made-up numbers, here is how the math would go.

Suppose that the 90% accurate test has a 10% false positive rate and 1% of the population in question is infected. For every 1,000 people in the population:

 

  • 10 people will have COVID.
  • 9 of the people with COVID will test positive.
  • 990 people will not have COVID.
  • 99 of the people without COVID will test positive.

 

While there will be 108 positive test results, only 9 of them will be people who have COVID. So, a person who tests positive has about an 8% chance of having COVID, not 90%. In conditional terms and using these made-up numbers, if I have COVID, then there is a 90% chance I will test positive. But if I test positive, then there is only about an 8% chance I have COVID.
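To make the arithmetic explicit, here is a minimal sketch in Python that reproduces the 1,000-person breakdown above. The numbers are the made-up ones from this example, and the variable names are my own.

```python
# Made-up numbers from the example above (a hypothetical 90% accurate test).
population = 1000
prevalence = 0.01            # 1% of the population is infected
sensitivity = 0.90           # 90% of infected people test positive
false_positive_rate = 0.10   # 10% of uninfected people test positive

infected = population * prevalence                    # 10 people
true_positives = infected * sensitivity               # 9 people
uninfected = population - infected                    # 990 people
false_positives = uninfected * false_positive_rate    # 99 people

total_positives = true_positives + false_positives    # 108 positive results
chance_if_positive = true_positives / total_positives

print(f"Positive results per 1,000 people: {total_positives:.0f}")
print(f"Chance of having COVID given a positive test: {chance_if_positive:.0%}")  # about 8%
```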

At this point it might be tempting to think that testing is useless, but that would be a mistake. Testing is useful in gathering data about infection rates. A positive result is also more likely to be correct in populations with higher rates of COVID infection, but this is a function of statistics rather than a function of the test. To illustrate this, let us run the example again with one change, which is increasing the rate of infection to 10%. For every 1,000 people in the population:

 

  • 100 people will have COVID.
  • 90 of the people with COVID will test positive.
  • 900 people will not have COVID.
  • 90 of the people without COVID will test positive.

 

There will be 180 positive test results and 50% of them will have COVID. So, if I test positive for COVID, then there is a 50% chance I have COVID. Again, this is a matter of statistics, as the test accuracy, by hypothesis, has not changed. Because of this, testing groups that we know have higher infection rates will give better statistical results that can be useful—but much of the use will be in terms of additional statistical analysis. NPR provided an excellent discussion of antibody testing for COVID and even included a calculator that will do the math for you.
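A calculator of this kind can be sketched as a single function. This is my own illustration (not NPR's code, and the function name is made up); it applies the same made-up test numbers to both infection rates used above.

```python
def chance_of_covid_given_positive(prevalence, sensitivity=0.90, false_positive_rate=0.10):
    """Probability of infection given a positive result (Bayes' theorem)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

print(f"{chance_of_covid_given_positive(0.01):.0%}")  # about 8% at a 1% infection rate
print(f"{chance_of_covid_given_positive(0.10):.0%}")  # 50% at a 10% infection rate
```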

In terms of putting your trust in a test, such as an antibody test to determine whether you had COVID or not, it is wise to keep the math in mind. Even if surviving COVID confers some immunity, a positive test might mean only an 8% chance that you had COVID. And until we know the rate of infection, we would essentially be guessing when doing the math. During the pandemic the rational approach might have seemed odd: you should have acted as if you had COVID (to avoid spreading it) while also assuming that you had not had it (and so had no immunity). The same will apply to the next pandemic.

While it would be irrational to reject medical claims of health care experts in favor of those made by a ruler, this happened in the last pandemic and will happen again. Why people do this is mainly a matter of psychology, but the likely errors in reasoning are a matter of philosophy.

While those who accept a ruler as a medical authority are falling victim to a fallacious appeal to authority, it is worth considering the specific version of the fallacy being committed. I am calling this fallacy the argument from authoritarian. The error occurs when a person believes a claim simply because it is made by the authoritarian leader they accept. It has this form:

 

Premise 1: Authoritarian leader L makes claim C.

Conclusion: Claim C is true.

 

The fact that an authoritarian leader makes a claim does not provide evidence that supports the claim. It also does not disprove the claim. Accepting or rejecting a claim because it comes from an authoritarian would both be errors. The authoritarian could be right but, as with any logical fallacy, the error lies in the reasoning.

The use of my usual silly math example illustrates why this is bad logic:

 

Premise 1: The dear leader claims that 2+2=7.

Conclusion: The dear leader is right.

 

At this point, you might be thinking about the consequences someone might suffer for doubting what an authoritarian leader claims. They could be fired, exiled, tortured, or even killed. While that is true, there is a critical distinction between having a rational reason to believe a claim is true and having a pragmatic reason to accept a claim or at least pretend to do so. Fear of retaliation by an authoritarian can provide a practical reason to go along with them, but it does not provide evidence that their claims are true. No matter how brutally an authoritarian enforces their view that 2+2=7 and no matter how many people echo their words, 2+2=4. While fear can motivate people to accept an argument from authoritarian, there are other psychological reasons that can drive such bad logic. This takes us to a simplified look at the authoritarian leader type and the authoritarian follower type. The same person can have qualities of both, and everyone has at least some of these traits. The degree to which a person has them is what matters.

An authoritarian leader type is characterized by the belief that they have a special status as a leader. At the extreme, the authoritarian leader believes they are the voice of their followers, and they alone can lead. Or, as Trump put it, “I alone can fix it.” Underlying this is the belief they possess exceptional skills, knowledge and abilities. As Socrates found out, people think they know far more than they do, but the authoritarian leader takes this to extremes and overestimates their abilities. This, as would be expected, leads them to make false claims and mistakes.

Since the authoritarian leader is extremely reluctant to admit their errors and limits, they must be dishonest to the degree they are not delusional and delusional to the degree they are not dishonest. Because of the need to maintain the lies and delusions about their greatness and success, the authoritarian leader is intolerant of criticism, dissent, and competition. To the extent they can do so, they use coercion against those who disagree and resort to insults when they cannot intimidate. Because facts, logic and science would tell against them, they tend to oppose all of these and base many of their beliefs on feelings, biases, and bad logic. They encourage their followers to do the same—in fact, they would not have true followers if no one followed their lead here.

While an authoritarian leader might have some degree of competence, their excessive overestimation of their abilities and their fear of competent competition (even among those who serve them) will result in regular and often disastrous failures. Maintaining their delusions and lies in the face of failure requires explaining it away. One approach is denial, which is to ignore reality. A second approach is to blame others; the leader is not at fault, because someone else is responsible. One method of doing this is scapegoating, which is finding someone else to bear undeserved blame for the leader’s failings.

For the authoritarian, there is something of a paradox here. They must affirm their greatness and at the same time blame vastly inferior foes who manage to thwart them. These opponents must be both pathetic and exceptionally dangerous, stupid and yet brilliant, incompetent and yet effective and so on for a host of inconsistent qualities.

An authoritarian leader obviously desires followers and, fortunately for them, there are people of the authoritarian follower type. While opportunists often make use of authoritarian leaders and assist them, they are not believers. The authoritarian follower believes that their leader is special, that the leader alone can fix things. Thus, the followers must buy into the leader's delusions and lies, convincing themselves despite the evidence to the contrary. And this is very dangerous.

Since the leader will tend to fail often, the followers must accept the explanations given to account for these failures. This requires rejecting facts and logic. The followers embrace lies and conspiracy theories, whatever supports the narrative of their leader's greatness. Those who do not agree with the leader are not merely wrong but are enemies of the leader and thus enemies of the followers. The claims of those who disagree are rejected out of hand, often with hostility and insults. Thus, the followers tend to isolate themselves epistemically, which is a fancy way of saying that nothing that goes against their view of the leader ever gets in. This motivates a range of fallacies, including what I call the accusation of hate.

In the last pandemic, when I tried to discuss COVID-19 with Trump supporters, it almost always ended with them accusing me of hating Trump and rejecting anything I said that did not match Trump's claims. I think they were sincere. Like everyone, they tend to believe and reject claims based on how they feel about the source. Since they liked Trump, they believed him even when the evidence contradicted his claims. Since I disagreed with Trump's false claims, they concluded I must have hated Trump; otherwise, I would have believed his claims. As they saw it, this also meant that I was wrong. While this makes psychological sense, it is bad logic and can be presented as a fallacy: the accusation of hate. It has this form:

 

Premise 1: Person A rejects Person B’s claim C.

Premise 2: Person A is accused of hating B.

Conclusion: Claim C is true.

 

As my usual silly math example shows, this is bad logic:

 

Premise 1: Dave rejects Adolph’s claim that 2+2=7.

Premise 2: Dave hates Adolph.

Conclusion: So, 2+2=7. 

 

While hating someone would be a biasing factor, this does not disprove the alleged hater’s claim. It can have psychological force since people tend to reject claims made by people they think hate someone they like. This is especially true in the case of authoritarian followers defending their leader.

Since authoritarian leaders are often delusional liars who fail often, deny their failures, and scapegoat others, they are extremely dangerous. The more power they have, the more harm they can do. They are enabled by their followers, which makes the followers dangerous as well. In a democracy the solution is to vote out the authoritarian and get a leader who does not live in a swamp of lies and delusions. Until then, non-authoritarian leaders must step up to make rational decisions based on truth and good science; otherwise the next pandemic will drive America into ruin while lies and delusions are spun.

During the next pandemic, accurate information will be critical to your wellbeing and even survival. Some sources will mean well but will unintentionally spread misinformation. Malicious sources will be spreading disinformation. While being an expert in a relevant field is the best way to sort out which sources to trust, most of us are not experts in these areas. But we are not helpless. While we cannot become medical experts overnight, we can learn skills for assessing sources.

When you accept a claim based on the (alleged) expertise of a source, you are using an argument from authority. Despite its usefulness, it is a relatively weak argument. Because you do not have direct evidence for the claim, you are relying on the source to be both accurate and honest. Despite the inherent weakness of this argument, a true expert is more likely to be right than wrong when making considered claims within their area of expertise. While the argument is usually presented informally, it has the following structure:

 

Premise 1: A is (claimed to be) an authority on subject S.

Premise 2:  A makes claim C about subject S.

Conclusion: Therefore, claim C is true.

 

As an informal example, when you believe what your doctor or HVAC technician claims, you are using an argument from authority.  But how do you know when an alleged authority really is an expert? Fortunately, there are standards you can use even if you know little or nothing about the claim. To the degree that the argument meets the standards, then it is reasonable to accept the conclusion. If the argument does not meet the standards, it would be a fallacy (a mistake in logic) to accept the conclusion. It would also be a fallacy to reject the conclusion because the appeal to authority was fallacious. This is because poor reasoning can still have a true conclusion; rather like how someone can guess the right answer to a math problem. Here are the standards for assessment.

First, the person must have sufficient expertise in the subject. A person's expertise is determined by their relevant education (formal and otherwise), experience, accomplishments, reputation, and position. These should be carefully assessed to consider how well they establish expertise. For example, a person might be the head of a government agency because of family connections or political loyalty rather than ability or knowledge. The degree of expertise required also varies with the context. For example, someone who has completed college biology courses could be considered an expert when they claim that a virus replicates in living creatures by hijacking the cell's machinery. But a few college courses in biology would not make them an expert in epidemiology.

Second, the claim must be in the person’s area of expertise.  Expertise in one area does not automatically confer expertise in another. For example, being a world-renowned physicist does not automatically make a person an expert on morality or politics. Unfortunately, this is often overlooked or intentionally ignored. Actors and musicians, for example, are often accepted as experts in fields beyond their artistic expertise. Billionaires are also often wrongly regarded as experts in many areas based on the mistaken view that being rich entails broad expertise. This does not mean that their claims outside their field are false, just that they lack the expertise to provide a good reason to accept the claim.

Third, there needs to be an adequate degree of agreement among the other experts in the field. If there is not adequate agreement, it would be a fallacy to appeal to the disputing experts. This is because for any claim made by one expert there will be a counterclaim by another qualified expert. In such cases, appealing to the authorities would be futile.

That said, no field has complete agreement, so a certain degree of dispute is acceptable when using this argument. How much is acceptable is a matter of debate, but the majority view of the qualified experts is what it is rational to believe. While they could turn out to be wrong, they are more likely to be right. Even if there is broad consensus, non-experts often make the mistake of picking a dissenting expert they agree with as being right. This is not good reasoning; agreeing with an expert is not a logical reason to believe they are right.

Fourth, the expert must not be significantly biased. Examples of biasing factors include financial gain, political ideology, sexism, and racism. A person’s credibility is reduced to the degree that they are biased. While everyone has biases, it becomes a problem when the bias is likely to unduly influence the person. For example, a doctor who owns a company that produces anti-viral medication could be biased when making claims about the efficacy of the medication. But while bias is a problem, it would also be a mistake to reject a person’s claim solely because of alleged bias. After all, a person could resist their biases and even a biased person can be right. Going with the anti-viral example, rejecting the doctor’s claim that it works because they can gain from its sale would be an ad hominem fallacy. While unbiased experts can be wrong, an unbiased expert is more credible than a biased expert—other factors being equal. 

Fifth, the area of expertise must be a legitimate area or discipline. While there can be debate about what counts as a legitimate area, there are clear cases. For example, if someone claims to be an expert in magical healing crystals and recommends using magic quartz to ward off Ebola, then it would be unwise to accept their claim.  In contrast, epidemiology is a legitimate field.

Sixth, the authority must be identified. If a person says a claim is true based on an anonymous expert, there is no way to tell if that person is a real expert. This does not make the claim false (to think otherwise would be a fallacy) but without the ability to assess the unnamed expert, you have no way of knowing if they are credible.  In such cases, suspending judgment can be a rational option. As would be expected, unnamed experts are often used on social media, and it is wise to be even more wary about anything on those. It is also wise to be wary of false attributions; for example, someone might circulate false claims and attribute them to a credible expert.

Finally, the expert needs to be honest and trustworthy. While being honest means that a person is saying what they think is true, it does not follow that they are correct. But an honest expert is more credible than a source that is inclined to dishonesty. Still, to infer that a dishonest source must be wrong would also be an error. After all, the dishonest source might be right this time, perhaps even while believing they are lying.

While these standards have been presented in terms of assessing individuals, the same standards apply to institutions and groups. As with individuals, you should update your assessments of groups as they change over time. For example, a federal agency that was staffed by experts and headed by an expert would be trustworthy; but if that agency was gutted and had its personnel replaced with political loyalists, then it would now lack authority.

While assessing the credibility of sources is always important, the next pandemic will make this a matter of life and death. Those of us who are not epidemiologists or medical professionals must rely on others for information. While some people will provide accurate information, there will also be well-meaning people unintentionally spreading unsupported or even untrue claims. There will also be people knowingly spreading disinformation. Your well-being and even survival will depend on being able to determine which sources are credible and which are best avoided.

There are two types of credibility: rational and rhetorical. While a bit oversimplified, rational credibility means that you should logically believe the source and rhetorical credibility means that you feel you should believe the source. The difference between the two rests on the distinction between logical force and psychological force.

Logical force is objective and is a measure of how well the evidence for a claim supports it. In the case of logical arguments, this is assessed in ways ranging from applying the standards of inductive arguments, to creating a truth table, to working through a deductive proof. To the degree that a source has rational credibility, it is logical to accept the relevant claims from that source.

Psychological force is subjective and is a measure of how much emotional influence something has on a person’s willingness to believe a claim. This is assessed in practical terms: how effective was it in persuading someone to accept the claim? While the logical force of an argument is independent of the audience, psychological force is audience dependent. What might persuade one person to accept a claim might enrage another into rejecting it with extreme prejudice. Political devotion provides an excellent example of the relativity of psychological force. If you present a claim to Democrats and Republicans and attribute it to Trump, you will get different reactions.

Psychological force provides no reason or evidence for a claim.  But it is more effective at persuading people than logical force. To use an analogy, the difference between the two is like the difference between junk food and kale. While junk food is tasty, it lacks nutritional value. While kale is good for you, many people find it unappealing. Because of this distinction, when people ask me how to “win” arguments, I always ask them what they mean by “win.” If they mean “provide proof that my claim is true”, then I say they should use logic. If they mean “get people to feel I am right, whether I am right or not”, then I say they should focus on psychological force. Rhetoric and fallacies (bad logic) have far more psychological force than good logic, which creates no end of problems.

The vulnerability of people to psychological force makes it dangerous during a pandemic. When people assess sources based on how they feel, they are far more likely to accept disinformation and misinformation. This leads to acting on false beliefs which can get people killed. Health and survival during a pandemic depend on being able to correctly assess sources and this requires being able to neutralize (or at least reduce) the influence of psychological force. This is a hard thing to do, especially since the fear and desperate hope created by a pandemic makes people even more vulnerable to psychological force and less trusting of logic. It is my hope that this guide will provide some small assistance to people in the next pandemic.

One step in weakening psychological force is being aware of factors that are logically irrelevant but psychologically powerful. One set of factors consists of qualities that make people appealing but have no logical relevance to whether their claims are credible. One irrelevant factor is the appearance of confidence. A person who makes eye contact, has a firm handshake, is not sweating, and does not laugh nervously seems credible, which is why scammers and liars learn to behave this way. But reflection shows that these are irrelevant to rational credibility. To use my usual silly math example, imagine someone saying “I used to think 2+2=4, but Billy looked me right in the eye and confidently said 2+2=12. So that must be true.” Obviously, there are practical reasons to look confident when making claims, but confidence proves nothing. And lack of confidence disproves nothing. To use a silly example, “I used to think 2+2=4, but Billy seemed nervous and unsure when he said that 2+2=4. So, he must be wrong.”

Rhetorical credibility also arises from qualities that you might look for in a date or friend. These can include physical qualities such as height, weight, attractiveness and style of dress. These also include age, ethnicity, and gender. But these are all logically irrelevant to rational credibility. To use the silly math example, “Billy is tall, handsome, straight, wearing a suit, and white so when he says that 2+2=12, he must be right!” Anyone should recognize that as bad “logic.”  Yet when a source is appealing, people tend to believe them despite the irrelevance of the appeal. One defense is to ask yourself if you would still believe the claim if it was made by someone unappealing to you.

Rhetorical credibility also arises from good qualities that are irrelevant to rational credibility. These include kindness, niceness, friendliness, sincerity, compassion, generosity and other virtues. While someone who is kind and compassionate will usually not lie, this does not entail that they are a credible source. To use a silly example, “Billy is so nice and kind and he says 2+2=12. I had my doubts at first, but how could someone so nice be wrong?” To use a less silly example, a kind person might be misinformed and unwittingly pass on dangerous disinformation with the best of intentions. A defense is to ask yourself if you would still believe the claim if it was made by someone who has bad qualities. But what about honesty? Surely, we should believe what an honest source says.

While it is tempting to see honesty as the same thing as telling the truth, a more accurate definition is that an honest person says what they think is true. They could be honestly making a false claim. A dishonest person will try to pass off as true what they think is untrue. And even dishonest people do not lie all the time. As such, while honesty does have a positive impact on rational credibility and dishonesty has a negative impact, they are not decisive. But an honest source is preferable to a dishonest one. Sorting out honest and dishonest sources can be challenging.

Group affiliation, ideology and other values have a huge impact on how people judge rhetorical credibility. If a claim is made by someone on your side or matches your values, you will probably be inclined to believe it. For example, Trump supporters will tend to believe what Trump says because Trump says it. If a claim is made by the “other side” or goes against your values, then you will tend to reject it. For example, anti-Trump folks will usually doubt what Trump says.  An excellent historical example of how ideology can provide rhetorical credibility is the case of Stalin and Lysenko—by appealing to ideology Lysenko made his false views the foundation of Soviet science. This provides a cautionary tale worth considering. While affiliations and values lead people to engage in motivated “reasoning” it is possible to resist their lure and try to assess the rational credibility of a source.

One defense is to use my silly math example as a guide: “Trump says that 2+2=12; Trump is my guy so he must be right!” Or “Trump says 2+2=4, but I hate him so he must be wrong.” Another defense is to try to imagine the claim being made by the other side or by someone who has different values. For example, a Trump supporter could have imagined Obama or Clinton making the claims about hydroxychloroquine that Trump made. As a reverse example, Trump haters could try the same thing. This is not a perfect defense, but it might help.

While this short guide tries to help people avoid falling victim to rhetorical credibility, standards are also needed to determine when you should probably trust a source—that is, standards for rational credibility. That is the subject of the next essay.

Critical thinking can save your life, especially during a pandemic of pathogens, disinformation and misinformation. While we are not in a pandemic as this is being written, it is a question of when the next one will arrive. As our government is likely to be unwilling and unable to help us, we need to prepare to face it on our own. Hence, this series on applying critical thinking to pandemics.

Laying aside academic jargon, critical thinking is the rational assessment of a claim to determine whether you should accept it as probably true, reject it as probably false or suspend judgment. People often forget they can suspend judgment but in the face of misinformation and disinformation this is sometimes the best option.

Suppose you saw a Facebook post claiming that drinking alcohol will protect you from a viral disease, you saw a tweet about how gargling with bleach can kill viruses, or you heard President Trump extolling the virtues of hydroxychloroquine as a treatment for COVID. How can you rationally assess these claims if you are not a medical expert? Fortunately, critical thinking can help even those whose medical knowledge is limited to what they saw on Grey's Anatomy.

When a claim is worth assessing, the first step is to see if you can check it against your own observations and see if it fits them. If it does not, then this is a mark against it. If it does, that is a plus. Take the bleach claim as an example. If you look at a bottle of bleach you will see the safety warnings. While it will probably kill viruses, it is dangerous to gargle it. So, it would be best to reject the claim that you should gargle bleach. While your own observations are a good check on claims, they are not infallible and it is wise to critically consider their reliability.

The second step, which usually happens automatically, is to test the claim against your background information. Your background information is all the stuff you have learned over the years. When you get a claim, you match it up against your background information to get a rough assessment of its initial plausibility. This is how likely it seems to be true upon first consideration. The plausibility will be adjusted should you investigate more. As an example, consider the claim about alcohol’s effect on viruses. On the one hand, you probably know alcohol can sterilize things and this raises the plausibility of the claim. But you probably have not heard of people protecting themselves successfully from the flu or cold (which are caused by viruses) by drinking alcohol. Also, you probably have in your background information that the alcohol used to sterilize is poisonous and differs from, for example, whiskey. So, it would be unwise to believe that drinking alcohol is a good way to protect oneself from viruses.  

One major problem here is that everyone's background information is full of false beliefs. I know, from experience, that I have had many false beliefs and infer that I still do. I do not know which ones are false; if I did, I would stop believing them. Because of our fallibility, this method has a serious flaw: we could accept or reject a claim because of a false belief. This is why it is a good idea to assess our beliefs. We can only be rationally confident in our assessment of a claim to the degree that our background information is likely to be correct. The more you know, the better you will be at making such assessments.

While having false beliefs can cause errors, people are also affected by biases and fallacies. Since there is a multitude of both, I will only briefly discuss a few that are relevant during a pandemic. People tend to be biased in favor of their group, be it their religion, political party or sports team. Bias inclines people to believe claims made by members of their group, which fuels the groupthink fallacy: believing a claim is true because you are proud of your group and someone in your group made it. This can also be a version of the appeal to belief fallacy, in which one believes a claim is true because their group believes that it is true. While pandemics cross party lines, the last pandemic was politicized. Because of this, people with strong partisanship often believe what their side says and disbelieve the other side. But believing based on group membership is bad logic and can get you killed. As such, making rational assessments in a pandemic (or anytime) requires fighting biases and considering claims as objectively as possible. This is a hard thing to do, but it can save your life.

As pandemics are terrifying and people want to have hope, it is wise to be on guard against appeal to fear (scare tactics) and wishful thinking. An appeal to fear occurs when a claim is accepted as true because of fear rather than based on evidence or reasons. For example, if someone believes that migrants are criminals because a news channel made them afraid of migrants, they have fallen for scare tactics.

 It needs to be noted that something that is frightening can also serve as evidence. To illustrate, Ebola is scary because it can kill you. So, reasoning that because it is deadly, you should avoid it is good logic. While emotions affect belief, they are logically neutral whether the feeling is fear or hope.

Wishful thinking is a classic fallacy in which a person believes a claim because they want it to be true (or reject one because they want it to be false). When a pandemic is taking place, it is natural for people to engage in wishful thinking—to believe claims because they want them to be true. For example, a person might think that they will not get sick based on wishful thinking, which can be very dangerous to themselves and others. As another example, someone might believe drinking alcohol will protect them from COVID because they want it to be true; but this is not true. The defense against wishful thinking is not to give up all hope, but to avoid taking hope as evidence. This can be hard to do—objectively considering claims during a pandemic can be depressing. But wishful thinking can get you and others killed. In the next essay, I will discuss how to assess experts and alleged experts.

The philosophical debate over the power and purpose of the state is ancient, but COVID-19 provided a new context for the discussion. Responding to a pandemic requires a robust state and the emergency can be used to justify expanding state power. While such an expansion can be warranted, people should resist setting aside their critical faculties in the heat of a crisis.

One concern is that a pandemic (or any crisis) will be used to infringe upon liberty without addressing the crisis. While a crisis often claims reason as an early victim, the expansion of state power to protect us should be carefully considered in terms of both the loss of liberty and its effectiveness in addressing the crisis. An expansion that does not make us safer is unjustified as we would give up liberty in return for nothing. If the expansion of power makes us safer, then we should still weigh the benefits against the cost, although this assessment will vary. For example, someone who is very afraid of a threat will have a different assessment than someone who thinks it is minor or even a hoax. As another example, someone who values one liberty (say the right to keep and bear arms) will see things differently from someone who does not value that liberty.  While a rational assessment will always have a subjective element, a good faith evaluation is critical. Unfortunately, misinformation and disinformation come into play in such assessments. And, of course, emotions will be factors.

While a rational assessment of expanding the power of state is always important, it is even more important during a crisis. This is because people will be heavily influenced by the strong emotions arising from the crisis and politicians will be trying to exploit this opportunity to expand their power. Businesses and individuals will also try to profit from the expansion of power, often at the expense of others. For example, if the state imposed mandatory tracking during a pandemic, tech companies would be eager to exploit this financial opportunity.

It can be objected that during a crisis there is no time for rational, objective assessment and attempting to do so would be foolish and wrong. While a crisis usually requires immediate action, if there is time to expand state power, then there is time to think about whether to do so. I am not advocating dithering about in pointless debate but advocating giving due consideration to the expansion of state power. It would be foolish and wrong to act without thought.

During the last pandemic, the United States suffered because it did not expand the power of the state in a rational manner. Our leaders knew a crisis was on the way, but many of them delayed, hesitated and took small steps rather than acting aggressively. This was a case where speed was important and the failures were not due to a needless expansion of state power, but a failure to exercise power effectively and decisively.

In addition to carefully considering the expansion of the state's power, one must also consider the duration of the expansion. An expansion of power that might be justified in a crisis is likely to be unwarranted and unnecessary when the crisis ends. Since rulers are rarely inclined to give up an expansion of their power, it is essential to place a clearly defined and automatic limit on any expansion of power. As a crisis might last longer than predicted, there also need to be rules for how such expansions can be renewed. Otherwise, these expansions can become permanent to the detriment of the people.

There is also the concern that expansion of power can create bloat, such as new positions and entire departments. Such bloat can waste resources and cause inefficiency, something that is problematic even in normal times. Bureaucracies tend to grow over time rather than shrink, so the expansion must be limited. That said, there is also a risk in reducing the state so much that it will be unable to address a future crisis (which is what Musk and Trump seem to be doing as this is being written). The challenge is finding the right balance between too big and too small: to get it just right. As people often discount the future and engage in wishful thinking, it is challenging to convince people to spend resources to address a crisis that might occur or even one that will occur but at an unknown time. Thus, the expansion and reduction of the state should be carefully considered based on a rational assessment of likely future need. Unfortunately, this approach usually does not win elections.

While expanding state power to respond to a crisis is what people most often think of, a state can also respond by reducing its power. For example, rulers might weaken or suspend regulations or protections for citizens. On the positive side, weakening or even suspending some regulations can be beneficial. For example, during the next pandemic there will be a need to rapidly expand hospitals, so it would be reasonable to suspend or weaken some rules that would impede this. As another example, a need for test kits and treatments can justify weakening or suspending some regulations that would slow things down. Doing so is not without risks but can be justified as one justifies how ambulances drive: going fast and breaking the normal traffic rules creates a danger, but this is supposed to be outweighed by the need for speed.

Just as the expansion of the state must be justified, assessed and kept on a time limit, the same applies to reducing the state. There are obvious concerns that weakening or suspending regulations could do more harm than good. There is also the concern that the unethical will exploit the situation in harmful ways. For example, an unethical pharmaceutical company might exploit weakened regulations to maximize profits. As another example, tech companies might exploit the weakening of privacy laws to gather data they can monetize in harmful ways. Planning for likely crises is what good leaders do; perhaps some will emerge in the next pandemic.

Anyone familiar with sports knows that if team members don't work together, things will go badly. So good athletes set aside internal conflicts when on the field and come together to win. This does not mean that an athlete should accept anything a teammate might do without complaint. For example, a good athlete would not allow a teammate to cheat or a coach to abuse athletes. As another example, a good athlete would not tolerate a teammate committing domestic violence or engaging in dog fighting. While we belong to various competing teams, such as nations, during a pandemic we should all be on the same team since we are playing a deadly game of humans versus pathogens.

Since we should be on the same team during a pandemic, we should set aside our differences and work towards victory. If we fight, bicker and compete against each other, we are hurting the team. If we cooperate, we will help Team Human. As with sports, the more power a person has, the more important it is that they work with the team and set aside less important concerns at least for the duration of the game. While it would be unreasonable to think everyone will be a good team player, there is still the expectation that team members will not try to cause needless conflict or interfere with the effort to win. Unfortunately for the world, there will be people who are bad at being team players, and even some who will actively  cause harm during the next pandemic.

While there are examples outside of the United States, I am an American and have some responsibility for my leaders and fellow citizens. During the last pandemic, Trump was president and can be seen as the head coach of Team America. He should have directed the team to victory, inspired the players and done his job properly.

As noted in other essays, rather than being honest about the facts of the pandemic, Trump and his allies downplayed it and then floated stories about hoaxes. Rather than listen to medical experts, Trump and his fellows spread disinformation and misinformation. Trump and his fellows also delayed our response to the virus, something that cost us dearly. What Trump and his fellows should have done is play for Team Human.  To use an analogy, Trump was like a coach who refused to acknowledge that an opposing team was even on the field. Like a bad coach, Trump insisted his team would not need to practice and prepare, that it would be an easy win. And he lied to the team.

During the pandemic, Trump was consistently Trump. First, he engaged in conflicts with governors. Part of the problem was that Trump saw himself as making business deals rather than being the leader of a country in crisis. Another part of the problem is that Trump apparently cannot avoid petty fights. He takes things very personally, something that has generally not been true of other American presidents. For example, while Bush was criticized over his handling of Katrina, Bush did not withhold help because governors failed to appease his ego. To continue the team analogy, Trump was like a coach who retaliates against the assistant coaches if they do not appease his ego. Criticism, however legitimate, was met with hostility and punished. This actively harmed the team.

It could be objected that the governors were also to blame. They had a responsibility to work around his flaws to get what their state needed. So, if the governor of NY needed to praise Trump to keep him from vindictively denying the state full assistance, then he had to praise Trump. While this makes pragmatic sense, it is morally horrific. In a democratic country it is not the duty of governors or citizens to appease the ego of the president to get them to do their job. It is the duty of the president to do their job, even in the face of criticism. That is how a responsible government is supposed to work. If a leader cannot step up and do the job, they should step aside. Going back to the team analogy, if a narcissistic coach is damaging the team, the solution is not for the assistant coaches to work harder to appease his ego. The solution is to get a new coach.

Second, Trump advanced the conspiracy theory that medical professionals were stealing protective equipment, citing an unnamed distributor who (allegedly) claimed that hospitals were buying too much equipment for their needs. Pushing this conspiracy theory was damaging. Trump liked conspiracy theories and often used them to shift blame from himself. But this does the team no good. Going back to the analogy, this would be like the head coach falsely accusing team trainers of stealing supplies and blaming them for his failure to ensure that the supplies would be available for the big game.

Trump supporters might, at this point, accuse me of hypocrisy: “How can you speak of unity while criticizing Trump?” The first reply is from basic logic: even if I were a hypocrite, this would not refute my claims. To think otherwise would be to fall victim to the ad hominem tu quoque. One version of this fallacy involves concluding that because a person’s actions are inconsistent with their claim, their claim must be wrong. But this is bad logic. For example, suppose that Bill claimed adultery is wrong and then committed adultery. This would show that he was a hypocrite but would not disprove his claim.

The second reply is that my view is that we should have critical unity. This is not uncritical unity in which people are expected to just go along with whatever the leaders say and do. Uncritical unity can be worse than a lack of unity. For example, imagine if everyone simply went along with Trump’s initial claims about the virus and no one ever pushed back against his misinformation and disinformation. Things would have been much worse. As another example, imagine that during the next pandemic a “radical leftist” state government legally seized  the property of the rich to distribute the resources to help people survive. Trump supporters would obviously not respond by saying “well, we must unite behind our leaders” and go along with this.

The critical part of critical unity in a crisis does need to have limits. The criticism should be grounded in truth, based in principle and aimed at addressing real problems. Criticizing Trump's disinformation, misinformation, conspiracy theories, and so on while urging unity is critical unity. I apply the same standards across the political spectrum. So, for example, if a Democratic leader spreads pandemic disinformation or refuses to do their job because they are feuding with a Republican, then I would be critical of them. I will also be supportive across the spectrum when leaders are stepping up and doing the best they can. For example, I disagreed with Ohio Governor Mike DeWine on some things (although we both went to college in Ohio) but I credited him for his serious response to COVID-19.

The lesson here is that we need to have unity in times of crisis (which is obvious), but it would be unwise to have unthinking and uncritical unity (which is equally obvious). While we should work with our leaders, they need to prove worthy of our uniting under their leadership. Trump served as a paradigm example of how a leader can actively divide rather than unify in a time of crisis. If he or a similar person (be they a Democrat or Republican) is president during the next pandemic, we can expect things to go just as badly.

As the COVID-19 pandemic played out, Trump wavered on social distancing. One reason was that billionaires argued for getting back to work during the pandemic. In neutral terms, their argument was that the harm of maintaining social distance would exceed the harm caused by sending people back to work. This is a classic utilitarian approach in that the right action is the one that creates the greatest good (or the least harm). Texas Lieutenant Governor Dan Patrick advanced a similar, but much harsher, argument. On his view, the damage done to the economy by trying to protect people far outweighed the harm done by putting people at risk. He went so far as to claim that he would be willing to die for the economy and seemed willing to sacrifice other seniors as well. While this was not a mainstream view, it got some traction on Fox News. While some billionaires and Patrick acknowledged a downside to their proposals, some claimed the deaths would be good, another plus rather than a minus.

While it is tempting to dismiss the billionaires as greedy sociopaths who would sacrifice others to add to their vast fortunes, they do raise a moral problem: to what extent should some people be sacrificed for the good of others? We allowed, and rightly praised, sacrifices by health care workers, grocery store workers and many others who risked themselves for others. As with the billionaires’ argument, this can be morally justified on utilitarian grounds: the few put themselves at risk for the good of the many. They kept the rest of us alive by taking care of us, ensuring food remained available and so on. It is inarguable that these sacrifices were good, essential and heroic. It is also inarguable that some of them died because they stayed at their posts and did what must be done to keep the rest of us and our civilization alive. For essential goods and services, the risk seemed morally acceptable; especially from the viewpoint of people who were not themselves in danger. But what about the broader economy?

The billionaires were correct that a badly damaged economy would harm workers. As evidence, consider what happened to workers during the depressions and recessions inflicted upon them in the past. Things were already bad for many before the pandemic and the economic damage made things worse. As such, there was certainly a good argument for getting the economy back on track as soon as possible. But did the utilitarian argument support the billionaires’ view?

When engaging in an honest utilitarian calculation of this sort, the three main factors are values, scope and facts. In the case of the facts, one must honestly consider the consequences. The scope determines who counts when assessing the harms and benefits. The values determine how one weighs the facts, what is considered good and what is seen as bad. It is a fact that the social distancing practices did economic damage. Many people were unable to work, many businesses closed or operated at minimum levels and so on. It was also a fact that relaxing social distancing to get people back to work resulted in more infections, which caused more suffering and death. The billionaires and those who disagreed with them agreed on these facts; but they disagreed about matters of scope (who counts) and value (what counts more). The billionaires showed no concern for the well-being of workers and it would be absurd to think they suddenly started to care. As such, the scope of their concern was, at most, their economic class of billionaires. In terms of values, the billionaires value money more than the well-being of workers (otherwise they would provide better pay and benefits). As such, their argument made sense to them: relaxing the restrictions benefited them financially and the harm would, as always, be suffered by other people. Those who think that everyone counts and who value life and health over profits for billionaires saw the matter differently.

It could be objected that while the billionaires were interested in their profits, they were also correct that workers would have been hurt more by the ongoing economic damage. As such, it was right to relax the restrictions because doing so was also better for the workers. There are two main replies to this argument. The first reply is to argue that the billionaires were wrong in their assessment: even in their own economic terms, relaxing the restrictions caused more economic damage than keeping them in place. To use an analogy, imagine a business in a large building that is on fire. One could argue that having the fire trucks pump water into the building would do a lot of damage and that the fire should be allowed to burn out while the employees continue to work. But this can be countered by pointing out that allowing the building to burn would do far more damage in the long term and kill more people. As such, unless the goal was short-term profits and long-term disaster, it would have been best to keep social distancing in place until it was medically unnecessary.

The second reply is that people suffered, as they have for a long time, because of the economic and social structures we have constructed. We had vast resources to mitigate the harm that was done; the problem is that these resources were (and are) hyper-concentrated in the hands of a few, and most people lacked the resources to endure the pandemic on their own (many lacked the resources to endure “normal” life even before the pandemic). The truth is that we could have gotten through the economic harm of the pandemic better if we had been more willing to share the resources and wealth that we all created. It is ironic that the billionaires had a fix on hand for many of the harms they predicted: the economic and social structures could have been radically changed for the good of us all, rather than remaining focused on the good of the elites.

The lesson I hope we learned here is that the sacrifices of those in essential areas, like those working to provide food and health care, are morally justified and laudable. Another lesson is that the sacrifices extracted from the many by the few to expand their wealth are neither justified nor laudable. What is perhaps more horrifying than the billionaires’ view that people should die for the economy is that they believe they can make such statements in public with impunity. I hope that more people will see this for what it is and will work to change the world. Unfortunately, many have once again chosen the side of the billionaires, who now openly rule America as an oligarchy.

As COVID-19 ravaged humanity, xenophobia and racism remained alive and well. For example, an Iranian leader played on fears of America and Israel, advancing, without evidence, the claim that the virus was created specifically to target Iranians. In addition to conspiracy theories that the Chinese engineered the virus (either to reduce their own population or for use against other nations), there was also a worldwide rise in xenophobia and racism against Asians.

One reason for the xenophobia and racism is that people were looking for a visible enemy upon which to take out their fear and anger. Many people felt helpless and afraid during the pandemic, and since humans are inclined to focus on other humans as threats, there was a rise in xenophobia and racism. People are also inclined to seek an intelligence behind dangers, as they did when they attributed natural disasters to gods. Since humans suffer from in-group bias and evil leaders feed xenophobia and racism, it is no surprise that people sought a scapegoat for the crisis: someone must be to blame. Someone must pay.

The United States, with its long tradition of racism against Asians, saw an increase in xenophobia and racism. While most incidents were limited to verbal hostility, racism in the context of disease raises serious concerns. The United States has a history of weaponizing racism in the context of disease, and we should be on guard against this, especially when leaders try to appeal to their base and divert attention from their own failings. An example of an American leader’s effort to use xenophobia and racism is Donald Trump using the term “Chinese virus” in place of “coronavirus” or “COVID-19.”

Trump did have excellent, albeit evil, reasons to use this term. One is that it appealed to parts of his base; this dog whistle sent the message that he was speaking to them. A second reason is that it shifted blame from Trump’s inept and harmful early handling of the pandemic. By presenting it as a Chinese virus, Trump created the appearance that the threat was the responsibility of a foreign power (and people) and attempted to mitigate his own responsibility. Third, it helped create an “us versus them” mentality, with the “them” being other people rather than the virus. Unfortunately, while Trump gained some apparent advantages from this approach, it came with a high cost.

There are those who will defend Trump and take issue with my criticism of him. My first response is that Trump is just an example of the problem of xenophobia and racism. If a Trump defender claims he was not engaged in any racism or xenophobia, then I would point to the United States being blamed by others for the virus. I suspect a Trump supporter would agree that the xenophobia of other countries towards the United States was not helpful and was, in fact, detrimental.

My second response is that Trump engaged in open racism and xenophobia. He used the well-worn xenophobic and racist trope of the foreign disease and the diseased foreigner, which was also used in the racism aimed at the allegedly diseased caravans heading towards the United States from the south. That Trump’s defenders had to engage in relentless efforts to explain away his seemingly racist claims undercuts their own case: one would need to argue that Trump unintentionally yet constantly used racist tropes and language. While not impossible, this strains the boundaries of plausibility.

Another piece of evidence is that Trump used his infamous Sharpie to cross out “Corona” in his speech and replace it with “Chinese,” showing that his use was intentional rather than a slip. His defenders engaged in verbal gymnastics to explain this. One strategy was to argue that Trump used the phrase “Chinese virus” just as “Spanish flu” was used. While this approach has some appeal, using the phrase “Spanish flu” is also problematic: labeling a disease with a specific country or ethnicity tends to lead to stigma and racism. As such, using the “Spanish flu” defense is like defending the use of “wetback” by saying that people also used “wop.”

A second strategy is to argue that Trump was just referring to where the virus came from and, for bonus points, one can point out that it was originally called the “Wuhan virus.” One can say that it cannot have been racist or xenophobic for Trump to use “Chinese virus” because the Chinese themselves used “Wuhan virus.” The easy and obvious reply is that the term “Wuhan virus” was also seen as problematic, for the same reasons that “Spanish flu” and “Chinese virus” are problematic. To use an analogy, this would be like a Chinese leader talking about the “Caucasian flu” and saying that this was just fine because, for example, Americans had first called the disease the “Connecticut flu” when it appeared in Connecticut. Since Trump decided to refer to it as the “Chinese virus” and there are no good reasons to use that term, the best explanation is the obvious one: Trump used a xenophobic and racist dog whistle, cashing in on the well-worn trope of the diseased foreigner and the foreign disease. For those who would try to present this in a positive light, one must ask: why do this? And why defend him against the umpteenth reasonable charge of racism and xenophobia?

As noted above, there was already racism and xenophobia against Asians (and Asian Americans), and Trump’s insistence on calling it the “Chinese virus” likely contributed to the uptick in such incidents. Using this sort of label also put the United States at odds with other countries, and other countries blaming us had the same effect. Having Americans turn against other Americans is harmful, especially during a crisis in which community unity is an important part of our survival toolkit. It is also harmful to create conflict between nations when cooperation would improve our response to pandemics. A pandemic is a war between humans and a disease; creating conflict between humans might serve the selfish goals of some leaders, but it harms humanity. As such, a key lesson from the COVID-19 pandemic is that using racism and xenophobia will only make things worse. As it always does.