While most Americans initially supported the lockdown, a fraction of the population engaged in (often armed) protests. Although the topic of protests is primarily a matter for political philosophy and ethics, critical thinking applies here as well. Given the political success of the anti-health movement in America, we can expect protests against efforts to mitigate the next pandemic. Assuming that any efforts are made.

While the protests were minuscule relative to the population of the country, they attracted media attention: they made the national news regularly, and the story was repeated and amplified. On the one hand, this makes sense: armed protests against efforts to protect Americans from the virus were news. On the other hand, the media coverage was disproportionate to the size and importance of the protests. The “mainstream” media is often attacked as having a liberal bias, and while that can be debated, the media certainly does have a bias for stories that attract attention. Public and private news services need stories that draw an audience. Protests, especially by people who are armed, draw an audience. It can also be argued that some news services have a political agenda that was served by covering such stories.

While it can be argued that such stories are worth covering in the news, disproportionate coverage can lead people to commit the Spotlight Fallacy. This fallacy is committed when a person uncritically assumes that the degree of media coverage given to something is proportional to how often it occurs or its importance. It is also committed when it is uncritically assumed that the media coverage of a group is representative of the size or importance of the group.

 

Form 1

Premise 1: X receives extensive coverage in the media.

Conclusion: X occurs in a frequency or is important proportional to its coverage.

 

Form 2

Premise 1: People of type P or Group G receive extensive coverage in the media.

Conclusion: The size or importance of P or G in the general population is proportional to that coverage.

 

This line of reasoning is fallacious since the fact that someone or something attracts the most attention or coverage in the media does not mean that it is representative or that it is frequent or important.

It is like the fallacies Hasty Generalization, Biased Sample and Misleading Vividness because the error being made involves generalizing about a population based on an inadequate or flawed sample.

In the case of the lockdown protests, the protests were limited in occurrence and size, but the extent of media coverage conveyed the opposite. The defense against the Spotlight Fallacy is to look at the relevant statistics. As noted above, while the lockdown protests got a great deal of coverage, they were small events that were not widespread. This is not to say that they had no importance. As such, we should look at such protests not through the magnifying glass of the media but through the corrective lenses of statistics. I now turn to a line of attack on the protesters that risks the ad hominem fallacy.

Some critics of the protesters pointed out that the protesters were being manipulated by an astroturfing campaign. Astroturfing is a technique in which the true sponsors of a message or organization create the appearance that the message or organization is the result of grassroots activism. In the case of the lockdown protests, support and organization were being provided by individuals and groups who backed Trump’s re-election and who were more concerned with a return to making money than with the safety of the American people. While such astroturfing is a matter of concern, to reject the claims of the protesters because they are “protesting on AstroTurf” rather than standing on true grassroots would be to commit either an ad hominem or genetic fallacy.

An ad hominem fallacy occurs when a person’s claim is rejected because of some alleged irrelevant defect about the person. In very general terms, the fallacy has this form:

 

Premise 1: Person A makes claim C.

Premise 2: An irrelevant attack is made on A.

Conclusion: C is false.

 

This is a fallacy because attacking a person does not disprove the claim they have made. In the case of a lockdown protester, rejecting their claims because they might be manipulated by astroturfing would be a fallacy. As would rejecting their claims because of something one does not like about them.

If the claims made by the protesters as a group were rejected because of the astroturfing (or other irrelevant reasons) then the genetic fallacy would have been committed. A Genetic Fallacy is bad “reasoning” in which a perceived defect in the origin of a claim or thing is taken to be evidence that discredits the claim or thing itself. Whereas the ad hominem fallacy is literally against the person, the genetic fallacy applies to groups. The group form looks like this:

 

Premise 1: Group A makes claim C.

Premise 2: Group A has some alleged defect.

Conclusion: C is false.

 

While it is important to avoid committing fallacies against the protesters, it is also important to avoid committing fallacies in their favor. Both the ad hominem and genetic fallacy can obviously be committed against those who are critical of the protesters. For example, if someone dismisses the claim that the protesters are putting themselves and others at needless risk by asserting that the critic “hates Trump and freedom”, then they would be committing an ad hominem. The same will apply to future protests about responses to pandemics. Again, assuming there will be a response.

To many Americans the protests seemed not only odd, but dangerously crazy. This leads to the obvious question of why they occurred. While some might be tempted to insult and attack the protesters under the guise of analysis, I will focus on a neutral explanation that is relevant to critical thinking. This analysis should also be useful for thinking about the next pandemic.

One obvious reason for the protests is that the lockdown came with an extremely high price: people had good reason to dislike it, and this could have motivated them to protest. But there is more to it than that. The protests were more than people expressing their concerns and worries about the lockdown. They were political statements thoroughly entangled with other factors, including support for Trump, anti-vaccination views, anti-abortion views, Second Amendment rights, and even some white nationalism. This is not to claim that every protester endorsed all the views expressed at the protests. Attending a protest about one thing does not entail that a person supports whatever is said by other protesters. Because people try to exploit protests for their own purposes, it is important to distinguish the views held by various protesters to avoid assigning guilt by association. That said, the protests were an expression of a polarized political view, and it struck many as odd that people would be protesting basic pandemic precautions.

One driving force behind this was what I have been calling the Two Sides Problem. While there are many manifestations of this problem, the idea is that when there are two polarized sides, this provides fuel and accelerant to rhetoric and fallacies—thus making them more likely to occur. Another aspect of having two sides is that it is much easier to exploit and manipulate people by appealing to their membership in one group and their opposition to another.

In the case of the protests, there was a weaponization of public health. Those who recommended the lockdown were experts, and there is an anti-expert bias in the United States. The weaponization of the crisis to help the political right followed the usual tactics: disinformation about the crisis, claims of hoaxes, scapegoating, anti-expert rhetoric, conspiracy theories and such. Part of what drove this was in-group bias: the cognitive bias that inclines people to assign positive qualities to their own group while assigning negative qualities to others. This also applies to accepting or rejecting claims.

This weaponization was not new or unique to the COVID-19 pandemic. American politics has been marked by politicizing and weaponizing so that one side can claim a short-term advantage at the cost of long-term harm. Critical thinking requires us to be aware of this and to be honest about the cost of allowing this to be a standard tool of politics.

While there were many aspects to the lockdown protests, one of the core justifications was that the lockdown was a violation of Constitutional rights. The constitutional aspect is a matter of law, and I leave that to experts in law to debate. There is also the ethical aspect, whether the lockdown was morally acceptable, and this issue can be cast in terms of moral rights. This discussion would take us far afield into the realm of moral philosophy, but I will close with an analogy that might be worth considering.

While the protesters were against the lockdown in general, opposition to wearing masks was the focus of the complaints. While there was rational debate about the efficacy of masks, the moral argument advanced was that the state does not have the right to compel people to wear masks. It can also be presented in terms of people having rights that the state must respect. One possibility is that people have the right to decide what parts of the body they wish to cover. If so, the obvious analogical argument is that if this right entitles people to go without masks, it also entitles people to go without any clothes they choose not to wear. If imposing masks is an act of oppression, then so is imposing clothing in general.

Another possible right is the right to endanger others or at least freely expose other people to bodily “ejections” they do not wish to encounter. If there is such a right, then it could be argued that people have a right to fire their guns and drive as they wish, even if doing so is likely to harm or kill others. If there is a right to expose other people to physical bodily ejections that they do not want to be exposed to, then this would entail that people have the right to spit and urinate on other people. This all seems absurd.

As a practical matter, people are incredibly inconsistent when it comes to rights and restrictions, so I would expect some people to simply dismiss these analogies because they did not want to wear masks but probably do not want people running around naked. But if masks were an act of oppression, so are clothes.

When the next pandemic arrives, we can expect similar protests against efforts to combat it. But this assumes that efforts will be taken, which will depend on who is running America during the next pandemic.

One stock argument against social distancing and other restrictive responses to the COVID-19 pandemic was to conclude that these measures should not have been taken because we do not take similar approaches to comparable causes of death. In the next pandemic, we can expect the same reasoning, which can be formalized as follows:

 

Premise 1: Another cause of death kills as many or more people.

Premise 2: We do not impose X measures to address this cause of death.

Conclusion: We should not impose X measures to address the pandemic.

 

Those making the argument often used the flu as an analogous cause of death, but there were also comparisons to automobile accidents, suicides, heart disease, drowning in pools and so on. While the specific arguments were presented in various ways, they were all arguments from analogy. Or at least attempts.

Informally, an argument by analogy is an argument in which it is concluded that because two things are alike in certain ways, they are alike in some other way. More formally, the argument looks like this:

 

            Premise 1: X and Y have properties P, Q, R.

            Premise 2: X has property Z.

            Conclusion: Y has property Z.

 

X and Y are variables that stand for whatever is being compared, such as causes of death. P, Q, and R are also variables, but they stand for properties or features that X and Y are known to possess, such as killing people. Z is also a variable, and it stands for the property or feature that X is known to possess, such as not being addressed with social distancing. The use of P, Q, and R is just for the sake of the illustration; the things being compared might have many more properties in common.

An argument by analogy is an inductive argument. This means that it is supposed to be such that if all the premises are true, then the conclusion is probably true. Like other inductive arguments, the argument by analogy is assessed by applying standards to determine the quality of the logic. As with all arguments, there is also the question of whether the premises are true. The strength of an analogical argument depends on three factors, and to the extent that an analogical argument meets these standards it is a strong argument.

First, the more properties X and Y have in common, the better the argument. This standard is based on the commonsense notion that the more two things are alike in other ways, the more likely it is that they will be alike in some other way. It should be noted that even if the two things are alike in many respects, they might not be alike in terms of property Z. This is one reason why analogical arguments are inductive.

Second, the more relevant the shared properties are to property Z, the stronger the argument. A specific property, for example P, is relevant to property Z if the presence or absence of P affects the likelihood that Z will be present. It should be kept in mind that it is possible for X and Y to share relevant properties while Y does not actually have property Z. Again, this is part of the reason why analogical arguments are inductive.

Third, it must be determined whether X and Y have relevant dissimilarities. The more dissimilarities and the more relevant they are, the weaker the argument.

These can be simplified to a basic standard: the more alike the two things are in relevant ways, the stronger the argument; the more the two things differ in relevant ways, the weaker the argument. So, using these standards, let us consider the cause of death analogy.

One thing that all causes of death do have in common is that they are causes of death. This is true of everything from swimming pools to the flu to COVID-19. Obviously, different causes of death will be more or less like COVID-19 or other pandemic-caused deaths, and a full consideration would require grinding through each argument to see if it holds up. In the interest of time, I will consider two main categories of causes of death that should encompass most (if not all) causes.

One category of causes of death consists of those that cannot be addressed by social distancing and the other science-based approaches to COVID-19 and other likely pandemic pathogens. These include such things as suicide, traffic fatalities, and swimming pool deaths. We obviously do not use social distancing to address these causes of death because it would not work. As such, arguing that because we do not use social distancing to combat traffic deaths we should not use it to combat COVID-19 (or another pathogen) would be a terrible analogy. To use an analogy, this would be like arguing that since we do not use air bags and seat belts to address pandemics, we should not use them to reduce traffic fatalities. This would be bad reasoning.

To be fair, someone could argue that what matters is not the specific responses but the degree of the response. That is, since we do not have a massive and restrictive response to traffic fatalities, we should not have had a massive response to COVID-19 or have a similar response to the next pandemic. While it is rational to make a response proportional to the threat, the obvious reply to this argument is that we do have a massive and restrictive response to traffic fatalities. Vehicles must meet safety standards, drivers must be licensed, there are books of traffic laws, traffic is strictly regulated with signs, lights and road markings, and the police patrol the roads regularly. Even swimming pools are heavily regulated in the United States. For example, fences and self-locking gates are mandatory in most places. Somewhat ironically, drawing an analogy to things like traffic fatalities supports massive and restrictive means of addressing a pandemic.

Horribly, the best (worst) way to argue against a strong response to a pandemic would be to find a cause of death on the scale of the pandemic that we as a nation do little about and then argue that the same neglect should be applied to the pandemic. Poverty and lack of health care are two examples.  This analogy would certainly appeal to evil people.

The second category of deaths consists of causes that could be addressed using the same methods used to address a pandemic. The common flu serves as an excellent example here. The same methods that work against COVID-19 and many other pathogens also work against the flu. As many argued, even in a bad flu year life remains normal: no social distancing, no closing of businesses, no mandatory masks. While this analogy seems appealing, it falls apart quickly because of the relevant differences between COVID-19 and the common flu. The same would also apply to the next dangerous pandemic.

When people started advancing death analogies against pandemic responses, the best available estimate was that COVID-19 killed 3-4% of those infected, though there was considerable variation based on such factors as age, access to health care, and underlying health conditions. In contrast, the flu has a mortality rate well below 0.1%. The flu kills too many people, but it was far less dangerous than COVID-19. So, it makes sense to have a more restrictive and extensive response to something that is more dangerous. We also have (or had) measures in place against the flu: people are urged to take precautions, and flu shots are recommended. As this is being written, people feel that the threat of COVID is like that of the seasonal flu and are acting accordingly. Likewise, if a dangerous strain of flu emerges again, then it would make sense to step up our restrictions.

In closing, this discussion does lead to a matter of ethics and public policy. As those who make the death analogies note, we collectively tolerate a certain number of preventable deaths. We must seriously address the issue of determining the acceptable number of deaths from the next pandemic and match our response to that judgment. And we should not forget that we might be among those tolerated deaths. We should also consider why we do tolerate so many other deaths.

Stay safe and I will see you in the future.

During the COVID-19 pandemic some public figures and social media users attempted to downplay the danger of COVID-19 by comparing the number of deaths caused by the virus to other causes of death. For example, one common comparison noted that 21,297 people died from COVID-19 from 1/2/2020 to 3/25/2020, but that 113,000 people died from the flu during the same period.

Downplaying is a rhetorical technique used to make something seem less important or serious. These comparisons seemed aimed at dismissing claims made by experts that the virus was a serious threat. The comparisons were also often used to persuade people that the response was excessive and unnecessary. While comparing causes of death is useful when judging how to use resources and accurately assess threats, the death comparisons must be done with a critical eye. That was true in the last pandemic, and it will be true in the next one.

Before even considering the comparison between pandemic deaths and other causes of death, it is important to determine the accuracy of the numbers. If the numbers are exaggerated, downplayed or otherwise inaccurate, then this undermines the comparison. Even if the numbers are accurate, the comparison must be critically assessed. The methods I will discuss are those I use in my Critical Inquiry class and are drawn from Moore and Parker’s Critical Thinking text. When an important comparison is made, you should ask four questions:

 

  1. Is important information missing?
  2. Is the same standard of comparison being used? Are the same reporting and recording practices being used?
  3. Are the items comparable?
  4. Is the comparison expressed as an average?

 

While question 4 does not apply, the other three do. One important piece of missing information in such comparisons is that while the other causes of death tend to be stable over time, the deaths caused by COVID-19 grew exponentially. On March 1 the WHO reported 53 deaths that day. 862 deaths were reported on March 16. On March 30 there were 3,215 new deaths. On April 8 the United States alone had 1,997 deaths that day, and 14,390 people were believed to have died in the United States since the start of the pandemic. The death toll kept rising. In contrast, while seasonal flu deaths fluctuate, they do not grow in this exponential manner. As such, the comparison is flawed. We can expect similar comparisons to be made in the next pandemic and should be on guard against erroneous comparisons of this sort.
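To see why this missing information matters, here is a minimal sketch (in Python, with made-up illustrative numbers rather than real death data) of how an exponentially growing cause of death quickly overtakes a stable one, even when it starts far behind:

```python
# Hypothetical numbers for illustration only.
flu_deaths_per_week = 1000   # a stable cause of death
pandemic_deaths = 100        # starting weekly toll of the new pathogen
# Assume the weekly pandemic toll doubles each week (exponential growth).

for week in range(1, 9):
    pandemic_deaths *= 2
    note = "  <-- now exceeds the stable cause" if pandemic_deaths > flu_deaths_per_week else ""
    print(f"Week {week}: flu {flu_deaths_per_week}, pandemic {pandemic_deaths}{note}")
```

A comparison made in the first week would make the new pathogen look minor; a few doublings later, the same comparison points the other way.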

Another flaw in the comparison is that the flu and many other causes of death are well established. The COVID-19 virus was still spreading when the comparison was made. It would be like comparing a fire that just started with a fire that has been steadily burning and confidently claiming that the new fire would not be as bad as the old fire.

The death numbers for other causes are also most likely estimates based on past yearly death tolls. What those numbers reflect is the number of people who probably died of those causes during a few months, based on data from previous years.

While the death toll from COVID-19 was high, COVID-19 deaths were also likely to be underreported. Since testing was limited for quite some time, some people who died from the virus did not have their cause of death properly reported. Even in the early days of the death comparison, the deaths caused by COVID-19 were most likely higher than reported. This leads to two problems with the comparison. One is that if the other causes of death are accurately reported and COVID-19 deaths were not, then the comparison is flawed. The second is that COVID-19 deaths might have been recorded as being caused by something else (such as the flu/pneumonia) and this would also make the comparison less accurate by “increasing” the number of deaths by other causes. 

While the comparison to other causes of death might have seemed persuasive early in the pandemic, the exponential increase in deaths is likely to have robbed the comparison of its persuasive power. In mid-April, COVID-19 was killing more Americans per week than automobile accidents, cancer, heart disease and the flu/pneumonia did in 2018. Somewhat ironically, the comparison of COVID-19 deaths to other causes ended up showing the reverse of what it was originally intended to show.

We can expect similar death comparisons in the early days of the next pandemic. While these comparisons can have merit, they are often used as rhetorical devices to downplay the seriousness of a pandemic. As such, we should be on guard against this tactic during the next pandemic.

In the last essay I looked at the inductive generalization and its usefulness in reasoning about pandemics and ended by mentioning that there are various fallacies that can occur when generalizing. The most common are hasty generalization, appeal to anecdotal evidence, and biased generalization. I will look at each of them in terms of pandemics.

A hasty generalization occurs when a person draws a conclusion about a population based on a sample that is not large enough to adequately support the conclusion. It has the following form:

 

Premise 1: Sample S (which is too small) is taken from population P.

Premise 2: In Sample S X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in Population P.

 

In the previous essay I presented a rough guide to sample size, margin of error and confidence level. In that context, this fallacy occurs when the sample is not large enough to warrant confidence in the conclusion. In the case of a pandemic, one important generalization involves sorting out the lethality of the pathogen. The math for this is easy, but the challenge is getting the right information.

During the COVID-19 pandemic, there were large samples of infected people.  As such, inferences from these large samples to the lethality of the virus would not be a hasty generalization. But avoiding this fallacy does not mean that the generalization is a good one as there are other things that can go wrong.

There were also inferences drawn from relatively small samples, such as generalizations from treatments undergoing testing. For example, the initial samples of people treated with hydroxychloroquine for COVID-19 were small, so generalizing from those samples risked committing a hasty generalization. This isn’t to say that small samples are always useless but due care must be taken when generalizing from them.

As a practical guide, when you hear claims about a pandemic (or anything) based on generalizations, you need to consider whether the conclusion is supported by an adequately sized sample. Having a sample that is too small does not entail that the conclusion is false (that inference would itself be fallacious), but it does mean the conclusion is not adequately supported. While a small sample provides weak logical support, an anecdote can have considerable psychological force, which leads to a fallacy similar to the hasty generalization.

An appeal to anecdotal evidence is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or very few cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. There are two forms for this fallacy:

 

Form One

Premise 1:  Anecdote A is told about a member M (or small number of members) of Population P.

Premise 2: Anecdote A says that M is (or is not) C.

Conclusion: Therefore, C is (or is not) true of Population P.

 

Form Two

Premise 1:  Good statistical evidence exists for general claim C.

Premise 2: Anecdote A is an exception to or goes against general claim C.

Conclusion: C is false.

 

This fallacy is like hasty generalization in that an inference is drawn from a sample too small to adequately support the conclusion. One difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild it can be difficult to distinguish a hasty generalization from anecdotal evidence. Fortunately, what is most important is recognizing that a fallacy is occurring. A much clearer difference is that the paradigm form of anecdotal evidence involves rejecting statistical data in favor of the anecdote.

People often fall victim to this fallacy because anecdotes usually have much more psychological force than statistical data. Wanting an anecdote to be true also fuels this fallacy. During the COVID-19 pandemic, there were many anecdotes about alleged means of preventing or curing the disease, and the same will happen during the next pandemic. Even if the anecdotes are not lies, they do not provide an adequate basis for drawing conclusions about the general population. This is because the sample is not large enough to warrant the conclusion. As a concrete example, there were some early positive anecdotes about hydroxychloroquine. Wishful thinking and Trump’s claims then caused some people to accept those anecdotes as adequate evidence, but this was bad reasoning.

Appeals to anecdotal evidence often occur in the context of causal reasoning, such as the case of hydroxychloroquine, and this adds additional complexities.

As with any fallacy, it does not follow that the conclusion of an appeal to anecdotal evidence is false. The error is accepting the conclusion based on inadequate evidence, not in making a false claim. It is also worth noting that anecdotal evidence can be useful for possible additional investigation but is not enough to prove a general claim.

As noted earlier, there were large samples of infected people that allowed generalizations to be drawn without committing the fallacy of hasty generalization. But even large samples can be problematic. This is because samples need to be both large enough and representative enough. This takes us to the fallacy of biased generalization.

This fallacy is committed when a person draws a conclusion about a population based on a sample that is biased to a degree or in a way that prevents it from adequately supporting the conclusion.

 

Premise 1: Sample S (which is too biased) is taken from population P.

Premise 2: In Sample S X% of the observed A’s are B’s.

Conclusion: X% of all A’s are B’s in Population P.

 

The problem with a biased sample is that it does not represent the population adequately and so does not adequately support the conclusion. This is because a biased sample can differ from the population in relevant ways that affect the percentage of A’s that are B’s.

In the case of COVID-19 there was a serious problem with biased samples, although the situation improved over time. I will focus on an inductive generalization about the lethality of the virus.

The math for calculating lethality is easy, but the main challenge is sorting out how many people are infected. Since the start of the pandemic, the United States had a self-inflicted shortage of test kits. Because of this, many of the available tests were being used on people showing symptoms or who had been exposed to those known to be infected. This sample was large but biased: it contained a disproportionate number of people who were already showing symptoms and missed many people who were infected but asymptomatic.

If we face a similar situation in the next pandemic, the sample will probably show a lethality rate higher than the real lethality rate. To use a simple fictional example: imagine a population of 1,000 people, 200 of whom are infected with a virus. Of the 200 people infected, 20 show symptoms and only they are tested. Of the 20 people tested, 2 die. This sample would show a mortality rate of 10%. But the actual mortality rate would be 2 in 200, which is 1%. This would still be bad, but not as bad as the biased sample would indicate. This shows one of the many reasons why broad testing is important: it is critical to establish an accurate lethality rate. An accurate lethality rate is essential to making rational decisions about our response to any pandemic.
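To make the arithmetic explicit, here is a minimal sketch in Python using only the numbers from the fictional example above; it contrasts the lethality rate suggested by the biased sample with the true rate:

```python
# Fictional numbers from the example above.
infected = 200   # true number of infections (unknown in a real pandemic)
tested = 20      # only the symptomatic are tested: a biased sample
deaths = 2       # deaths among those tested

# Lethality estimated from the biased sample of tested people.
sample_lethality = deaths / tested    # 2 / 20 = 10%

# True lethality across all infections, knowable here only
# because the example is made up.
true_lethality = deaths / infected    # 2 / 200 = 1%

print(f"Biased-sample lethality: {sample_lethality:.0%}")  # 10%
print(f"True lethality: {true_lethality:.0%}")             # 1%
```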

As a final point, it is also important to remember that lethality varies between groups in the overall population; we know this from the death data. But to determine the lethality for each group, the samples used for the calculation must be representative of that group. While overall lethality is important, rational decision making also requires knowing the lethality for various groups. For example, pathogens tend to be more lethal for seniors, so they would need more protection in the next pandemic.

As always, stay safe and I will see you in the future.

During a pandemic, like that of COVID-19, it might be wondered how the number of cases and the lethality of a disease are determined. Some might be concerned or skeptical because the numbers often change over time and usually vary across countries, age groups, ethnicities and economic classes. This essay provides a basic overview of a core method of making inferences from samples to entire populations, what philosophers call the inductive generalization.

An inductive generalization is an inductive argument. In philosophy, an argument consists of premises and one conclusion. The premises are the reasons or evidence being offered to support the conclusion, which is the claim being argued for.  Philosophers often divide arguments into inductive and deductive. In philosophy a deductive argument is such that the premises provide (or are supposed to provide) complete support for the conclusion. An inductive argument is an argument such that the premises provide (or are supposed to provide) some degree of support (but less than complete support) for the conclusion.  If the premises of an inductive argument support the conclusion adequately (or better) it is a strong argument. It is such that if the premises are true, the conclusion is likely to be true. If a strong inductive argument has all true premises, it is sometimes referred to as being cogent.

One feature of inductive logic is that a strong inductive argument can have a false conclusion even when all the premises are true. This is because of what is known as the inductive leap: the conclusion always goes beyond the premises. This can also be put in terms of drawing a conclusion from what has been observed to what has not been observed. The now dead David Hume argued back in the 1700s that this meant we could never be sure about inductive reasoning and later philosophers called this the problem of induction. In practical terms, this means that even if we use perfect inductive reasoning using premises that are certain, our conclusion can still be false. But induction is often the only option, and we use it because we must. So, when the initial numbers about COVID-19 turned out to be wrong, this is exactly what we should expect. The same must be expected in the next pandemic.

What, then, is an inductive generalization? Roughly put, it is an argument in which a conclusion about an entire population is based on evidence from a sample of observed members of that population. The formal version looks like this:

                                    Premise 1: P% of observed Xs are Ys.

                                    Conclusion: P% of all Xs are Ys.

 

The observed Xs would be the sample and all the Xs would be the target population. As an example, if someone wanted to know the mortality rate during a pandemic for males over sixty, the target population would be all males over sixty.

While the argument is simple, sorting out when a generalization is strong can be challenging. Without getting into the statistics and methods for doing rigorous generalizations, I will go over the basic method of assessment, so that you can make some sense of such matters when experts discuss them during the next pandemic.

There will be various factors whose presence or absence in the sample can affect the presence or absence of the property the argument is concerned with, so a representative sample will have those factors in proportion to the target population. For example, if we wanted to determine the infection rate for all people, then we would need to try to ensure that our sample included all factors affecting the infection rate and our sample would need to mirror our target population in terms of age, ethnicity, base health, and all other relevant features. Sorting out what factors are relevant can be challenging, especially as a pandemic is unfolding. To the degree that the sample mirrors the target population properly, it would be representative.

A sample is biased relative to a factor to the extent that the factor is not present in the sample in the same proportion as in the population. This sort of sample bias was a problem when trying to generalize about COVID-19. One example of this was trying to draw a conclusion about the lethality of COVID-19. While the math to do this is easy (a simple calculation of the percentage of the infected who die from it), getting the numbers right is hard because we needed to know how many people were infected and how many died from it.

Experts tried to determine the number of people infected by testing and modeling, both of which rely on inductive reasoning. In the United States, most of the testing was of people showing symptoms, and this created a biased sample; to get an unbiased sample, even those without symptoms should be tested. There was also the practical matter of the accuracy of the tests and the determination of the cause of death. This will be true in the next pandemic as well.

To use a concrete but made-up example: if 5% of those who tested positive for COVID-19 ended up dying, the generalization from that sample to the whole population would only be as strong as the representativeness of the sample. If only sick people were tested, the sample would not be representative and the conclusion about the lethality of the virus would (probably) be wrong.

There is also the challenge of sorting out the effect of the virus on different populations. While there is an overall infection rate and lethality rate for the whole population, there are different infection rates and lethality rates for different groups within the human population. As an example, the elderly were more likely to die of COVID-19 than younger people.

In addition to representativeness, sample size is important; the larger, the better. This brings us to two more concepts: margin of error and confidence level. A margin of error is a range of percentage points within which the conclusion of an inductive generalization falls; this number is usually presented as plus or minus. The margin of error depends on the sample size and the confidence level of the argument. The confidence level is typically presented as a number and represents the percentage of arguments like the one in question that have a true conclusion.

When generalizing about large (10,000+) populations, a sample will need to have 1,000+ individuals to be representative (assuming the sample is taken properly). This table, from Moore & Parker’s Critical Thinking text, shows the connection between sample size and error margin (confidence level of 95%):

 

Sample Size     Error Margin (%)     Corresponding Range (percentage points)
10              +/- 30               60
25              +/- 22               44
50              +/- 14               28
100             +/- 10               20
250             +/- 6                12
500             +/- 4                8
1,000           +/- 3                6
1,500           +/- 2                4

 

The practical takeaway is that sample size is important: a small sample will have a large margin of error that can make it useless. For example, suppose that a group of 50 COVID-19 patients received hydroxychloroquine tablets and 10 of them recovered fully. Laying aside all causal reasoning (which would be a huge mistake), the best we could say is that 20% of patients treated with hydroxychloroquine, plus or minus 14 percentage points, will recover fully. This is just a simple generalization; a controlled experiment or study would be needed to properly assess a causal claim.
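As a rough illustration of where such error margins come from, here is a minimal Python sketch using the common 1/sqrt(n) rule of thumb for the margin of error at a 95% confidence level; the function name is mine, and the rule only approximately reproduces the table's figures:

```python
import math

def margin_of_error(sample_size: int) -> float:
    """Rough 95%-confidence margin of error for a proportion,
    using the worst-case rule of thumb: 1 / sqrt(n)."""
    return 1 / math.sqrt(sample_size)

# The hydroxychloroquine example above: 10 of 50 patients recovered fully.
recovered, n = 10, 50
print(f"Estimate: {recovered / n:.0%} +/- {margin_of_error(n):.0%}")
# Estimate: 20% +/- 14%

# Approximately reproduces the table's error margins:
for size in (10, 25, 50, 100, 250, 500, 1000, 1500):
    print(f"n = {size}: +/- {margin_of_error(size):.0%}")
```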

There are various fallacies (mistakes in reasoning) that can occur with a generalization. I will discuss those in the next essay. Stay safe and I will see you in the future.

During a White House press briefing President Trump expressed interest in injecting disinfectants as a treatment for COVID-19. In response, medical experts and the manufacturers of Lysol warned the public against attempting this. Trump’s defenders adopted two main strategies. The first was to interpret Trump’s statements in a favorable way; the second was to assert they were “fact checking” the claim that Trump told people to inject disinfectant. Trump eventually claimed that he was being sarcastic to see what the reporters would do. From the standpoint of critical thinking, there is a lot going on here with rhetorical devices and fallacies. I will discuss how critical thinking can sort through this sort of situation, because the next pandemic is likely to see a repeat performance.

When interpreting or reconstructing claims and arguments, philosophers are supposed to apply the principle of charity. Following this principle requires interpreting claims in the best possible light and reconstructing arguments to make them as strong as possible. There are three reasons to follow the principle. The first is that doing so is ethical. The second is that doing so avoids committing the straw person fallacy, which I will talk more about in a bit. The third is that if I am going to criticize a person’s claims or arguments, criticism of the best and strongest versions also takes care of the lesser versions.

The principle of charity must be tempered by the principle of plausibility: claims must be interpreted, and arguments reconstructed in a way that matches what is known about the source and the context. For example, reading quantum physics into the works of our good dead friend Plato would violate this principle.

Getting back to injecting disinfectants, it is important to accurately present Trump’s statements in context and to avoid making a straw person. The Straw Person fallacy is committed when one ignores a person’s actual claim or argument and substitutes a distorted, exaggerated or misrepresented version of it. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A makes claim or argument X.

Premise 2: Person B presents Y (which is a distorted version of X).

Premise 3: Person B attacks Y.

Conclusion: Therefore, X is false/incorrect/flawed.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a claim or argument is not a criticism of the original. This fallacy often uses hyperbole, a rhetorical device in which one makes an exaggerated claim. A straw person can be effective because people often do not know the real claim or argument being attacked. The fallacy is especially effective when the straw person matches the audience’s biases or stereotypes as they will feel that the distorted version is the real version.

While this fallacy is usually aimed at an audience, it can be self-inflicted: a person can unwittingly make a Straw Person out of a claim or argument. This can be done entirely in error (perhaps due to ignorance) or due to the influence of prejudices and biases.

The defense against a Straw Person, self-inflicted or not, is to get a person’s claim or argument right and to apply the principle of charity and the principle of plausibility.

Some of Trump’s defenders claimed Trump was the victim of a straw person attack; they set off on a journey of “fact checking” and asserted that Trump did not tell people to drink bleach. Somewhat ironically, they might have engaged in Straw Person attacks when attempting to defend Trump from alleged Straw Person attacks. Warning people to not drink bleach or inject disinfectants is not the same thing as claiming that Trump told people to do these things.

The truth is that Trump did not tell people to drink bleach. His exact words, from the official White House transcript, are: “And then I see the disinfectant, where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning. Because you see it gets in the lungs and it does a tremendous number on the lungs. So it would be interesting to check that. So, that, you’re going to have to use medical doctors with. But it sounds — it sounds interesting to me.”

Trump did not tell people to drink bleach or inject disinfectant. As such, the “Clorox Chewables” and similar memes were a form of visual Straw Person attack against Trump. But they can also be seen as the rhetorical device of mockery. To avoid committing the Straw Person fallacy, we need to use Trump’s actual statements, and attacking him for advocating drinking bleach would be an error.

While Trump did not directly tell people to inject disinfectants, he can be seen as engaging in a form of innuendo, a rhetorical technique in which something is suggested or implied without directly saying it. Anyone who understands the basics of how language and influence work would get that Trump’s remarks would cause some people to believe that this was something worth considering. There is evidence for this in the form of calls to New York City poison control centers and similar calls in Maryland and other states. A feature of innuendo is that it allows a person to deny they said what they implied or suggested; after all, they did not directly say it. Holding someone accountable requires having adequate evidence that they intended what their words implied or suggested. Doing this can be challenging since it requires insights into their character and motives. There is also a moral issue here about the responsibility of influential people to take care in what they say, something that goes beyond critical thinking and into ethics. But a president needs to be careful in what they say.

Trump used words indicating he thought medical doctors should test injecting disinfectants into people’s lungs as a possible treatment for COVID-19. Given Trump’s well-established record of dangerous ignorance, interpreting his words as meaning what they state does meet the conditions of the principle of charity and the principle of plausibility: these are his exact words, in context and with full consideration of the source.

Some of Trump’s defenders also tried to use what could be called the Steel Person fallacy. The Steel Person fallacy involves ignoring a person’s claim or argument and substituting a better one in its place.  This sort of “reasoning” has the following pattern:

 

Premise 1: Person A makes claim or argument X.

Premise 2: Person B presents Y (a better version of X).

Premise 3: Person B defends Y.

Conclusion: Therefore, X is true/correct/good.

 

This sort of “reasoning” is fallacious because presenting and defending a better version of a claim or argument does not show that the original is good. A Steel Person can be effective because people often do not know the real claim or argument being defended. The fallacy is especially effective when the Steel Person matches the audience’s positive biases or stereotypes, as they will feel the improved version is the real version and accept it. The difference between applying the principle of charity and committing a Steel Person fallacy lies in the intention: the principle of charity is aimed at being fair, while the Steel Person fallacy is aimed at making a person’s claim or argument appear much better than it is and so is an attempt at deceit.

While this fallacy is generally aimed at an audience, it can also be self-inflicted: a person can unwittingly make a Steel Person out of a claim or argument. This can be done in error (perhaps due to ignorance) or due to the influence of positive biases. The defense against a Steel Person, self-inflicted or not, is to take care to get a person’s claim or argument right and to apply the principle of plausibility.

In the case of Trump, he was clearly expressing interest in injecting disinfectants into the human body. Some of his defenders created a Steel Person version of his claims, contending that what he was really doing was presenting new information about light, heat, and disinfectant killing the virus. To conclude that Trump was right because of this better version would be an error in logic. While light, heat, and disinfectant will destroy the virus, Trump’s claim was about injecting disinfectant into the human body, which, while not telling people to drink bleach, is a dangerously wrong claim.

Trump himself undermined these defenders by saying “I was asking a question sarcastically to reporters like you just to see what would happen.” If this is true, then his defenders’ claims that he was not talking about injecting disinfectant would be false. He cannot have been both saying something dangerously crazy to troll the press and making a true and rational claim about cleaning surfaces. Trump seems to have used a rhetorical device popular with the right (although anyone can use it). This method could be called the “just kidding” technique and can be put in the meme terms “for the lulz.”

One version of the “just kidding” tactic occurs when a person says something that is racist, bigoted, sexist, or otherwise awful and does not get the positive response they expected. The person’s “defense” is that they did not really mean what they said; they were “just kidding.” As a rhetorical technique, it is an evasive maneuver designed to avoid accountability. The defense against this tactic is to assess whether the person was plausibly kidding. Did they intend to be funny without malicious intent and fail, or were they trying to weasel out of accountability? This can be difficult to sort out, since you need to have some insight into the person’s motives, character and so on.

Another version of the “just kidding” tactic is similar to the “I meant to do that” tactic. When someone does something embarrassing or stupid, they will often try to reduce humiliation by claiming they intended to do it. In Trump’s injection case, he claimed he intended to say what he said and that he was being sarcastic. He meant to do it but was just kidding.

If Trump was just kidding, he thought it was a good idea to troll the media during a pandemic, which is a matter for ethics rather than critical thinking. If he was not kidding, then he was attempting to avoid accountability for his claim, which is the point of this tactic. The defense against this tactic is to assess whether the person was plausibly kidding, that is, did they really mean to do it and what they meant to do was just kidding? This requires having some insight into the person’s character and motives as well as considering the context. In the case of Trump, the video shows him addressing his remarks to the experts rather than the press, and he seems completely serious. There is also the fact that a president engaged in a briefing on a pandemic should be serious rather than sarcastic. As such, he does not seem to be kidding. But Trump put himself in a dilemma of awfulness: he was either seriously suggesting a dangerous idea or trying to troll the press during a briefing on a pandemic that was killing thousands of Americans. Either way, that would be terrible. While this essay focuses on Trump and the COVID-19 pandemic, we should expect something similar, if not worse, should another pandemic arise during Trump’s reign.

During the COVID-19 pandemic, some politicians argued that America should be reopened because the dire predictions about COVID turned out to be wrong. On the face of it, this appears to be good reasoning that we could use in the next pandemic: if things are not as bad as predicted, we can start reopening sooner than predicted. To use an analogy, if a fire was predicted to destroy your house, but it only burned your garage, then it would make sense to move back in and rebuild the garage. While this reasoning is appealing, it also can be a trap. Here is how the trap works.

Some politicians and pundits pointed out that the dire predictions about COVID did not come true. For example, the governor of Florida said that since the hospitals were not overwhelmed as predicted, it was a good idea for them to return to profitable elective surgery. He also, like some other Republican governors, wanted to reopen very quickly. This reasoning seemed sensible: the pandemic was not as bad as predicted, so we can quickly reopen. There were also those who sneered at the dire predictions and were upset at what they saw as excessive precautions. This can also seem sensible: the experts predicted a terrible outcome for COVID-19, but they were wrong. We overreacted and should have rolled back the precautions when the predictions did not come true. But would this be a wise strategy for the next pandemic?

While it is reasonable to consider whether the precautions were excessive, there is a tempting fallacy that needs to be avoided: the prediction fallacy. It occurs when someone uncritically rejects a prediction, and the response made to it, because the predicted outcome did not come true. The error in the logic occurs because the person fails to consider what should be obvious: if a prediction is responded to effectively, then the prediction is going to be “wrong.” The form of the fallacy is this:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen, so prediction P is wrong.

Conclusion: We should not have taken Response R (or no longer need to take Response R).

 

To use a concrete example:

 

Premise 1: Experts predicted that the hospitals in Florida would be overwhelmed if we did not respond with social distancing and other precautions.

Premise 2: People responded to this prediction with social distancing and other precautions.

Premise 3: The hospitals in Florida were not overwhelmed, so the prediction was wrong.

Conclusion: The response was excessive, and we no longer need these precautions.

 

 

While it is (obviously) true that a prediction that turns out to be wrong is wrong, the error is uncritically concluding that this proves that the response based on the prediction need not have been taken (or that we no longer need to keep responding in this way). The prediction assumes we do not respond (or do not respond in a certain way), and the response is made to address the prediction. If the response is effective, then the predicted outcome will not occur; that is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed would be a mistake in reasoning. You could be right, but you need to do more than point to the failed prediction.

As a silly, but effective analogy, imagine we are driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we did not turn, then we would have died. And if we turn back too soon, then we die.

The same applied to COVID-19: by responding effectively to dire predictions, we changed the outcome and the predictions turned out to be wrong. But to infer that the responses were excessive or that we should stop now simply because the results were not as dire as predicted would be an error.

This is not to deny what is obviously true: it is possible to overreact to a pandemic. But making decisions based on the prediction fallacy is a bad idea. There is also another version of this fallacy.

A variation of this fallacy involves inferring the prediction was a bad one because it turned out to be wrong:

 

Premise 1: Prediction P predicted X if we do not do R.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: The prediction was wrong about X occurring if we did not do R.

 

While the prediction turned out to be wrong in the sense that the predicted outcome did not occur, this does not disprove the claim that X would have occurred without the response. Going back to the car analogy, the prediction that we would go off the cliff and die if we did not turn is not disproven when we turn and then do not die. In fact, that is the result we want.

Getting back to COVID-19, the predictions made about what could occur if we did nothing are not disproven by the fact that they did not come true when we did something. So, to infer that these predictions must have been wrong about what would have occurred if we did nothing would be an error. We do, of course, rationally assess predictions based on outcomes, but this assessment should not ignore the effect of the response. Sorting out such counterfactual predictions is hard. In complex cases we can probably never prove what would have happened, but good methods can guide us here, which is why we need to go with science and math rather than hunches and feelings.

This fallacy draws considerable force from psychological factors, especially in the case of COVID-19. The response that was taken to the virus came with a high cost and we wanted things to get back to normal—so, ironically, the success of the response made us feel that we could stop quickly or that we did not need such a response.  As always, bad reasoning can lead to bad consequences and in a pandemic, it can hurt and even kill many people.

Stay safe and I will see you in the future.

Years ago, my coverage of medical testing in my critical thinking class was purely theoretical for most of my students. But COVID-19 changed that. One common type of medical test determines whether a person is infected with a disease, such as COVID-19. Another is to determine whether a person has had the infection. While tests are a critical source of information, we need to be aware of the limitations of testing. Since I am not a medical expert, I will not comment on the accuracy of specific methods of testing. Instead, I will look at applying critical thinking to testing.

An ideal medical test would always be accurate and never yield false results. Real medical tests are, for various reasons, less than 100% accurate, and a good test will usually fall into the 90-99% range. This means that any test can be wrong. So how do you rationally assess test results?

Intuitively, the chance a person is infected (or not) would seem to be the same as the accuracy of the test. For example, if a COVID-19 test has an accuracy of 90% and you test negative, then it seems there is a 90% chance you do not have COVID. Or, if you test positive, you might think there is a 90% chance you have COVID. While this seems sensible, it is not correct; the error arises from a common mistake about conditional probabilities. I will keep math to a minimum because math, as Barbie said, is hard.

So, suppose that I test positive for COVID and the test is 90% accurate. If I think there is a 90% chance I have COVID, I am probably wrong, and here is why. The mistake is failing to recognize that the probability of X given Y is distinct from the probability of Y given X. In the case of the test, testing positive is the effect of COVID and obviously not the cause. As such, a 90% accurate test for COVID does not mean that 90% of those who test positive (effect) will have COVID (cause). It means that 90% of those who have COVID (cause) will test positive (effect). So, if I have COVID, then there is a 90% chance the test will detect it. The wrong way of looking at it would be to think that if I test positive, then there is a 90% chance I have COVID. So, what is the true chance I have COVID if I test positive on a test that is 90% accurate? The answer is that I do not know. But I do know how to do the math to sort it out.
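Though I promised to keep the math to a minimum, it is worth noting that this is just Bayes’ theorem: P(COVID given a positive test) = P(positive test given COVID) × P(COVID) / P(positive test). The test’s accuracy tells us P(positive test given COVID), but what we actually want to know is P(COVID given a positive test), and the two need not be equal.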

To know my chance of having COVID, I would also need to know the percentage of false positives that occur with the test and, very importantly, the base rate of the infection. The base rate is how frequently the infection (the cause) occurs in the population. Using my made-up test and some made-up numbers, here is how the math would go.

Suppose that the 90% accurate test has a 10% false positive rate and 1% of the population in question is infected. For every 1,000 people in the population:

 

  • 10 people will have COVID.
  • 9 of the people with COVID will test positive.
  • 990 people will not have COVID.
  • 99 of the people without COVID will test positive.

 

While there will be 108 positive test results, only 9 of them will be from people who have COVID. So, a person who tests positive has about an 8% chance of having COVID, not 90%. In conditional terms and using these made-up numbers, if I have COVID, then there is a 90% chance I will test positive. But if I test positive, then there is only about an 8% chance I have COVID.
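For readers who want to check the arithmetic, here is a minimal sketch in Python; the function name and the numbers are mine, but the figures match the made-up example above:

    def chance_infected_given_positive(sensitivity, false_positive_rate, base_rate):
        # True positives: infected people the test correctly flags.
        true_positives = sensitivity * base_rate
        # False positives: uninfected people the test wrongly flags.
        false_positives = false_positive_rate * (1 - base_rate)
        # Of everyone who tests positive, the fraction actually infected.
        return true_positives / (true_positives + false_positives)

    # The made-up numbers: 90% accurate, 10% false positives, 1% infected.
    print(chance_infected_given_positive(0.90, 0.10, 0.01))  # about 0.083, i.e. 8%

This is the same 9-in-108 calculation as above, just generalized so you can plug in other numbers.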

At this point it might be tempting to think that testing is useless, but that would be a mistake. Testing is useful for gathering data about infection rates. A positive result is also more likely to be correct in populations with higher rates of COVID infection, but this is a function of statistics rather than of the test itself. To illustrate this, let us run the example again with one change: increasing the rate of infection to 10%. For every 1,000 people in the population:

 

  • 100 people will have COVID.
  • 90 of the people with COVID will test positive.
  • 900 people will not have COVID.
  • 90 of the people without COVID will test positive.

 

There will be 180 positive test results and 50% of them will be from people who have COVID. So, if I test positive for COVID, then there is a 50% chance I have COVID. Again, this is a matter of statistics, as the test accuracy, by hypothesis, has not changed. Because of this, testing groups that we know have higher infection rates will give better statistical results that can be useful—but much of the use will be in terms of additional statistical analysis. NPR provided an excellent discussion of antibody testing for COVID and even included a calculator that will do the math for you.
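Rerunning my sketch across a range of made-up base rates makes the pattern explicit (the 90% accuracy and 10% false positive rate are held fixed):

    def chance_infected_given_positive(sensitivity, false_positive_rate, base_rate):
        true_positives = sensitivity * base_rate
        false_positives = false_positive_rate * (1 - base_rate)
        return true_positives / (true_positives + false_positives)

    # The same made-up test applied to populations with rising infection rates.
    for base_rate in (0.01, 0.05, 0.10, 0.25):
        chance = chance_infected_given_positive(0.90, 0.10, base_rate)
        print(f"Base rate {base_rate:.0%}: a positive means a {chance:.0%} chance of COVID")

With these numbers, a positive result means roughly an 8% chance of infection at a 1% base rate but a 50% chance at a 10% base rate, matching the two worked examples.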

In terms of putting your trust in a test, such as an antibody test to determine whether you had COVID, it is wise to keep the math in mind. Even if surviving COVID confers some immunity, a positive test might mean only an 8% chance that you had it. And until we know the base rate of the infection, we would essentially be guessing when doing the math. During the pandemic the rational approach might seem odd: you should have acted as if you had COVID (to avoid spreading it) while also assuming that you had not had it (and so lacked immunity). The same will apply to the next pandemic.

While it would be irrational to reject the medical claims of health care experts in favor of those made by a ruler, this happened in the last pandemic and will happen again. Why people do this is mainly a matter of psychology, but the likely errors in reasoning are a matter of philosophy.

While those who accept a ruler as a medical authority are falling victim to a fallacious appeal to authority, it is worth considering the specific version of the fallacy being committed. I am calling this fallacy the argument from authoritarian. The error occurs when a person believes a claim simply because it is made by an authoritarian leader they accept. It has this form:

 

Premise 1: Authoritarian leader L makes claim C.

Conclusion: Claim C is true.

 

The fact that an authoritarian leader makes a claim does not provide evidence that supports the claim. It also does not disprove the claim. Accepting or rejecting a claim simply because it comes from an authoritarian would both be errors. The authoritarian could be right but, as with any logical fallacy, the error lies in the reasoning.

The use of my usual silly math example illustrates why this is bad logic:

 

Premise 1: The dear leader claims that 2+2=7.

Conclusion: The dear leader is right.

 

At this point, you might be thinking about the consequences someone might suffer for doubting what an authoritarian leader claims. They could be fired, exiled, tortured, or even killed. While that is true, there is a critical distinction between having a rational reason to believe a claim is true and having a pragmatic reason to accept a claim, or at least pretend to do so. Fear of retaliation by an authoritarian can provide a practical reason to go along with them, but it does not provide evidence. No matter how brutally an authoritarian enforces their view that 2+2=7 and no matter how many people echo their words, 2+2=4. While fear can motivate people to accept an argument from authoritarian, there are other psychological reasons that can drive such bad logic. This takes us to a simplified look at the authoritarian leader type and the authoritarian follower type. The same person can have qualities of both, and everyone has at least some of these traits; the degree to which a person has them is what matters.

An authoritarian leader type is characterized by the belief that they have a special status as a leader. At the extreme, the authoritarian leader believes they are the voice of their followers, and they alone can lead. Or, as Trump put it, “I alone can fix it.” Underlying this is the belief they possess exceptional skills, knowledge and abilities. As Socrates found out, people think they know far more than they do, but the authoritarian leader takes this to extremes and overestimates their abilities. This, as would be expected, leads them to make false claims and mistakes.

Since the authoritarian leader is extremely reluctant to admit their errors and limits, they must be dishonest to the degree they are not delusional and delusional to the degree they are not dishonest. Because of the need to maintain the lies and delusions about their greatness and success, the authoritarian leader is intolerant of criticism, dissent, and competition. To the extent they can do so, they use coercion against those who disagree and resort to insults when they cannot intimidate. Because facts, logic, and science would tell against them, they tend to oppose all three and base many of their beliefs on feelings, biases, and bad logic. They encourage their followers to do the same—in fact, they would not have true followers if no one followed their lead here.

While an authoritarian leader might have some degree of competence, their overestimation of their abilities and their fear of competent competition (even among those who serve them) will result in regular and often disastrous failures. Maintaining their delusions and lies in the face of failure requires explaining it away. One approach is denial, which is to ignore reality. A second approach is to blame others: the leader is not at fault because someone else is responsible. One method of doing this is scapegoating, which is finding someone else to bear undeserved blame for the leader’s failings.

For the authoritarian, there is something of a paradox here. They must affirm their greatness and at the same time blame vastly inferior foes who manage to thwart them. These opponents must be both pathetic and exceptionally dangerous, stupid and yet brilliant, incompetent and yet effective and so on for a host of inconsistent qualities.

An authoritarian leader obviously desires followers and, fortunately for them, there are people of the authoritarian follower type. While opportunists often make use of authoritarian leaders and assist them, they are not believers. The authoritarian follower believes that their leader is special, that the leader alone can fix things. Thus, the followers must buy into the leader’s delusions and lies, convincing themselves despite the evidence to the contrary. And this is very dangerous.

Since the leader will tend to fail often, the followers must accept the explanations given to account for these failures. This requires rejecting facts and logic. The followers embrace lies and conspiracy theories, whatever supports the narrative of their leader’s greatness. Those who do not agree with the leader are not merely wrong but are enemies of the leader and thus enemies of the followers. The claims of those who disagree are rejected out of hand, often with hostility and insults. Thus, the followers tend to isolate themselves epistemically, which is a fancy way of saying that nothing that goes against their view of the leader ever gets in. This motivates a range of fallacies, including what I call the accusation of hate.

In the last pandemic, when I tried to discuss COVID-19 with Trump supporters, it almost always ended with them accusing me of hating Trump and rejecting anything I said that did not match Trump’s claims. I think they were sincere. Like everyone, they tend to believe and reject claims based on how they feel about the source. Since they like Trump, they believed him even when the evidence contradicted his claims. Since I disagreed with Trump’s false claims, they concluded I must hate Trump; otherwise, I would believe his claims. As they saw it, this also meant that I was wrong. While this makes psychological sense, it is bad logic and can be presented as a fallacy: the accusation of hate. It has this form:

 

Premise 1: Person A rejects Person B’s claim C.

Premise 2: Person A is accused of hating B.

Conclusion: Claim C is true.

 

As my usual silly math example shows, this is bad logic:

 

Premise 1: Dave rejects Adolph’s claim that 2+2=7.

Premise 2: Dave hates Adolph.

Conclusion: So, 2+2=7. 

 

While hating someone would be a biasing factor, this does not disprove the alleged hater’s claim. It can have psychological force since people tend to reject claims made by people they think hate someone they like. This is especially true in the case of authoritarian followers defending their leader.

Since authoritarian leaders are often delusional liars who fail often, deny these failures, and scapegoat others, they are extremely dangerous. The more power they have, the more harm they can do. They are enabled by their followers, which makes the followers dangerous as well. In a democracy the solution is to vote out the authoritarian and elect a leader who does not live in a swamp of lies and delusions. Until then, non-authoritarian leaders must step up to make rational decisions based on truth and good science; otherwise the next pandemic will drive America into ruin while lies and delusions are spun.

During the next pandemic, accurate information will be critical to your wellbeing and even survival. Some sources will mean well but will unintentionally spread misinformation. Malicious sources will spread disinformation. While being an expert in a relevant field is the best way to sort out which sources to trust, most of us are not experts in these areas. But we are not helpless: while we cannot become medical experts overnight, we can learn skills for assessing sources.

When you accept a claim based on the (alleged) expertise of a source, you are using an argument from authority. Despite its usefulness, it is a relatively weak argument: because you do not have direct evidence for the claim, you are relying on the source to be both accurate and honest. Despite this inherent weakness, a true expert is more likely to be right than wrong when making considered claims within their area of expertise. While the argument is usually presented informally, it has the following structure:

 

Premise 1: A is (claimed to be) an authority on subject S.

Premise 2: A makes claim C about subject S.

Conclusion: Therefore, claim C is true.

 

As an informal example, when you believe what your doctor or HVAC technician claims, you are using an argument from authority. But how do you know when an alleged authority really is an expert? Fortunately, there are standards you can use even if you know little or nothing about the claim. To the degree that the argument meets the standards, it is reasonable to accept the conclusion. If the argument does not meet the standards, it would be a fallacy (a mistake in logic) to accept the conclusion on that basis. It would also be a fallacy to reject the conclusion because the appeal to authority was fallacious: poor reasoning can still have a true conclusion, rather like how someone can guess the right answer to a math problem. Here are the standards for assessment.

First, the person must have sufficient expertise in the subject. A person’s expertise is determined by their relevant education (formal and otherwise), experience, accomplishments, reputation, and position. These should be carefully assessed to consider how well they establish expertise. For example, a person might be the head of a government agency because of family connections or political loyalty rather than ability or knowledge. The degree of expertise required also varies with the context. For example, someone who has completed college biology courses could be considered an expert when they claim that a virus replicates by hijacking the machinery of living cells. But a few college courses in biology would not make them an expert in epidemiology.

Second, the claim must be in the person’s area of expertise. Expertise in one area does not automatically confer expertise in another. For example, being a world-renowned physicist does not automatically make a person an expert on morality or politics. Unfortunately, this is often overlooked or intentionally ignored. Actors and musicians, for example, are often accepted as experts in fields beyond their artistic expertise. Billionaires are also often wrongly regarded as experts in many areas based on the mistaken view that being rich entails broad expertise. This does not mean that their claims outside their field are false, just that their alleged expertise provides no good reason to accept those claims.

Third, there needs to be an adequate degree of agreement among the other experts in the field. If there is no adequate agreement, it would be a fallacy to appeal to the disputing experts, because for any claim made by one expert there will be a counterclaim from another qualified expert. In such cases, appealing to the authorities would be futile.

That said, no field has complete agreement, so a certain degree of dispute is acceptable when using this argument. How much is acceptable is a matter of debate, but it is rational to believe the majority view of the qualified experts. While they could turn out to be wrong, they are more likely to be right. Even when there is broad consensus, non-experts often make the mistake of picking a dissenting expert they happen to agree with and treating them as right. This is not good reasoning; agreeing with an expert is not a logical reason to believe they are right.

Fourth, the expert must not be significantly biased. Examples of biasing factors include financial gain, political ideology, sexism, and racism. A person’s credibility is reduced to the degree that they are biased. While everyone has biases, it becomes a problem when the bias is likely to unduly influence the person. For example, a doctor who owns a company that produces anti-viral medication could be biased when making claims about the efficacy of the medication. But while bias is a problem, it would also be a mistake to reject a person’s claim solely because of alleged bias. After all, a person could resist their biases and even a biased person can be right. Going with the anti-viral example, rejecting the doctor’s claim that it works because they can gain from its sale would be an ad hominem fallacy. While unbiased experts can be wrong, an unbiased expert is more credible than a biased expert—other factors being equal. 

Fifth, the area of expertise must be a legitimate area or discipline. While there can be debate about what counts as a legitimate area, there are clear cases. For example, if someone claims to be an expert in magical healing crystals and recommends using magic quartz to ward off Ebola, then it would be unwise to accept their claim.  In contrast, epidemiology is a legitimate field.

Sixth, the authority must be identified. If a person says a claim is true based on an anonymous expert, there is no way to tell whether that person is a real expert. This does not make the claim false (to think otherwise would be a fallacy), but without the ability to assess the unnamed expert, you have no way of knowing if they are credible. In such cases, suspending judgment can be a rational option. As would be expected, unnamed experts are often cited on social media, and it is wise to be even more wary there. It is also wise to be wary of false attributions; for example, someone might circulate false claims and attribute them to a credible expert.

Finally, the expert needs to be honest and trustworthy. While being honest means that a person is saying what they think is true, it does not follow that they are correct. But an honest expert is more credible than a source inclined to dishonesty. To infer that a dishonest source must be wrong would also be an error; after all, the dishonest source might be right this time, perhaps while believing they are lying.

While these standards have been presented in terms of assessing individuals, they apply to institutions and groups as well. And just as with individuals, you should update your assessments of groups as they change over time. For example, a federal agency staffed by experts and headed by an expert would be trustworthy; but if that agency were gutted and its personnel replaced with political loyalists, it would no longer have authority.