In the previous essay on threat assessment, I looked at the influence of availability heuristics and fallacies related to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.

A rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood or severity of a threat.

One starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in the sense that the stereotyping itself is good. Rather, a positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid and weak, yet somehow also incredibly cunning and dangerous.

Stereotyping leads to mistakes similar to those arising from hasty generalization: reasoning about a threat based on stereotypes will often result in errors. The defense against a stereotype is to seriously inquire into whether the stereotype is true.

Stereotyping is useful for demonizing. Demonizing, in this context, involves unfairly portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not responsible for. A person can demonize on their own or be subject to the demonizing rhetoric of others.

Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) seen as more evil and more dangerous than it really is. As such, both the assessment of the probability and severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits distorts threat assessments.

The defense against demonizing is like the defense against stereotypes—a serious inquiry into whether the claims are true. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing, like hyperbole, exaggerates the evil of a group and the danger it presents. If the description is true, then it would not be demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, descriptions of ISIS as evil and dangerous were generally accurate. As are descriptions of evil and dangerous billionaires.

While stereotyping and demonizing are rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics can be put to other uses, it can also be used to distort threat assessment. One aspect of this distortion is fear itself—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.

One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask, “should I be afraid?” and the second is to ask, “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether there are grounds to be afraid of refugees and whether an appeal to fear justifies banning them.

It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.

While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger fallacy. This fallacy occurs when something that is intended to create anger is substituted in place of evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.

Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, a crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals to limit gun rights that have no relevance to the incident in question often follow. As another example, the anger at illegal migrants is often used to “justify” policies that will harm the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good.

It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But this would also serve as good evidence for the conclusion that she should report him to the police and divorce him. As another example, the fact that illegal migrants are here illegally and knowingly employed by businesses because they can be more easily exploited than American workers can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.

One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether there are grounds to be angry and whether any evidence is offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim based on the appeal.

The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating unfounded fear and anger. In addition to the psychological harm to individuals that arises from living in fear and anger, there is also the damage that stereotyping, demonizing, scare tactics and appeal to anger do to society. While anger and fear can unify people, they most often unify by dividing—pitting us against them. I urge people to think through threats rather than giving in to the seductive demons of fear and anger.

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is (for most people) not a severe threat. There is a low probability that there will be a mass shooting on my campus, but it is a high severity threat.
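
To make this concrete, the two factors can be treated as a rough expected-value calculation: multiply the probability of a threat by its severity. The following sketch (in Python) uses purely illustrative numbers; the probabilities and the 0-to-100 severity scale are assumptions, not real statistics.

  # A minimal sketch of weighing probability against severity.
  # All numbers are illustrative assumptions, not real data.

  def expected_harm(probability, severity):
      # Expected harm = chance of the event times the harm if it occurs.
      return probability * severity

  # Hypothetical threats, with severity on an arbitrary 0-100 scale.
  threats = {
      "catching a cold at the theater": (0.20, 2),    # likely, mild
      "mass shooting on campus": (0.000001, 100),     # very unlikely, severe
  }

  for name, (p, s) in threats.items():
      print(f"{name}: expected harm = {expected_harm(p, s):.6f}")

On these made-up numbers, the common cold actually carries the higher expected harm, which illustrates why both factors, not just severity, matter to a rational assessment.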

Our survival as a species seems to have occurred despite our poor skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match the probability of the threat occurring. People seem somewhat better at assessing severity, though we often get this wrong.

One excellent example of poor threat assessment is the fear Americans have about terrorism. Between 1975 and 2025, 3,577 Americans died as a result of terrorism, accounting for 0.35% of all murders in the US during that time. If you are in the United States now, your odds of being killed in such an attack are about 1 in 4 million per year. This includes all forms of terrorism, although you would now be statistically most likely to be killed by right-wing terrorists.
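
For those who want to check the arithmetic, a quick calculation reproduces the rough odds. The death toll and time span come from above; the average US population over the period is an assumed round figure used only for illustration.

  # Rough arithmetic behind the "about 1 in 4 million per year" figure.
  deaths = 3_577
  years = 50
  avg_population = 280_000_000   # assumed average US population, 1975-2025

  deaths_per_year = deaths / years               # about 72 per year
  odds = avg_population / deaths_per_year
  print(f"Roughly 1 in {odds:,.0f} per year")    # about 1 in 3.9 million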

While being killed by terrorists in the United States is unlikely, some people are terrified by the possibility (which is, of course, the goal of terrorism). Given that an American is more likely to be killed while driving than by a terrorist, it might be wondered why people are so bad at threat assessment. The answer, in terms of feeling fear vastly out of proportion to probability, involves a cognitive bias and some classic fallacies.

People (probably) follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use statistical methods, people often fall victim to the availability heuristic. This is when a person unconsciously assigns a probability based on how often they think of something. While something that occurs often is likely to be thought of often, thinking of something more often does not make it more likely to occur.

After an act of terrorism, people think about terrorism more often and tend to unconsciously believe that the chance of terror attacks occurring is higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is low (driving to the beach is much more likely to kill you). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite usually being the worst evidence.
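
As a rough illustration of how far feelings can drift from base rates, consider the shark example in numbers. Both figures below are approximate, commonly cited annual US averages, used here only to show orders of magnitude.

  # Rough base-rate comparison; both figures are approximate annual
  # US averages, used only to illustrate orders of magnitude.
  shark_deaths_per_year = 1          # roughly one US fatality per year
  driving_deaths_per_year = 40_000   # roughly, US motor-vehicle deaths

  ratio = driving_deaths_per_year / shark_deaths_per_year
  print(f"Driving kills roughly {ratio:,.0f} times more Americans per year than sharks.")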

People are also misled about probability by fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention will focus on that fact, often leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most veterans are terrorists because the media covered a terrorist who was a military veteran. If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a terrorist attack by Muslims in the United States. This is distinct from someone simply lying about, for example, Muslims and claiming they are terrorists because the person is a bigot or wants to exploit the fear they create.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy also occurs when someone rejects reasonable statistical data supporting a claim in favor of one example or a small number of examples that go against the claim. This fallacy is like hasty generalization in that a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild, it can be difficult to tell whether a fallacy is a hasty generalization or anecdotal evidence; fortunately, what matters is recognizing that a fallacy is a fallacy, even if it is not clear which one it is.

People fall victim to this fallacy because stories and anecdotes usually have more emotional and psychological impact than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population, or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people most often accept this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of terrorism or tell a story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring in the United States. For example, people point out that terrorists have masqueraded as refugees and infer that refugees in general present a major threat to the United States. Or they might tell the story of one attacker in San Bernardino who arrived in the States on a K-1 (“fiancé”) visa and draw unwarranted conclusions about the danger of the entire visa system.

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is exceptionally vivid or dramatic does not make the event more likely to occur, especially in the face of statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases usually make a very strong impression on the mind. For example, mass shootings are vivid and awful, so it is hardly surprising that people often feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more likely a person feels that it will occur. But the vividness of a harm has no connection to the probability it will occur.

That said, considering the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide never to go sky diving because hitting the ground after a parachute failure would be very dramatic. If he knows that, statistically, the chances of the accident happening are very low but considers even a small risk unacceptable, then he would not be committing this fallacy. This then becomes a matter of value judgment—how much risk a person is willing to tolerate relative to the severity of the potential harm.

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is important for both practical and moral reasons. The matter of terrorism is no exception to this. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter an unlikely threat while spending little on major causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating unfounded fear. In addition to the psychological harm to individuals, there is also the damage to the social fabric. While creating unwarranted fear is useful for grifters, pundits and politicians, it is bad for the rest of us; thinking things through is a way to protect yourself from needless fear and from those who wish to exploit it.

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.

Available for $2.99 on Amazon

Power holders in the United States tend to be white, male, straight, and (profess to be) Christian. Membership in these groups also seems to confer a degree of advantage relative to people outside of these groups. Yet, as has been noted in previous essays, some claim that the people in these groups are now the “real victims.” In this essay I will look at how a version of the fallacy of anecdotal evidence can be used to “argue” about who is “the real victim.”

The fallacy of anecdotal evidence is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is sometimes taken to be a version of the hasty generalization fallacy (drawing a conclusion from a sample that is too small to adequately support that conclusion). The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample.

Here is the form of the anecdotal evidence fallacy often used to “argue” that an advantaged group is not advantaged:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: A member of Group A was disadvantaged relative to a member of Group B.

Conclusion: Group A is not advantaged relative to Group B (or Group B is not disadvantaged relative to Group A).

 

To illustrate:

 

Premise 1: It is claimed that statistical evidence shows that white Americans are advantaged relative to black Americans.

Premise 2: Chad, a white American, was unable to get into his first choice of colleges because affirmative action allowed Anthony, a black American, to displace him.

Conclusion: White Americans are not advantaged relative to black Americans.

 

The problem with the logic is that an anecdote does not suffice to establish a general claim because an adequately large sample is needed to make a strong generalization. But one must also be on guard against another sort of fallacy:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: Member M of Group A is disadvantaged relative to Member N of Group B.

Conclusion: The disadvantage of M is morally acceptable, or M is not really disadvantaged.

 

To illustrate:

 

Premise 1: It is claimed that statistical evidence shows that men are advantaged relative to women.

Premise 2: Andy was disadvantaged relative to his boss Sally when she used her position to sexually harass him.

Conclusion: The disadvantage of Andy is morally acceptable, or Andy was not really disadvantaged.

 

While individual cases do not disprove a body of statistical evidence, they should not be ignored. As in the illustration given above, while men generally have a workplace advantage over women, this does not entail that individual men are never at a disadvantage relative to individual women. It also does not entail that, for example, men cannot be the victims of sexual harassment by women. As another illustration, while white men dominate academia, business, and politics, this does not entail that there are no injustices against specific white men in such things as admission, hiring and promotions. These sorts of situations can lead to moral debates about harm.

One excellent example is the debate over affirmative action. An oversimplified justification is that groups that have been historically disadvantaged are given a degree of preference in the selection process. For example, a minority woman might be given preference over a white woman in the case of college admission. The usual moral counter is that the white woman is wronged by this: if she is better qualified, then she should be admitted, even if this entails that the college population will remain almost entirely white.

The usual counter to this is that the white woman is likely to appear better qualified because she has enjoyed the advantages conferred from being white. For example, her ancestors might have built wealth by owning the ancestors of the black woman who was admitted over her and this inherited wealth meant that her family has been attending college for generations, that she was able to attend excellent schools, and that her family could pay for tutoring and test preparation.

This can be countered by other arguments, such as that the woman did not own slaves herself, so it is unfair to deny her admission based on the “merit” arising from the advantages of generational wealth. One can, of course, consider scenarios such as cases in which the black woman is from a wealthy family while the white woman is from a poor family. Such cases can be considered in terms of economic class, and one could argue that class should also be a factor. This all leads to the moral issue of whether it is acceptable to inflict some harm on specific members of advantaged groups to address systematic disadvantages, which goes well beyond the scope of this essay.

Fortunately, I do not need to settle this issue here. This is because even if such anecdotes are examples of morally wrong actions, they do not disprove the general statistical claims about relative advantages and disadvantages between groups. For example, even if a few white students are wronged by affirmative action when they cannot attend their first pick of schools, these anecdotes do not disprove the statistical evidence of the relative advantage conferred by being white in America. After all, the claim of advantage is not that each white person is always advantaged over everyone else on an individual-by-individual basis. Rather it is about the overall advantages that appear in statistics such as wealth and treatment by the police. As such, using anecdotes to “refute” statistical data is, as always, a fallacy. But what about cases in which members of an advantaged group do suffer a statistically meaningful disadvantage in one or more areas?

While falling victim to the fallacy of anecdotal evidence is bad logic, it is not an error to consider that members of an advantaged group might face a significant disadvantage (or harm) because of their membership in that advantaged group. As would be expected, any example used here will be controversial. I will use the Fathers’ Rights movement as the example. The central claim behind this movement is that fathers are systematically disadvantaged relative to mothers. While there are liberal and conservative versions, the general claim is that fathers and mothers should have parity in the legal system on this matter. Critics, as would be expected, claim that men tend to already enjoy a relative advantage here. But if the Fathers’ Rights movement is correct about fathers being systematically disadvantaged relative to mothers, then this would not be mere anecdotal reasoning. That is, it would not just be a few cases in which individual fathers were disadvantaged relative to a few individual mothers, it would be systematic injustice. But would this area of relative disadvantage disprove the claim of general advantage? Let us look at the reasoning:

 

Premise 1: It is claimed that statistical evidence shows that Group A is advantaged relative to Group B.

Premise 2: But Group A is disadvantaged relative to Group B in specific area C.

Conclusion: Group A is not advantaged relative to Group B.

 

As presented, this would be an error in reasoning because Group A being disadvantaged in one area would not prove that the group is not advantaged relative to Group B when all areas are considered. To use an analogy, the fact that Team B outscored Team A in the fifth inning of a baseball game does not entail that B is leading the game. It must be noted that a similar argument with multiple premises like Premise 2 could show that Group A is not advantaged relative to Group B; after all, adequate statistical evidence across enough areas would establish that conclusion. There are, of course, questions about how to determine relative advantage, and these can be debated in good faith. One obvious point of dispute would be the matter of weighting. For example, if fathers are disadvantaged relative to mothers, how would this count relative to the pay gap between men and women? And so on for all areas of comparison. This does show the need to consider each area as well as the need to assess values, but this is not unique to the situation at hand and one could, as is often done, assign crude dollar values to do the math.

In closing, while individual wrongs and wrongs done to members of advantaged groups as members of that group can occur, they do not automatically disprove the statistical data. 

As noted in previous essays, a tactic used against critics of capitalism is to accuse them of envy. As an argument, the Accusation of Envy is a fallacy. However, as was noted in the previous essay, a person’s envy could bias them and impact their credibility. Even when envy is relevant to credibility, proof of envy has no relevance to the truth of the person’s claims or the quality of their arguments. But from a rhetorical standpoint, such attacks can be effective: if someone is convinced another person is envious, they will often dismiss that person’s claims and arguments for psychological rather than logical reasons. Some people also enjoy attacking those they disagree with and casting them as corrupted by vices. So, how would one tell if another person is envious?

My rough account of envy is that it involves an improper desire for what someone else has and the feeling includes an unwarranted resentment towards the possessor of the desired thing.  It often includes the desire to unjustly take it from the other person. An envious person would tend to be unable to get what they desire. If they could, they would presumably cease their envy (though they might become jealous). Determining if a person is envious would require assessing a person in terms of these factors in a fair and objective way.

A central part of the assessment is determining if the person has an improper desire for what someone else has. If a person shows no interest in the alleged object of desire, the accusation of envy would seem unwarranted. Even if a person is interested, it must be shown there is a defect in their desire and that unwarranted resentment is present.  As an illustration, consider the difference between training to be as good a basketball player as Jordan because he is an athlete you respect and bitterly begrudging his ability because you wish you had his talent.

 Discerning the presence of unwarranted resentment involves assessing the person’s words and deeds relative to the target of the alleged envy. Due caution must be taken to distinguish criticism and even anger from unwarranted resentment. Consider the difference between being justly angry at someone who harmed you and being unjustly resentful of someone who has done well in an area where you have failed. If a fair and objective assessment shows that the person is suffering from envy, then it would be reasonable to make that claim. But this would still be irrelevant to the truth of their claims and the quality of their arguments.

In some cases, people will make their envy clear: they will express bitter, yet unwarranted, resentment and have a record of failed attempts to acquire what they desire. They might even admit their envy. In other cases, it will be harder to determine if a person is envious. After all, strong criticism can resemble unwarranted resentment, and justified anger can arise from a string of unfair failures. For example, a person who tries to start a small business and is repeatedly driven out of business by corporations exploiting their unfair advantages could be seen as having righteous anger at an unfair system or cast as a failure who is envious. If a person does not show clear signs of envy or denies that they are envious, one evil rhetorical tactic that can be used is Secret Motive.

Secret Motive (or Real Motive) is a rhetorical technique in which a person is accused of having a secret, typically bad, motive for their claims, arguments or actions. That is, they are accused of having a real motive that is wicked. This is often a setup for an ad hominem attack based on the alleged secret motive. For example, consider a critic of capitalism who denies they are envious of the rich when there is no good evidence to the contrary. An evil “solution” is to insist their real motivation is envy, despite the lack of evidence. The accuser often claims a special insight into the psychology of the accused, which is how they somehow know the person’s secret motive despite being unable to provide evidence for their claim. While primarily a rhetorical device (and hence not an argument), it can also be cast as a fallacy:

 

Premise 1: Person A asserts that person B has a secret (or real) motive.

Conclusion: B has a secret motive.

 

The error occurs when A fails to provide adequate evidence for their conclusion. This is not to say that “evidence” will never be provided; but what is offered fails to support the claim. For example, the “evidence” of envy might be that the person has been critical of the rich, though they have never expressed resentment at wealth earned fairly and have never exhibited interest in becoming rich. But the accuser somehow “knows” the accused is secretly envious, apparently through some exceptional epistemic abilities. Aside from dishonesty, one possible motivation is that the accuser honestly cannot conceive of anyone being critical of capitalism for a good reason. Hence, they infer there must be a secret wicked motive. But it is more likely the accuser knows there are good criticisms of capitalism and that to not accuse the critic of wicked motives would be to acknowledge this.

The defense against this technique is objectively assessing whether adequate evidence exists for the accusation of the secret motive. If not, the claim should not be accepted. It must also be remembered that even if a person has a bad (secret or not) motive, this is irrelevant to the truth of their claims and the quality of their arguments.

As noted in my previous essay, critics of capitalism are often accused of being envious or Marxists. As shown in that essay, even if a critic is envious, it is fallacious to conclude their criticism is therefore wrong. But it could be argued that a person’s envy can bias them and diminish their credibility. I will look at this and examine envy. I will then engage in some self-reflection on whether I am envious.

Since envy involves resentment, an envious person could have a bias and see those they envy in an unwarranted, negative manner. This might occur for a variety of reasons, such as a desire to explain away their own failures or to feel better by attributing negative qualities to those they envy. For example, a person who envies the rich might explain their own lack of wealth in terms of the machinations of the wealthy and “the system” while seeing the rich as suffering from greed, dishonesty and corruption. Thus, an envious person could be biased against those they envy. If such a bias exists, then the envious person’s credibility would be reduced in proportion to their bias. This is because they would be more inclined to accept negative claims about those they envy. So, it would be rational to consider the influence of bias when assessing claims.

But the mere possibility of bias is not proof of bias; there would need to be evidence that the person 1) is envious and 2) is biased by this envy. If this evidence exists, then we should consider the impact of this bias on the person’s claims. This approach can have merit in the context of the Argument from Authority.

An Argument from Authority occurs when it is argued that a claim should be accepted because the person making it is an authority (expert) on the subject. It has this form:

 

Premise 1: Person A is an authority (expert) on subject S.

Premise 2: A says P about S.

Conclusion: P is true.

 

This inductive argument is assessed in terms of the quality of the expert, and this includes considering whether the expert is significantly biased. If an expert is biased to a degree that would render them untrustworthy, then accepting a biased claim from them would be an error of logic. If I were so envious of the rich that I was significantly biased against them, then unsupported claims I make about them should not be accepted as true based on my (alleged) expertise.

But even if someone is envious and extremely biased, this would not disprove their claims since claims stand or fall on their own. To think otherwise would be to fall into the Accusation of Envy fallacy discussed in the previous essay. The logical response to bias is not to reject the claims, but to subject them to scrutiny. Even if I were extremely envious of the rich, it would not follow that my claims about capitalism are false; they would need to be assessed on their own merit. But am I envious of the rich? To answer this, I need to consider the nature of envy.

At its core, envy involves wanting what someone else has. This can range from a possession (such as money) to a quality (such as being a fast runner). But merely wanting what someone else has is not the defining feature of envy. You might want to have artistic skills to match Rembrandt, but this need not make you envious. Envy includes a resentment towards the possessor of the desired thing and often includes a desire to take it. But even this does not properly capture envy. Suppose that you start a business with a trusted friend, but they betray you and flee the country with your money. You want the money, you resent that they have it, and you desire that it be taken away from them. But it would be incorrect to say that you are envious of them. More must be added to complete the recipe for envy.

One plausible addition is that the resentment must be unwarranted and the desire improper in some relevant way. In the case of the hypothetical betrayer, your resentment is warranted and the desire for your money is proper. Establishing a claim of envy would thus require showing that a person wants what another has, that they unjustly resent that the other person has it, and that there is something improper about their desire for it. Envy also often involves an inability on the part of the envious to get what they desire. If they could, they would not be envious. The envious person thus suffers from a series of moral failings relative to their desire. While this is hardly a necessary and sufficient definition of “envy,” it should suffice for sorting out whether I am envious of the rich.

To be envious of the rich, I would need to want to be rich. I would also need an unjust resentment of the rich, an improper desire to be rich, perhaps a desire that the rich no longer be rich, and I would need to lack the ability to become rich. Let us walk through each of these in turn. While I want to have some money (food and running shoes are not free), I do not desire to be rich. As for evidence, my life choices have not been aimed at becoming rich. For example, I earned my doctorate in philosophy and then became a professor. I could have picked a much more lucrative degree and profession. While I write books, these are in philosophy and gaming rather than more profitable areas. If I wanted to be one of the rich, I would have been going about it in an ineffective way. But it could be contended that while I want to be rich, I lack the ability and have continuously been stuck in an “inferior” life.

The easy and obvious reply is that since I had the ability to complete a doctorate, I also had the ability to complete a far more lucrative advanced degree. Given that I was a college athlete and still train regularly despite numerous injuries, I can stick to challenging tasks and persist through difficulties. While it would be immodest to go through my strengths and accomplishments, suffice it to say that I could have certainly succeeded in a career far more profitable than being a professor if I desired to be rich. I am not saying that I would be rich; simply that if I wanted to be rich, I could have put myself on a path far more likely to achieve that result than a career in philosophy. If being rich was my goal, I would have tried. If I had turned out to be a bitter failure, then a charge of envy might have some merit. But to say that I am envious of what I never aimed for is a bizarre claim. One could claim some secret knowledge of my motives, but that would be unsupported speculation.

I do not unjustly resent the rich who have earned their wealth, such as by working hard in a demanding job. I do have a negative view of those who have acquired wealth unjustly, who use their wealth to the detriment of others, or who have squandered the opportunities their wealth afforded them. I do believe that the current system is unfair, but I do not feel indignation that I have been treated unfairly. Rather, I feel moral anger at the harmful aspects of the economic system we have created and perpetuate.

I do think that the rich should have less wealth, that they should contribute more and do better with their wealth. But I also think that everyone should use more of their resources to do good, me included. Like most people, I do not always live up to my moral ideals. But I do not want the rich to be stripped of their wealth and left poor. Being poor in America is a terrible thing, so I do not want anyone to be poor. 

As such, while I do have a negative view of some rich people and I have serious criticisms of the current economic system, I do not envy the rich. And even if I did, this would be irrelevant to any criticism I make.

The question “why lie if the truth would suffice” can be interpreted in at least three ways. One is as an inquiry about the motivation and asks for an explanation. A second is as an inquiry about weighing the advantages and disadvantages of lying. The third way is as a rhetorical question that states, under the guise of inquiry, that one should not lie if the truth would suffice.

Since a general discussion of this question would be rather abstract, I will focus on a specific example and use it as the basis for the discussion. Readers should, of course, construct their own examples using their favorite lie from those they disagree with. I will use Trump’s response to the Democrats’ Green New Deal as my example. While this is something of a flashback to his first term, Trump recently signed an executive order targeting the old Green New Deal.

In 2019 the Democrats proposed a Green New Deal aimed at addressing climate change and economic issues. As with any proposal, rational criticisms can be raised against it. In his first term, Trump claimed the Democrats intend “to permanently eliminate all Planes, Cars, Cows, Oil, Gas & the Military – even if no other country would do the same.”  While there are some Democrats who would do these things, the Democratic Party favors none of that. Looked at rationally, it seems to make no sense to lie about the Green New Deal. If it is bad enough to reject on its own defects, lies would not be needed. If one must lie to attack it, this suggests a lack of arguments against it. To use an analogy, if a prosecutor lies to convict a person, this suggests they have no case—otherwise they would rely on evidence. So, why would Trump lie if the truth would suffice to show the Green New Deal is a terrible plan?

The question of why Trump (or anyone else) lies when the truth would suffice is a matter for psychology, not philosophy. So, I will leave that question to others. This leaves me with the question about the advantages and disadvantages of lying along with the rhetorical question.

The lie about the Green New Deal is a good example of hyperbole and a straw man. Trump himself claims to use the tactic of “truthful hyperbole”. Hyperbole is a rhetorical device in which one makes use of extravagant overstatement, such as claiming that the Democrats plan to eliminate cows. The reason hyperbole is not just called lying is because it is a specific type of untruth and must have a foundation in truth. Hyperbole involves inflating or exaggerating something true rather than a complete fiction. The Green New Deal is aimed at making America carbon neutral and this would impact cars, cows, planes, oil, gas and the military. The extravagant exaggeration is that the proposal would eliminate all of them permanently. This would be as if someone proposed cutting back on dessert at family dinners and they were accused of wanting to eliminate meals permanently. Since hyperbole is rhetoric without logic, it has no logical force and does not prove (or disprove) anything. But it can have considerable psychological force in influencing people to believe a claim.

Hyperbole is often used in conjunction with the Straw Man fallacy. This fallacy is committed when a person’s actual position is ignored and a distorted, exaggerated or misrepresented version of that position is criticized in its place. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A has position X.

Premise 2: Person B presents position Y (a distorted version of X).

Premise 3: Person B attacks position Y.

Conclusion: Therefore, X is false or bad.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a position is not a criticism of the actual position. One might as well expect an attack on a poor drawing of a person to hurt the person.

Like hyperbole, the Straw Man fallacy is not based on a simple lie: it involves an exaggeration or distortion of something true. In the case of Trump and the Green New Deal, his “reasoning” is that the Green New Deal should be rejected because his hyperbolic straw man version of it is terrible. Since this is a fallacy, his “reasons” do not support his claim. It is, as always, important to note that Trump could be right about the Green New Deal being a bad idea, but not for the “reasons” he gives. To infer that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy).

While hyperbole has no logical force and a straw man is a fallacy, there are advantages to using them. One advantage is that they are much easier than coming up with good reasons. Criticizing the Green New Deal for what it is requires knowing what it is and considering its possible defects, which takes time and effort. Tweeting out a straw man takes seconds.

The second advantage is that hyperbole and straw men often work, often much better than the truth. In the case of complex matters, people rarely do their homework and do not know that a straw man is a straw man. I have interacted with people who honestly think Democrats plan to eliminate planes and cars. Since this is a bad idea, they reject it, not realizing that it is not the Green New Deal. An obvious defense against hyperbole and straw men is to know the truth. While this can take time and effort, someone who has the time to post on Facebook or Twitter has the time to do basic fact checking. If not, their ignorance should lead them to remain silent, though they have the right to express their unsupported views.

As far as working better than the truth goes, hyperbole and straw men appeal to the target’s fears, anger, or hope. The target is thus motivated to believe in ways that the truth cannot match. People generally find rational argumentation dull and unmoving, especially about complicated issues. If Trump honestly presented real problems with the Green New Deal, complete with supporting data and graphs, he would bore most people and lose his audience. By using a straw man, he better achieves his goal. This allows for a pragmatic argument for lying when the truth will not suffice.

If telling the truth would not suffice to convince people, then there is the pragmatic argument that if lying would do the job, then it should be used. For example, if going into an honest assessment of the Green New Deal would bore people and lying would get the job done, then Trump should lie if he wants to achieve his goal. This does, however, raise moral concerns.

If the reason the truth would not suffice is because it does not logically support the claim, then it would be immoral to lie. To use a non-political example, if you would not invest in my new fake company iScam if you knew it was a scam, getting you to invest in it by lying would be wrong. So, if the Green New Deal would not be refuted by the truth, Trump’s lies about it would be immoral.

But what about cases in which the truth would logically support a claim, yet would not persuade people to accept it? Going back to the Green New Deal example, suppose it is terrible, but explaining its defects would bore people and they would remain unpersuaded, while a straw man version of the Green New Deal would persuade many people to reject this hypothetically terrible plan. From a utilitarian standpoint, the lie could be morally justified: if the good of lying outweighed the harm, then it would be the right thing to do. To use an analogy, suppose you were trying to convince a friend not to start a dangerous diet. You have scientific data and good arguments, but you know your friend is bored by data and is largely immune to logic. So, telling them the truth would mean that they would go on the diet and harm themselves. But if you exaggerate the harm dramatically, your friend will abandon the diet. In such a case, the straw man argument would seem to be morally justified, as you are using it to protect your friend.

While this might seem to justify the general use of hyperbole and the straw man, it only justifies their use when the truth does suffice logically but does not suffice in terms of persuasive power. That is, the fallacy is only justified as a persuasive device when there are non-fallacious arguments that would properly establish the same conclusion.

Reasoning is like a chainsaw: useful when used properly, but capable of creating a bloody mess when used badly. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.
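
The role of sample size can be made precise. The margin of error for a sample proportion shrinks roughly as one over the square root of the sample size, so a tiny sample supports almost nothing. Here is a minimal sketch, assuming a simple random sample (which a biased sample, by definition, is not):

  # Why sample size matters: the approximate 95% margin of error for a
  # proportion estimated from n observations (worst case, p = 0.5).
  import math

  def margin_of_error(n, p=0.5, z=1.96):
      return z * math.sqrt(p * (1 - p) / n)

  for n in (1, 10, 100, 1000):
      print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
  # n = 1 yields a margin near +/- 98%: one bad driver from Ohio tells
  # us essentially nothing about Ohio drivers in general.

Note that this math assumes a representative sample; no amount of additional data fixes a biased one, which is the point of the second fallacy.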

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is not new: a fearful generalization is a hasty generalization or a biased generalization. On the other hand, the hallmark of a fearful generalization (that it is fueled by fear) makes it worth considering, especially since addressing the fueling fear seems to be key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commits crimes needs to include the entire population, not a sample consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun-owners commit crimes needs to include the general population, not just those arrested.

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased generalization. Second, concluding that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy). So, a biased or hasty generalization could have a true conclusion; it just would not be supported by the generalization. Third, a true conclusion does not mean that a fallacy is not a fallacy: the problem lies in the logic, not in the truth of the conclusion. For example, if I see one red squirrel in a forest and infer that all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem easy to neutralize: just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear, and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be to deal with the underlying fear, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to try to at least briefly defuse the impact of fear is the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes, or at least does not fear. This works best when the first premise remains true when the swap is made; otherwise the person can obviously reject the swap. This might have some small impact on the emotional level that will help a person work through the fear—assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy,” Chad could try swapping in one of his groups and seeing if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful tool for those who want to check their fears. Self-application involves the same basic process: swapping in your own groups, or groups you like, in place of what you fear to see if your reasoning is good or bad.

 

110 Fallacies on Amazon

Also Known as: Appeal to Anecdote

Description:

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is often considered a variation of Hasty Generalization. It has the following forms:

 

Form One

Premise 1:  Anecdote A is told about a member M (or small number of members) of Population P.

Premise 2: Anecdote A says that M is (or is not) C.

Conclusion: Therefore, C is (or is not) true of Population P.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2:  Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is false.

 

This fallacy is like Hasty Generalization in that a similar error is committed, namely drawing an inference based on a sample that is inadequate in size. One difference between Hasty Generalization and Anecdotal Evidence is that the fallacy of Anecdotal Evidence involves using a story (anecdote) as the sample. The more definitive distinction is that the second form of Anecdotal Evidence involves a rejection of statistical evidence for a general claim.

People often fall victim to this fallacy because stories and anecdotes usually have more psychological influence than statistical data. This persuasive force can cause people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence. People often accept this fallacy because they would prefer that what is true in the anecdote be true for the whole population (a form of Wishful Thinking). For example, a person who smokes might try to convince herself that smoking will not hurt her because her Aunt Jane smoked 52 cigars a day and lived, cancer free, until she was 95.

People also fall for this fallacy when the anecdote matches their biases (positive or negative) or prejudices. For example, a person who fears and dislikes immigrants might believe that immigrants are likely to commit crimes because of an anecdote they hear about an immigrant who committed a crime. A person who has a very favorable view of immigrants might be swayed by an anecdote about an exceptional immigrant and infer that most immigrants will be exceptional.

As the example suggests, this sort of poor reasoning can be used in the context of causal reasoning. In addition to cases involving individual causation (such as Jane not getting cancer), this poor reasoning is commonly applied to causal claims about populations. What typically occurs is that a person rejects a general causal claim, such as “smoking causes cancer,” in favor of an anecdote in which a person smoked but did not get cancer. While this anecdote does show that not everyone who smokes gets cancer, it does not prove that smoking does not cause cancer.

This is because establishing that C is a causal factor for effect E in population P is a matter of showing that there would be more cases of E if all members of P were exposed to C than if none were. Showing that there are some anecdotal cases in which members of P were exposed to C but did not show effect E does not show that C does not cause E. In fact, such exceptions are exactly what you should expect to see in most cases of population-level causation.
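To make the point concrete, here is a minimal simulation, using purely invented numbers, of a causal factor that raises the rate of an effect in a population while still leaving most exposed members unaffected:

```python
# A toy simulation of population-level causation. The base rate (5%) and
# the added risk from exposure (15%) are invented for illustration.
import random

random.seed(1)

def count_effect(exposed, n=100_000):
    """Count how many of n members show effect E, given a hypothetical
    base rate plus an assumed extra risk when exposed to C."""
    risk = 0.05 + (0.15 if exposed else 0.0)
    return sum(random.random() < risk for _ in range(n))

print("Cases of E if all of P is exposed to C:", count_effect(True))    # ~20,000
print("Cases of E if none of P is exposed to C:", count_effect(False))  # ~5,000
```

Even with this genuine causal connection, about 80% of the exposed population never shows the effect, so anecdotes about unaffected members are expected rather than disconfirming.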

That said, the exceptions given in the anecdotes can provide a reason to be suspicious of a claimed causal connection, but this suspicion must be proportional to the evidence provided by the anecdote. For example, the fact that Alan Magee survived a fall of 20,000 feet from his B-17 bomber in WWII does show that a human can survive such a fall. However, it does not serve to disprove the general claim that falls from such great heights are usually fatal.

 Anecdotes can also provide the basis for additional research. For example, the fact that some people can be exposed to certain pathogens without getting sick suggests that they would be worth examining to see how their immunity works and whether this could benefit the general population. As another example, the fact that people do sometimes survive falls from aircraft does provide a reason for investigating how this works and how this information might be useful.

 

Defense: The defense against the first version of this fallacy is to keep in mind that an anecdote does not prove or disprove a general claim. It is especially important to be on guard against anecdotes that have strong persuasive force, such as ones that are very vivid or that line up nicely with your biases.

For the second version, the person committing it will ironically raise the red flag for this fallacy themselves. They will admit that they are rejecting statistical evidence in favor of an anecdote. In effect, they are telling you to accept the one piece of evidence they like over the weight of evidence they dislike. To avoid inflicting this fallacy on yourself, be on guard against the tendency to confuse the psychological force of an anecdote with its logical force.

 

Example #1

Jane: “Uncle Bill smoked a pack a day since he was 11 and he lived to be 90. So, all that science and medical talk about smoking being bad is just a bunch of garbage.”

Example #2

John: “Oh no! That woman is bringing a pit bull into the dog park! Everyone get their dogs and run away!”

Sally: “Oh, don’t worry. I know that people think that pit bulls are aggressive and that there are all these statistics about them being dangerous dogs.”

John: “Yeah, that is why I’m leaving before your monster kills my dog.”

Sally: “But look at how sweet my pit bull Lady Buttercup is—she has never hurt anyone. So, all that bull about pit bulls being aggressive is just that: bull.”

Example #3

Bill: “Hey Sally, you look a bit under the weather.”

Sally: “Yeah, I think I’m getting a cold. In the summer. In Florida. This sucks.”

Bill: “My dad and I almost never get colds. You should do what we do.”

Sally: “What is that?”

Bill: “Drink red wine with every meal. My dad said that is the secret to avoiding colds. When I got old enough to buy wine, I started doing it.”

Sally: “Every meal? Even breakfast?”

Bill: “Yes.”

Sally: “Red wine goes with donuts?”

Bill: “It pairs perfectly.”

Ted: “That is baloney. I know a guy who did that and he had colds all the time. Now, this other guy told me that having a slice of cheese with every meal keeps the colds away. I never saw him so much as sniffle.”

Sally: “Why not just have wine and cheese every meal?”

Example #4

Fred: “You are wasting time studying.”

George: “What? Why aren’t you studying? The test is going to be hard.”

Fred: “No need.”

George: “You’re not going to cheat, are you?”

Fred: “No, of course not! But I heard about this woman, Keisha. She aced the last test. She went to the movies and forgot to study. So, I’m going with the Keisha Method—I just need to pick a movie and my A is assured.”

Example #5

Tucker: “Did you hear that story about the immigrant who killed that student?”

Sally: “I did. Terrible.”

Tucker: “So, I bet you’ll change your stance on immigration. After all, they are coming here to commit crimes and endangering Americans.”

Sally: “The statistics show otherwise.”

Tucker: “That is your opinion. That murder shows otherwise.”

Example #6

Sally: “Did you hear that story about the immigrant who saved ten Americans and is now attending medical school and law school at the same time?”

Tucker: “I did. Impressive.”

Sally: “So, I bet you’ll change your stance on immigration. After all, they are amazing people who will do great things.”

 

Fallacious Appeal to Authority

This is from my book 110 Fallacies.

Also Known as: Misuse of Authority, Irrelevant Authority, Questionable Authority, Inappropriate Authority, Ad Verecundiam

Description:

The fallacious Appeal to Authority is a fallacy of standards rather than a structural fallacy. A fallacious Appeal to Authority has the same form as a strong Argument from Authority. As such, determining when this fallacy occurs is a matter of assessing an Argument from Authority to see if it meets the standards presented below. The general form of the reasoning is as follows:

 

Premise 1: Person A is (claimed to be) an authority on subject S.

Premise 2: Person A makes claim C about subject S.

Conclusion: Therefore, C is true.

 

This reasoning is fallacious when person A is not qualified to make reliable claims in subject S. In such cases the reasoning is flawed because the fact that an unqualified person makes a claim does not provide any justification for the claim. The claim could be true, but the fact that an unqualified person made the claim does not provide any rational reason to accept the claim as true.

When a person falls prey to this fallacy, they are accepting a claim as true without having adequate evidence. More specifically, the person is accepting the claim because they erroneously believe the person making the claim is an expert. Since people tend to believe those they take to be authorities, this fallacy is a common one.

Since this sort of reasoning is fallacious only when the person is not a legitimate authority in a particular context, it is necessary to provide the standards/criteria for assessing the strength of this argument. The following standards provide a guide to such an assessment (a rough checklist sketch follows the standards):

 

  1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim are not well supported. In contrast, claims made by a person with the needed expertise will be supported by the person’s competence in the area.

Determining whether a person has the needed degree of expertise can be very difficult. In academic fields (such as philosophy, engineering, and chemistry), a person’s formal education, academic performance, publications, membership in professional societies, papers presented, awards won and so forth can all be reliable indicators of expertise. Outside of academic fields, other standards will apply. For example, having sufficient expertise to make a reliable claim about how to tie a shoelace only requires the ability to tie the shoelace. Being an expert does not always require having a university degree. Many people have high degrees of expertise in sophisticated subjects without having ever attended a university. Further, it should not be assumed that a person with a degree must be an expert.

What is required to be an expert is often a matter of debate. For example, some people claim expertise because of a divine inspiration or a special gift. The followers of such people accept such credentials as establishing the person’s expertise while others often see these self-proclaimed experts as deluded or even as charlatans. In other situations, people debate rationally over what sort of education and experience is needed to be an expert. Thus, what one person may take to be a fallacious appeal another person might take to be a well-supported line of reasoning.

2. The claim being made by the person is within their area(s) of expertise.

A person making a claim outside of their area(s) of expertise should not be considered an expert in that area. So, that claim is not backed by expertise and should not be accepted based on an Appeal to Authority.

Because of the vast scope of human knowledge, it is impossible for a person to be an expert on everything or even many things. So, an expert will only be an expert in certain subject areas. In most other areas they will have little or no expertise. Thus, it is important to determine what subject a claim falls under.

Expertise in one area does not automatically confer expertise in another area, even if they are related. For example, being an expert physicist does not make a person an expert on morality or politics. Unfortunately, this is often overlooked or intentionally ignored. In fact, advertising often rests on a violation of this condition. Famous actors and sports heroes often endorse products that they are not qualified to assess. For example, a person may be a famous actor, but that does not automatically make them an expert on cars or reverse mortgages.

3. There is an adequate degree of agreement among the other experts in the subject in question.

If there is significant legitimate dispute among qualified experts, then it will be fallacious to make an Appeal to Authority using the disputing experts. This is because for almost any claim being made by one expert there will be a counterclaim made by another expert, so an Appeal to Authority would tend to be futile. In such cases, the matter must be settled by considering the issues under dispute directly. Since all sides in such a dispute can invoke qualified experts, the dispute cannot be rationally settled by an Argument from Authority.

There are many fields in which there is significant reasonable dispute. Economics, ethics, and law are all good examples of such disputed fields. For example, trying to settle an ethical issue by appealing to the expertise of one ethicist can easily be countered by pointing to an equally qualified expert who disagrees.

No field has complete agreement, and some degree of dispute is acceptable. How much is acceptable is, of course, a matter of debate. Even a field with a great deal of dispute might contain areas of significant agreement. In such cases, an Argument from Authority could be a good argument. For example, while philosophers disagree on most things, there is a consensus among the experts about basic logic. As such, appealing to the authority of an expert on logic in a matter of logic would generally be a strong Argument from Authority.

When it comes to claims that most of the qualified experts agree on, the rational thing for a non-expert to do is to accept that the claim is probably true. After all, a non-expert is not qualified to settle the question of which experts are correct, and the majority of qualified experts is more likely to be right than the numerical minority. Non-experts often commit this fallacy because they wrongly think that because they prefer the claim of the minority of experts, those experts must be right.

4. The person in question is not significantly biased.

If an expert is significantly biased, then the claims they make will be less credible. So, an Argument from Authority based on a biased expert will tend to be fallacious. This is because the evidence will usually not justify accepting the claim.

Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of their claims, then an Argument from Authority based on that person is likely to be fallacious. Even if the claim is true, the fact that the expert is biased weakens the argument. This is because there would be reason to believe that the expert might not be making the claim because they have carefully considered it using their expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice.

No person is completely objective. At the very least, a person will be favorable towards their own views (otherwise they would not hold them). Because of this, some degree of bias must be accepted, provided it is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that doctors who were paid by tobacco companies to research the effects of smoking would be biased while other people might believe (or claim) that they would be able to remain objective.

5. The area of expertise is a legitimate area or discipline.

Certain areas in which a person may claim expertise may have no legitimacy or validity as areas of knowledge. Obviously, claims made in such areas tend to lack credibility.

What counts as a legitimate area of expertise can be difficult to determine. However, there are cases which are clear cut. For example, if a person claimed to be an expert at something they called “chromabullet therapy” and asserted that firing painted rifle bullets at a person would cure cancer, it would be unreasonable to accept their claim based on their “expertise.” After all, their expertise is in an area which has no legitimate content. The general idea is that to be a legitimate expert a person must have mastery over a real field or area of knowledge.

As noted above, determining the legitimacy of a field can often be difficult. In European history, various scientists had to struggle with the Church and established traditions to establish the validity of their disciplines. For example, experts on evolution faced an uphill battle in getting the legitimacy of their area accepted.

A modern example involves psychic phenomena. Some people claim that they are certified “master psychics” and that they are experts in the field. Other people contend that their claims of being certified “master psychics” are simply absurd since there is no real content to such an area of expertise. If these people are right, then anyone who accepts the claims of these “master psychics” is the victim of a fallacious Appeal to Authority.

6. The authority in question must be identified.

A common variation of the typical Appeal to Authority fallacy is an Appeal to an Unnamed Authority, also known as an Appeal to an Unidentified Authority.

This fallacy is committed when a person asserts that a claim is true because an expert or authority makes the claim, but the person does not identify the expert. Since the expert is not identified, there is no way to tell if the person is an expert. Unless the person is identified and their expertise established, there is no reason to accept the claim on this basis.

This sort of reasoning is not unusual. Typically, the person making the argument will say things like “I have a book that says…”, or “they say…”, or “the experts say…”, or “scientists believe that…”, or “I read in the paper…”, or “I saw on TV…”, or some similar statement. In such cases the person is often hoping that the listener(s) will simply accept the unidentified source as a legitimate authority and believe the claim being made. If a person accepts the claim simply because they accept the unidentified source as an expert (without good reason to do so), they have fallen prey to this fallacy.
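Since these six standards work as a checklist, they can be sketched as one. The Python sketch below uses invented field names and a toy example; whether each standard is actually met remains a judgment call that no code can make for you.

```python
# A minimal checklist sketch of the six standards for an Argument from
# Authority. The field names and example values are invented.
from dataclasses import dataclass

@dataclass
class AuthorityAppeal:
    sufficient_expertise: bool       # 1. enough expertise in the subject
    claim_within_expertise: bool     # 2. claim falls within their area(s)
    adequate_expert_agreement: bool  # 3. experts adequately agree
    not_significantly_biased: bool   # 4. no significant bias
    legitimate_discipline: bool      # 5. the field itself is legitimate
    authority_identified: bool       # 6. the authority is named

    def failed_standards(self):
        """Return the names of any standards the appeal fails."""
        return [name for name, met in vars(self).items() if not met]

# Toy example: a physicist quoted on an ethical question.
appeal = AuthorityAppeal(
    sufficient_expertise=True,
    claim_within_expertise=False,
    adequate_expert_agreement=True,
    not_significantly_biased=True,
    legitimate_discipline=True,
    authority_identified=True,
)
print(appeal.failed_standards() or "Plausibly a strong Argument from Authority")
```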

 

Non-Fallacious Arguments from Authority

Not all Arguments from Authority are fallacious. This is fortunate since people must rely on experts. No one person can be an expert on everything, and people do not have the time or ability to investigate every single claim themselves.

In some cases, Arguments from Authority will be good arguments. For example, when a person goes to a skilled doctor and the doctor tells them that they have a cold, the patient has good reason to accept the doctor’s conclusion. As another example, if a person’s computer is acting odd and their friend, who is a computer expert, tells them it is probably the hard drive, then they have good reason to accept this claim.

What distinguishes a fallacious Appeal to Authority from a good Argument from Authority is that the argument effectively meets the six conditions discussed above.

In a good Argument from Authority, there is reason to believe the claim because the expert says the claim is true. This is because a qualified expert is more likely to be right than wrong when making claims within their area of expertise. In a sense, the claim is being accepted because it is reasonable to believe that the expert has tested the claim and found it to be reliable. So, if the expert has found it to be reliable, then it is reasonable to accept it as being true. Thus, the listener is accepting a claim based on the testimony of the expert.

It should be noted that even a good Argument from Authority is not an exceptionally strong argument. After all, a claim is accepted as true because a credible person says it is true. Arguments that deal directly with evidence relating to the claim itself will tend to be stronger.

 

Defense: The main defense against this fallacy is to apply the standards of the Argument from Authority when considering any appeal to authority important enough to be worth assessing. You should especially be on guard when you agree with the (alleged) expert and want to believe they are correct. While there are legitimate uses for claims by anonymous experts, the credibility of these claims rests on the expertise of the person reporting the claim. This is because the evidence for such a claim is the credibility and expertise of the person reporting it. That is, you are trusting that they are honestly reporting the claim and are qualified to assess whether the anonymous expert is credible.

Example #1:

Bill: “I believe that abortion is morally acceptable. After all, a woman should have a right to her own body.”

Jane: “I disagree completely. Dr. Johan Skarn says that abortion is always morally wrong, regardless of the situation. He must be right; after all, he is a respected expert in his field.”

Bill: “I’ve never heard of Dr. Skarn. Who is he?”

Jane: “He’s that guy that won the Nobel Prize in physics for his work on cold fusion.”

Bill: “I see. Does he have any expertise in morality or ethics?”

Jane: “I don’t know. But he’s a world-famous expert, so I believe him.”

Example #2:

Kintaro: “I don’t see how you can consider Stalin to be a great leader. He killed millions of his own people, he crippled the Soviet economy, kept most of the people in fear and laid the foundations for the violence that is occurring in much of Eastern Europe.”

Dave: “Yeah, well you say that. However, I have a book at home that says that Stalin was acting in the best interest of the people. The millions that were killed were vicious enemies of the state and they had to be killed to protect the rest of the peaceful citizens. This book lays it all out, so it must be true.”

Example #3:

Actor: “I’m not a doctor, but I play one on the hit series ‘Bimbos and Studmuffins in the OR.’ You can take it from me that when you need a fast acting, effective and safe pain killer there is nothing better than MorphiDope 2000. That is my considered medical opinion.”

Example #4:

Sasha: “I played the lottery today and I know I am going to win something.”

Siphwe: “What did you do, rig the outcome?”

Sasha: “No, silly. I called my Super Psychic Buddy at the 1-900-MindPower number. After consulting his magic Californian Tarot deck, he told me my lucky numbers.”

Siphwe: “And you believed him?”

Sasha: “Certainly, he is a certified Californian Master-Mind Psychic. That is why I believe what he has to say. I mean, like, who else would know what my lucky numbers are?”

Example #5:

Sam: “I’m going to get the Shingles vaccine based on my doctor’s advice.”

Ted: “Well, I saw this guy on YouTube who says that the vaccine has microchips in it. And that it causes autism.”

Sam: “Are they a doctor or a scientist?”

Ted: “Well, I think he was a doctor once. He said something about getting his medical license revoked because They are out to get him and want to silence him.”

Sam: “Does he have any evidence for these claims?”

Ted: “Look, you can believe your doctor if you want, but don’t come crying to me when the microchips take over your brain and you catch autism.”

Sam: “You don’t catch autism.”

Ted: “Whatever.”