Negativity bias is the tendency to give more weight to the negative than to the positive. For example, people tend to weigh wrongs done to them more heavily than the good deeds done for them. As another example, people tend to be more swayed by negative political advertisements than by positive ones. This bias can also have an impact on education.

Some years ago, one of my colleagues always asked his logic students if they planned to attend law school. When he noticed a dramatic decline in logic students planning on law school, curiosity led him to investigate. He found that logic had been switched from being a requirement for pre-law to merely recommended. Back then, my colleague said it seemed irrational for students who planned on taking the LSAT to avoid the logic class, given that the LSAT was largely a logic test and that law school requires logical reasoning. From his philosophical soapbox, he said that students prefer to avoid the useful when it is not required and only grudgingly take what is required. We discussed how this relates to the negativity bias. A student who did not take the logic class when it was required would be punished by being unable to graduate. When the class became optional, there remained only the positive benefits of taking the class. Since people weigh punishments more heavily than rewards, this behavior made sense—but still seemed irrational, especially since many of the students who skipped the logic class ended up paying for LSAT preparation classes to spackle over their lack of logic skills.

Over the years, I have seen a similar sort of thing in my own classes. My university had a policy that allowed us to lower a student’s grade if they missed too many classes. While attendance has always been required in my classes, I have never inflicted a punishment for missing class. Not surprisingly, when the students figure this out, attendance plummets. Before I started using Blackboard and Canvas for coursework, attendance would increase dramatically on test days. Now that all work can be done on Canvas (a relic of COVID), attendance remains consistently low. Oddly, students often say my classes are interesting and useful. But, since there is no direct and immediate punishment for not attending (just a delayed “punishment” in terms of lower grades and a lack of learning), many students are not motivated to attend class.

I do consider that I might be a bad professor or that most students see philosophy courses as useless or boring. However, my evaluations are consistently good, former students have returned to say good things about me and my classes, and so on. That said, perhaps I am deluding myself and being humored. In any case, it is easy enough to draw an analogy to exercise: exercise does not provide immediate rewards, and there is no immediate punishment for not staying fit—just a loss of benefits. Most people elect to avoid exercise. This and similar things show that people often avoid that which is difficult now but yields lasting benefits later.

I have, of course, often considered adopting the punishment model for my classes. However, I have resisted this for a variety of reasons. The first is my personality: I am inclined to offer benefits rather than punishments. This is an obvious flaw given the general psychology of people. The second is that I believe in free choice: like God, I think people should be free to make bad choices and not be coerced into doing what is right. It must be a free choice. Naturally, choosing poorly brings its own punishment—albeit later. The third is the hassle of dealing with attendance: the paperwork, having to handle excuses, hearing poorly crafted lies, and so on. The fourth is that classes are generally better for the good students when people who do not want to be there elect to do something else. The fifth is my moral and religious concern for my students: if they are not punished for missing classes, they have no reason to lie to me about what they missed. Finally, COVID changed things, and if I punished students for not attending, too many students would end up failing simply because of not attending enough.

I did consider adopting the punishment model for three reasons. One is that if students are compelled to attend, they might learn something, and I do worry that by not compelling them, I am doing them a disservice. The second is that this model is a lesson in what the workplace will be like for most of the students—so habituating them to it (or, rather, maintaining the habituation they should have acquired in K-12) could be valuable. After all, they will probably need to endure awful jobs until they retire or die. The third is that perhaps people must be compelled by punishment—this is, of course, the model put forth by thinkers like Aristotle and Hobbes. But I will almost certainly stick with my flawed approach until I retire.

As a runner, I have been accused of being a masochist or at least possessing masochistic tendencies. As I routinely subject myself to pain and my previous essay about running and freedom was pain-focused, this is hardly surprising. Other runners, especially ultramarathoners, are often accused of masochism as well.

In some cases, the accusation is not serious. Usually, people just observe that runners do things that both hurt and make little sense to nonrunners. However, some see runners as masochists in a strict sense. Being a runner and a philosopher, I find this interesting, especially when I am the one accused of being a masochist.

Some do accuse runners of being masochists with some seriousness. While some say runners are masochists in jest or with some respect for the toughness of runners, it is sometimes presented as a genuine accusation: that there is something wrong with runners and that running is deviant behavior. While runners do like to joke about being odd and different, we probably prefer not to be seen as mentally ill deviants. After all, that would indicate that we are doing something wrong—which I believe is (usually) not the case. Based on my experience and on meeting thousands of runners, I think that runners are generally not masochists.

Given that runners engage in painful activities (such as speed work and racing marathons) and that they often run despite injuries, it is tempting to believe they are masochists and that I am in denial about our collective deviance.

While this does have some appeal, it rests on a confusion about masochism in terms of means and ends. For the masochist, pain is a means to the end of pleasure. The masochist does not seek pain for the sake of pain, but seeks pain to achieve pleasure. However, there is a special connection between the means of pain and the end of pleasure: for the masochist, the pleasure they desire is that which is generated specifically by pain. While a masochist can get pleasure by other means (such as drugs, cake or drug cakes), it is the desire for pleasure caused by pain that defines the masochist. So, the pain is not optional—mere pleasure is not the end, but pleasure caused by pain.

This is different from those who endure pain as part of achieving an end, be that end pleasure or some other end. For those who endure pain to achieve an end, the pain can be part of the means or, more accurately, an effect of the means. It is valuing the end that causes the person to endure the pain to achieve the end—the pain is not sought out as being the “proper cause” of the end. In the case of the masochist, the pain is not endured to achieve an end—it is the “proper cause” of the end, which is pleasure.

In the case of running, runners usually see pain as something to be endured as part of the process of achieving their desired ends, such as fitness or victory. However, runners usually prefer to avoid pain when they can. For example, while I endure pain to run a race, I prefer running with as little pain as possible. This is like a person putting up with the unpleasant aspects of a job to make money—she would prefer as little unpleasantness as possible. After all, she is in it for the money, not the unpleasant aspects of work. Likewise, a runner is typically running for some end (or ends) other than hurting herself. It just so happens that achieving that end (or ends) requires doing things that cause pain.

In my essay on running and freedom, I described how I endured pain while running the Tallahassee Half Marathon. If I were a masochist, experiencing pleasure by means of that pain would have been my primary end. However, my primary end was to run the half marathon well and the pain was an obstacle to that end. As such, I would have been glad to have had a painless start and I was pleased when the pain diminished. I enjoy the running and I do enjoy overcoming pain, but I do not enjoy the pain itself—hence the aspirin in my medicine cabinet.

While I cannot speak for all runners, my experience is that runners do not run for pain; they run despite the pain. Thus, we are not masochists. We might, however, show some poor judgment when it comes to pain and injury—but that is another matter. Still, I would suggest to any masochists that they take up running, as running is really good for a person.

In the previous essay I discussed how to assess experts. While people argue based on the views of experts, they also make arguments based on studies (and experiments). While using a study in an argument is reasonable, making a good argument based on a study requires being able to rationally assess studies.

Not surprisingly, people often select the studies they believe based on fallacious reasoning. One erroneous approach is to favor a study simply because it agrees with what one already believes. The mistake is that inferring a study is right because I believe its results gets things backwards. It should first be established that the study is plausible; then it is reasonable for me to believe it.

Another erroneous approach is to accept a study as correct because one wants it to be so. For example, a liberal might accept a study that claims to prove that liberals are smarter and more generous than conservatives. This sort of “reasoning” is the classic fallacy of wishful thinking. Wishing that something is true (or false) does not prove that the claim is true (or false).

Sometimes people attempt DIY “studies” by appealing to their own anecdotes. For example, someone might claim that poor people are lazy based on an experience with some poor person. While anecdotes can be interesting, taking an anecdote as evidence is to fall victim to the classic fallacy of anecdotal evidence.

While fully assessing a study requires expertise in the relevant field, non-experts can still make rational evaluations. The following provides a concise guide to evaluating studies and experiments.

In normal talk, people often jam together studies and experiments. While this is fine for informal purposes, the distinction is important. A properly done controlled cause-to-effect experiment is the gold standard of research, although it is not always a viable option.

The objective of such an experiment is to determine the effect of a cause and this is done by the following general method. First, a random sample is selected from the population. Second, the sample is split into two groups: the experimental group and the control group. The two groups need to be as alike as possible and the more alike the two groups, the better the experiment.

The experimental group is then exposed to the causal agent while the control group is not. Ideally, that should be the only difference between the groups. The experiment then runs its course, and the results are examined to determine if there is a statistically significant difference between the two. If there is such a difference, then it is reasonable to infer that the causal factor brought about the difference.
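
To make the general method concrete, here is a minimal sketch in Python of the sampling and assignment steps; the population stand-in and group sizes are arbitrary placeholders, not figures from any real experiment.

```python
# A minimal sketch of the setup: draw a random sample from the
# population, then split it at random into experimental and control
# groups. All sizes here are arbitrary placeholders.
import random

population = list(range(10_000))         # stand-in for the population
sample = random.sample(population, 100)  # first: select a random sample

random.shuffle(sample)                   # second: split into two groups
experimental = sample[:50]               # to be exposed to the causal agent
control = sample[50:]                    # not exposed; otherwise alike
```

Random assignment is what makes the two groups “as alike as possible” on average: any remaining differences between them are due to chance rather than systematic bias.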

Assuming that the experiment was conducted properly, whether the results are statistically significant depends on the size of the sample and the difference between the control group and experimental group. The idea is that experiments with smaller samples are less able to reliably capture effects. As such, when considering whether an experiment shows there is a causal connection it is important to know the size of the sample. After all, the difference between the experimental and control groups might be large but not significant. For example, imagine an experiment that involves 10 people. 5 people get a diet drug (experimental group) while 5 do not (control group). Suppose that those in the experimental group lose 30% more weight than those in the control group. While this might seem impressive, it is not statistically significant: the sample is so small, the difference could be due entirely to chance.
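
To illustrate the point about sample size, here is a rough sketch of the diet drug example using a standard significance test; the weight-loss numbers are invented, and the sketch assumes the SciPy library is available.

```python
# Invented numbers for the 5-vs-5 diet drug example: the experimental
# group loses about 35% more weight on average, yet the difference is
# not statistically significant.
from scipy import stats

control = [2.0, 3.5, 1.0, 4.0, 2.5]       # % body weight lost, no drug
experimental = [3.0, 4.5, 1.5, 5.0, 3.5]  # % body weight lost, with drug

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# With only 5 people per group, p comes out around 0.3—well above the
# usual 0.05 threshold—so the difference could easily be due to chance.
```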

While the experiment is the gold standard, there are cases in which it would be impractical, impossible or unethical to conduct an experiment. For example, exposing people to pathogens to test their effects would be immoral. In such cases studies are used rather than experiments.

One type of study is the Nonexperimental Cause-to-Effect Study. Like the experiment, it is intended to determine the effect of a cause. The difference between the experiment and this sort of study is that those conducting the study do not expose the experimental group to the suspected cause. Rather, those selected for the experimental group were exposed to the suspected cause by their own actions or by circumstances. For example, a study of this sort might include people who were exposed to pathogens by accident. A control group is then matched to the experimental group and, as with the experiment, the more alike the groups are, the better the study.

After the study has run its course, the results are compared to see if there is a statistically significant difference between the two groups. As with the experiment, even a large difference between the groups need not be statistically significant.

Since a study of this sort relies on using an experimental group that was exposed to the suspected cause by the actions of those in the group or by circumstances, the study is weaker (less reliable) than the cause-to-effect experiment. After all, in the study the researchers must take what they can find rather than conducting a proper experiment.

In some cases, what is known is the effect and what is not known is the cause. For example, we might know that there is a new illness but not know what is causing it. In these cases, a Nonexperimental Effect-to-Cause Study can be used to try to sort things out.

Since this is a study rather than an experiment, those in the experimental group were not exposed to the suspected cause by those conducting the study. In fact, the cause is not known, so those in the experimental group are those showing the effect.

Since this is an effect-to-cause study, the effect is known, but the cause must be determined. This is done by running the study and determining if there is a statistically significant suspected causal factor. If such a factor is found, then it can be tentatively taken as a causal factor—one that will probably require additional study. As with the experiment and the other study, the statistical significance of the results depends on the size of the study, which is why a study of adequate size is important.

Of the three methods, the effect-to-cause study is the weakest (least reliable). One reason for this is that those showing the effect might be different in important ways from the rest of the population. For example, a study that links cancer of the mouth to chewing tobacco would face the complication that those who chew tobacco are often ex-smokers. As such, smoking might be the actual cause rather than the chewing. Sorting this out would require a study of chewers who are not ex-smokers.

It is also worth referring to my essay on experts—when assessing a study, it is also important to consider the quality of the experts conducting the study. If those conducting the study are biased, lack expertise, and so on, then the study would be less credible. If those conducting it are proper experts, then that increases the credibility of the study.

As a final point, there is also a reasonable concern about psychological effects. If an experiment or study involves people, what people think can influence the results. For example, if an experiment is conducted and one group knows it is getting pain medicine, the people might be influenced to think they are feeling less pain. To counter this, the common approach is a blind study/experiment in which the participants do not know which group they are in, often by the use of placebos. For example, an experiment with pain medicine would include “sugar pills” for those in the control group.

Those conducting the experiment can also be subject to psychological influences—especially if they have a stake in the outcome. As such, there are studies/experiments in which those conducting the research do not know which group is which until the end. In some cases, neither the researchers nor those in the study/experiment know which group is which—this is a double-blind experiment/study.

Overall, here are some key questions to ask when assessing a study:

  • Was the study/experiment properly conducted?
  • Was the sample size large enough?
  • Were the results statistically significant?
  • Were those conducting the study/experiment experts?

The argument from authority is a weak but useful argument if used correctly. While people rarely follow the “strict” form of the argument, using it is to infer that a claim is true based on the (alleged) expertise of the person making the claim. Unlike deductive logic, the quality of an argument from authority does not depend on its logical structure but on the quality of the expert making the claim. As a practical matter, anyone could be used as an “expert” in an argument from authority. For example, someone might claim that secondhand smoke does not cause cancer because Michael Crichton claimed that it does not. At least he did before he died. As another example, someone might claim that astral projection/travel is real because Michael Crichton also claimed it can occur. Given that people often disagree, it is also quite common to find that alleged experts disagree with each other. For example, most medical experts claim that secondhand smoke does cause cancer. They do not, of course, claim that everyone who is exposed to it will get cancer or that no one who is not exposed to it will get cancer. This is a claim about causation in populations: if everyone were exposed to secondhand smoke, then there would be more cases of cancer than if no one were.

If you are an expert in a field, you can pick between the other experts by using your own expertise. For example, a medical doctor who is trying to decide whether to believe that secondhand smoke causes cancer can examine the literature and perhaps even conduct her own studies. Being an (actual) expert means being qualified to make an informed pick. An obvious problem is, of course, that experts pick different experts to accept as being correct.

The problem is far greater when it involves non-experts trying to pick between experts (and perhaps alleged experts). Being non-experts, they lack the expertise to make informed choices about which expert is most likely to be right. This raises the question of how to pick between experts when you are not an expert.

Not surprisingly, people tend to pick based on fallacious reasoning. One approach is to pick an expert because she agrees with what you already believe. This is not good reasoning: to infer that something is true simply because I believe it gets things backwards. It should be first established that a claim is probably true, then it should be believed (with appropriate reservations).

Another common approach is to believe an expert because they make a claim you want to be true. For example, a smoker might elect to believe someone who claims secondhand smoke does not cause cancer because he does not want to believe he might increase the chance that his kids will get cancer. This “reasoning” is the fallacy of wishful thinking. Obviously enough, wishing that something is true (or false) does not prove that the claim is true (or false).

People also pick their expert based on qualities they see as positive but that are irrelevant to the person’s (logical) credibility. Factors such as height, gender, appearance, age, personality, religion, political party, wealth, friendliness, backstory, courage, and so on can influence people emotionally, but are not relevant to assessing a person’s expertise. For example, a person giving you advice about warts might be very likeable but be completely wrong about how warts should be treated.

Fortunately, there are standards for recognizing an expert. They are as follows.

  1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim will not be well supported. In contrast, a person who has expertise in a subject is more likely to be right about claims in their field. The challenge is being able to judge whether a person has sufficient expertise. In general, the question is whether a person has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions.

  2. The claim being made by the person is within her area(s) of expertise.

If a person makes a claim about a subject outside of their expertise, then their expertise does not apply. Hence, the claim is not backed by the expertise and is not reliable. People often mistake expertise in one area (or being a celebrity) for expertise in another area. For example, an expert physicist’s claims about politics or ethics are not backed up by their expertise in physics. A person can be an expert in more than one field and there are cases where expertise in one field can be relevant in another. For example, a physicist who is also a professional ethicist would be an expert in both fields. As another example, a physicist’s expertise in nuclear weapons could be relevant to claims made in politics or ethics about nuclear weapons.

  3. The claims made by the expert are consistent with the views of the majority of qualified experts in the field.

This is a very important factor. As a rule, a claim that is held as correct by the majority of qualified experts in the field is the most plausible claim. The majority of experts are more likely to be right than those who disagree with the majority.

As no field has complete agreement, a degree of dispute is acceptable. How much is acceptable is, of course, a matter of serious debate.

It is also important to be aware that the majority could be wrong. That said, it is reasonable for non-experts to go with the majority opinion because non-experts are, by definition, not experts. If I am not an expert in a field, I would be hard pressed to justify picking the expert I happen to like or agree with against the view of the majority of experts.

  4. The person in question is not significantly biased.

Experts, being people, are subject to biases and prejudices. If someone is biased in a way that would affect the reliability of their claims, then their credibility is reduced. This is because there would be reason to believe that the claim is being made because of bias or prejudice. For example, an expert being paid by an oil company who claims that fossil fuels are not causing climate change would be biased. But a biased expert’s claims could still be correct.

No one is completely objective, and a person will favor their own views. Because of this, some degree of bias must be accepted, provided that the bias is not significant. What counts as a significant degree of bias is open to dispute and can vary from case to case. For example, most would suspect that researchers who receive funding from pharmaceutical companies will be biased, while others might claim that the money would not sway them if the drugs proved ineffective or harmful.

Disagreement over bias can itself be a significant dispute. For example, those who doubt that climate change is real often assert that the climate experts are biased. Questioning an expert based on potential bias is a legitimate approach—if there is adequate evidence of bias strong enough to unduly influence them. One way to look for bias is to consider whether the expert is interested or disinterested. Or, more metaphorically, to consider whether they have “skin in the game” and stand to gain (or suffer a loss) from their claim being accepted as true. Merely disagreeing with an expert is, obviously, not proof that an expert is biased. Vague accusations that the expert has “liberal” or “conservative” views also do not count as adequate evidence. What is needed is actual evidence of bias. Anything else is most likely a mere ad hominem attack.

These standards are not infallible. However, they do provide a guide to logically picking an expert to believe. They are certainly more logical than just picking the one who says things one likes.

Doubling down occurs when a person is confronted with evidence against a belief and their belief, rather than being weakened, is strengthened.

A plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. When a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the inference that the belief is false, then it is rational to reject the old belief. If the evidence is not plausible or does not strongly support the inference that the belief is false, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

Assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information, and credible sources. This assessment can push the matter back: the evidence for the evidence also needs to be assessed, which fuels classic skeptical arguments about the impossibility of knowledge. Every belief must be assessed, which leads to an infinite regress, thus making knowing whether a belief is true impossible. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some support for the belief—even if the basis is faith or revelation.

In terms of assessing the reasoning, the assessment is objective when the logic is deductive. Deductive logic is such that if an argument is doing what it is supposed to do (be valid), then if the premises are true, the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams and proofs; thus, the reasoning is objectively good or bad. Inductive reasoning is different. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Inductive arguments vary in strength and, while there are standards for assessing them, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.
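
As a small illustration of why deductive assessment is objective, here is a toy truth-table validity checker in Python; encoding the premises as functions is just one convenient way to set this up.

```python
# An argument is valid if and only if no row of the truth table makes
# all the premises true and the conclusion false.
from itertools import product

def is_valid(premises, conclusion, num_vars):
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# Modus ponens: P; if P then Q; therefore Q.
premises = [lambda p, q: p, lambda p, q: (not p) or q]
conclusion = lambda p, q: q
print(is_valid(premises, conclusion, 2))  # True: the form is valid
```

There is no room for disagreement here: either a counterexample row exists or it does not, which is what makes the verdict objective.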

A second option is to reject the evidence without honestly assessing it and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies serve well here).

This rejection has less psychological cost than engaging the evidence and reasoning, but it is not always consequence-free. Since the person probably has some awareness of their self-deception, it needs to be psychologically “justified,” and this results in a strengthening of the commitment to the belief. There are many cognitive biases that help here, such as confirmation bias (seeking, interpreting, and remembering information to confirm existing beliefs) and other forms of motivated reasoning. These can be hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” beliefs is by categorizing the evidence against the beliefs and opposing arguments as unjust attacks, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. One variation of this is when a person claims they must be right because everyone disagrees with them.

People also, as John Locke argued in his work on enthusiasm, take the strength of their feelings about a belief as evidence for its truth. When people are challenged, they often feel angry and this makes them feel even more strongly. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even stronger that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with you of being the one who is doubling down. This accusation, after all, comes with the insinuation that the person is irrationally holding to a false belief. A reasonable defense is to show that evidence and arguments are being used to support a belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with facts and logic while my opponents double down.

In the previous essay on threat assessment, I looked at the influence of availability heuristics and fallacies related to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.

A rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood or severity of a threat.

One starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in that the stereotyping itself is good. Rather, the positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid and weak, yet somehow also incredibly cunning and dangerous.

Stereotyping leads to mistakes similar to those that arise from hasty generalization, in that reasoning about a threat based on stereotypes will often result in errors. The defense against a stereotype is to seriously inquire whether the stereotype is true.

Stereotyping is useful for demonizing. Demonizing, in this context, involves unfairly portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not responsible for. A person can demonize on their own or be subject to the demonizing rhetoric of others.

Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) seen as more evil and more dangerous than it really is. As such, both the assessment of the probability and the assessment of the severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits distorts threat assessments.

The defense against demonizing is like the defense against stereotypes—a serious inquiry into whether the claims are true. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing is, like hyperbole, exaggerating the evil of and danger presented by a group. If the description is true, then it is not demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, descriptions of ISIS as evil and dangerous were generally accurate. As are descriptions of evil and dangerous billionaires.

While stereotyping and demonizing are rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics can be used in many ways, it is well suited to distorting threat assessment. One aspect of its distortion is the use of fear—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.

One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask, “should I be afraid?” and the second is to ask, “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether or not there are grounds to be afraid of refugees and also inquire into whether or not an appeal to fear justifies banning refugees.

It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.

While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger fallacy. This fallacy occurs when something that is intended to create anger is substituted in place of evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.

Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, a crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals follow to limit gun rights in ways that have no relevance to the incident in question. As another example, the anger at illegal migrants is often used to “justify” policies that will harm the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good.

It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But this would also serve as good evidence for the conclusion that she should report him to the police and divorce him. As another example, the fact that illegal migrants are here illegally and knowingly employed by businesses because they can be more easily exploited than American workers can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.

One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether there are grounds to be angry and whether any evidence is offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim based on the appeal.

The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating fear and anger that are unfounded. In addition to the psychological harm to individuals that arise from living in fear and anger, there is also the damage stereotyping, demonizing, scare tactics and appeal to anger do to society. While anger and fear can unify people, they most often unify by dividing—pitting us against them. I urge people to think through threats rather than giving in to the seductive demons of fear and anger.

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is (for most people) not a severe threat. There is a low probability that there will be a mass shooting on my campus, but it is a high severity threat.
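
One rough way to combine the two factors is to treat expected harm as probability multiplied by severity. The sketch below does just that; the probabilities and 0-10 severity scores are invented purely for illustration.

```python
# Invented probabilities and 0-10 severity scores, purely to show how
# the two factors combine into a rough expected-harm figure.
threats = {
    "catching a cold at the theater": (0.30, 1),    # likely, mild
    "mass shooting on campus": (0.0000005, 10),     # unlikely, severe
}

for name, (probability, severity) in threats.items():
    print(f"{name}: expected harm = {probability * severity:.7f}")
```

On this crude model, a high-probability/low-severity threat can carry more expected harm than a low-probability/high-severity one (or the reverse); the point is that a rational assessment must weigh both factors rather than either alone.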

Our survival as a species seems to have occurred despite our poor skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match the probability of the threat occurring. People seem somewhat better at assessing severity, though we often get this wrong as well.

One excellent example of poor threat assessment is the fear Americans have about terrorism. Between 1975 and 2025, 3,577 Americans died as the result of terrorism, which accounted for 0.35% of all murders in the US in that time. If you are in the United States now, your odds of being killed in such an attack are about 1 in 4 million per year. This includes all forms of terrorism, although you would now be statistically most likely to be killed by right-wing terrorists.
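
The arithmetic behind that figure can be checked directly; the population number below is my assumption (roughly 330 million), not a figure from the essay.

```python
# Checking the rough odds: terrorism deaths per year divided into the
# population. The population figure is an assumed round number.
deaths_1975_2025 = 3577
years = 50
population = 330_000_000  # assumed US population

deaths_per_year = deaths_1975_2025 / years   # about 72 per year
one_in = population / deaths_per_year        # about 4.6 million
print(f"~{deaths_per_year:.0f} deaths/year; odds ~1 in {one_in / 1e6:.1f} million per year")
```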

While being killed by terrorists in the United States is unlikely, some people are terrified by the possibility (which is, of course, the goal of terrorism). Given that an American is more likely to be killed while driving than by a terrorist, it might be wondered why people are so bad at threat assessment. The answer, in terms of feeling fear vastly out of proportion to probability, involves a cognitive bias and some classic fallacies.

People (probably) follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use statistical methods, people often fall victim to the availability heuristic. This is when a person unconsciously assigns a probability based on how often they think of something. While something that occurs often is likely to be thought of often, thinking of something more often does not make it more likely to occur.

After an act of terrorism, people think about terrorism more often and tend to unconsciously believe that the chance of terror attacks occurring is higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is low (driving to the beach is much more likely to kill you). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite usually being the worst evidence.

People are also misled about probability by fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention will focus on that fact, often leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most veterans are terrorists because the media covered a terrorist who was a military veteran. If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a terrorist attack by Muslims in the United States. This is distinct from someone simply lying about, for example, Muslims and claiming they are terrorists because the person is a bigot or wants to exploit the fear they create.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy also occurs when someone rejects reasonable statistical data supporting a claim in favor of one example or a small number of examples that go against the claim. This fallacy is like hasty generalization, and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that the fallacy of anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild, it can be difficult to tell whether a fallacy is a hasty generalization or anecdotal evidence; fortunately, what matters is recognizing that a fallacy is a fallacy, even if it is not clear which one it is.

People fall victim to this fallacy because stories and anecdotes usually have more emotional and psychological impact than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population, or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people most often accept this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of terrorism or tell a story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring in the United States. For example, people point out that terrorists have masqueraded as refugees and infer that refugees in general present a major threat to the United States. Or they might tell the story of the San Bernardino attacker who arrived in the United States on a K-1 (“fiancé”) visa and draw unwarranted conclusions about the danger of the entire visa system.

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is exceptionally vivid or dramatic does not make the event more likely to occur, especially in the face of statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases usually make a very strong impression on the mind. For example, mass shootings are vivid and awful, so it is hardly surprising that people often feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more likely a person feels that it will occur. But the vividness of a harm has no connection to the probability it will occur.

That said, considering the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide never to go skydiving because hitting the ground after a parachute failure would be very dramatic. If he knows that, statistically, the chances of the accident happening are very low but he considers even a small risk unacceptable, then he is not committing this fallacy. This then becomes a matter of value judgment—how much risk a person is willing to tolerate relative to the severity of the potential harm.

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is important for both practical and moral reasons. The matter of terrorism is no exception to this. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter an unlikely threat while spending little on major causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating unfounded fear. In addition to the psychological harm to individuals, there is also the damage to the social fabric. While creating unwarranted fear is useful for grifters, pundits and politicians, it is bad for the rest of us, and thinking things through is a way to protect yourself from needless fear and those who wish to exploit it.

In The Art of the Deal, Donald Trump calls one of his rhetorical tools “truthful hyperbole.” He defends and praises it as “an innocent form of exaggeration — and a very effective form of promotion.” As a promoter, Trump used this technique. He now uses it as president.

Hyperbole is an extravagant overstatement that can be positive or negative in character. When describing himself and his plans, Trump makes extensive use of positive hyperbole: he is the best and every plan of his is the best. He also makes extensive use of negative hyperbole—often to a degree that crosses the line from exaggeration to fabrication. In any case, his concept of “truthful hyperbole” is worth considering.

From a logical standpoint, “truthful hyperbole” is an impossibility. This is because hyperbole is, by definition, not true. Hyperbole is not merely a matter of using extreme language. After all, extreme language might accurately describe something. For example, describing pedophiles as horrible would be spot on. Hyperbole is a matter of exaggeration that goes beyond the facts. For example, describing Donald Trump as the evilest being in all of space and time would be hyperbole. As such, hyperbole is always untrue. Because of this, the phrase “truthful hyperbole” means the same as “accurate exaggeration,” which reveals the problem.

Trump, a master of rhetoric, is right about the rhetorical value of hyperbole—it can have great psychological force. It, however, lacks logical force, as it provides no logical reason to accept a claim. Trump is right that there can be innocent exaggeration. I will now turn to the ethics of hyperbole.

Since hyperbole is (by definition) untrue, there are two main concerns. One is how far the hyperbole deviates from the truth. The other is whether the exaggeration is harmless. While a hyperbolic claim is necessarily untrue, it can deviate from the truth in varying degrees. As with fish stories, there is some moral wiggle room in terms of proximity to the truth. While there is no exact line (to require one would be to fall into the line-drawing fallacy) defining the boundary of morally acceptable exaggeration, some untruths surely go beyond it. The line varies with the circumstances—the ethics of fish stories, for example, differs from the ethics of job interviews.

While hyperbole is untrue, it must have some anchor in the truth. If it does not, then it is not exaggeration but pure fabrication. This is the difference between containing some truth and being devoid of truth. Naturally, hyperbole can be mixed in with fabrication. For example, consider Trump’s claim about the 9/11 attack that “in Jersey City, N.J., where thousands and thousands of people were cheering as that building was coming down. Thousands of people were cheering.”

If Trump had claimed that some people in America celebrated the terrorist attacks on 9/11, then that is almost certainly true—there was surely at least one person who did this. If he had claimed that dozens of people in America celebrated the 9/11 attacks and this was broadcast on TV, then this might be an exaggeration (we do not know how many people in America celebrated), but it also includes a fabrication (the TV part). If he had claimed that hundreds did so, the exaggeration would be considerable. But Trump, in his usual style, claimed that thousands and thousands celebrated. This exaggeration might be extreme. Or it might not—thousands might have celebrated in secret, although this is a wildly implausible claim. However, the claim that people were filmed celebrating in public and that video existed for Trump to see is a fabrication rather than an exaggeration.

One way to help determine the ethical boundaries of hyperbole is to consider the second concern, namely whether the hyperbole (untruth) is harmless or not. Trump is right to claim there can be innocent forms of exaggeration. This can be taken as exaggeration that is morally acceptable and can be used as a basis to distinguish such hyperbole from unethical lying.

One realm in which exaggeration is innocent is storytelling. Aristotle, in the Poetics, notes that “everyone tells a story with his own addition, knowing his hearers like it.” While a lover of truth, Aristotle recognized the role of untruth in good storytelling, saying that “Homer has chiefly taught other poets the art of telling lies skillfully.” The telling of tall tales that feature even extravagant exaggeration is morally acceptable because the tales are intended to entertain—that is, the intention is good. In the case of exaggerating in stories to entertain the audience, or adding a small bit of rhetorical “shine” to polish a point, the exaggeration is harmless—which makes sense if one thinks Trump sees himself as an entertainer.

In contrast, exaggerations that have a malign intent would be morally wrong. Exaggerations that are not intended to be harmful yet prove to be so would also be problematic—but discussing the complexities of intent and consequences would take this essay too far afield.

The extent of the exaggeration would also be relevant here: the greater the exaggeration that is aimed at malign purposes or that has harmful consequences, the worse it would be morally. After all, if deviating from the truth is (generally) wrong, then deviating from it more would be worse. In the case of Trump’s claim about thousands of people celebrating on 9/11, this untruth fed into fear, racism and religious intolerance. As such, it was not an innocent exaggeration, but a malign untruth.

During ethical discussions about abortion, I am sometimes asked if I believe that a person who holds the anti-abortion position must be a misogynist. While there are misogynists who are anti-abortion, I hold to the obvious: there is no necessary connection between being anti-abortion and being a misogynist. A misogynist hates women, while a person who holds an anti-abortion position believes that abortion is morally wrong. There is no inconsistency between holding the moral position that abortion is wrong and not being a hater of women. In fact, an anti-abortion person could have a benevolent view towards all living beings and be morally opposed to harming any of them, including zygotes and women.

While misogynists would tend to be anti-choice because of their hatred of women, they need not be anti-abortion. That is, hating women and wanting to deny them the choice to have an abortion does not entail that a person believes that abortion is morally wrong. For example, a misogynist could be fine with abortion (such as when it is convenient to him) but think that it should be up to the man to decide if or when a pregnancy is terminated. A misogynist might even be pro-choice for various reasons, though almost certainly not because he is a proponent of the rights of women. As such, there is no necessary connection between the two views.

There is also the question of whether an anti-abortion position is a cover for misogyny. The easy and obvious answer is that sometimes it is and sometimes it is not. Since it has been established that a person can be anti-abortion without being a misogynist, it follows that being anti-abortion need not be a cover for misogyny. However, it can provide cover for such a position. It is easier to sell the idea of restricting abortion by making a moral case against it than by expressing hatred of women and a desire to restrict their choices and reproductive options. Before progressing with the discussion, it is important to address two points.

The first point is that even if it is established that an anti-abortion person is a misogynist, this does not entail that the person’s position on the issue of abortion is mistaken. To reject a misogynist’s claims or arguments regarding abortion (or anything) on the grounds that they are a misogynist is to commit a circumstantial ad hominem.

This sort of Circumstantial ad Hominem involves substituting an attack on a person’s circumstances (such as the person’s religion, political affiliation, ethnic background, etc.) for reasons against her claim. This version has the following form:

Premise 1. Person A makes claim X.

Premise 2. Person B makes an attack on A’s circumstances.

Conclusion. Therefore, X is false.

A Circumstantial ad Hominem is a fallacy because a person’s circumstances (religion, political affiliation, etc.) do not affect the truth or falsity of the claim. This is clear from the following example: “Bill claims that 1+1 = 2. But he is a Republican, so his claim is false.” As such, to assert that the anti-abortion position is in error because some misogynist holds that view would be an error in reasoning.

A second important point is that a person’s consistency, or lack thereof, regarding their principles or actions has no relevance to the truth of their claims or the strength of their arguments. To think otherwise is to fall victim to the ad hominem tu quoque fallacy. This fallacy is committed when it is concluded that a person’s claim is false because 1) it is inconsistent with something else the person has said or 2) what the person says is inconsistent with their actions. This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. Person B asserts that A’s actions or past claims are inconsistent with the truth of claim X.

Conclusion. Therefore, X is false.

 

The fact that a person makes inconsistent claims does not make any specific claim they make false (of any pair of inconsistent claims, at most one can be true, while both can be false). For example, “the ball is solid red” and “the ball is solid blue” cannot both be true, yet both are false if the ball is green. Also, the fact that a person’s claims are not consistent with their actions might indicate that the person is a hypocrite, but this does not prove their claims are false.

A person’s inconsistency also does not show that the person does not believe their avowed principle, as they might be ignorant of its implications. That said, such inconsistency could be evidence of hypocrisy. While sorting out a person’s actual principles is not relevant to the logical assessment of their claims, doing so is relevant to many types of decision making regarding the person. One area where sorting out a person’s principles matters is voting; this matter will be addressed in the next essay.

After Cecil the Lion was shot in 2015, the internet erupted in righteous fury against the killer. But some argued against feeling bad for Cecil, sometimes accusing the mourners of being phonies and pointing out that lions kill people. What caught my attention, however, was the use of a common rhetorical tactic: to “refute” those condemning Cecil’s killing by claiming the “lion lovers” do not get equally upset about fetuses killed in abortions.

When HitchBOT was destroyed in 2015, there was a similar response. When I have written about ethics and robots, I have been criticized on the same grounds: it has been claimed that I value robots more than fetuses. Presumably, my critics think I have made an error in my arguments about robots. Since I find this tactic interesting and have been its target, I thought it would be worthwhile to examine it in a reasonable and fair way.

One way to look at this approach is to take it as an application of the Consistent Application method. A moral principle is consistently applied when it is applied in the same way to similar beings in similar circumstances. Inconsistent application is a problem because it violates three commonly accepted moral assumptions: equality, impartiality and relevant difference.

Equality is the assumption that moral equals must be treated as such. It also requires that those that are not morally equal be treated differently.  Impartiality is the assumption that moral principles must not be applied with undue bias. Inconsistent application would involve biased application.

Relevant difference is a common moral assumption. It is the view that different treatment must be justified by relevant differences. Sorting out which differences are relevant can involve controversy. For example, people disagree about whether gender is a relevant difference in how people should be treated.

Using the method of Consistent Application to criticize someone involves showing that a principle or standard has been applied differently in situations that are not relevantly different. This allows one to conclude that the application is inconsistent, which is generally regarded as a problem. The general form is as follows:

 

Step 1: Show that a principle/standard has been applied differently in situations that are not relevantly different.

Step 2: Conclude that the principle has been applied inconsistently.

Step 3 (Optional): Insist that the principle be applied consistently.

 

Applying this method often requires determining the principle being used. Unfortunately, people are not often clear about their principles, even if they are operating in good faith. In general, people tend to just make moral assertions. In some cases, it is likely that people are not even aware of the principles they are appealing to when making moral claims.

Turning now to the cases of the lion, HitchBOT and the fetus, this method could be applied as follows:

 

Step 1: Those who are outraged at the killing of the lion are using the principle that the killing of living things is wrong. Those outraged at the destruction of HitchBOT are using the principle that helpless things should not be destroyed. These people are not outraged by abortions in general or Planned Parenthood abortions in particular.

Step 2: The lion and HitchBOT mourners are not consistent in their application of the principle since fetuses are helpless (like HitchBOT) and living things (like Cecil the lion).

Step 3 (Optional): Those mourning for Cecil and HitchBOT should mourn for the fetuses and oppose abortion in general and Planned Parenthood in particular.

 

This sort of use of Consistent Application is appealing, and I routinely use the method myself. For example, I have argued (in a reverse of this situation) that people who are anti-abortion should also be anti-hunting and that people who are fine with hunting should also be morally okay with abortion.

As with any method of arguing, there are counterarguments. In the case of this method, there are three general reasonable responses to an effective use. The first is to admit the inconsistency and stop applying the principle in an inconsistent manner. This obviously does not defend against the charge but can be an honest reply. People, as might be imagined, rarely take this option.

A second way to reply (and an actual defense) is to dissolve the inconsistency by showing that the alleged inconsistency is merely apparent. One way to do this is by showing that there is a relevant difference (or differences). For example, someone who wants to morally oppose the shooting of Cecil while being morally tolerant of abortions could argue that an adult lion has a moral status different from a fetus. One common approach is to note the relation of the fetus to the woman and how a lion is an independent entity. The challenge lies in making a case for the relevance of the difference or differences.

A third way to reply is to reject the attributed principle. In the situation at hand, the assumption is that a person is against killing the lion simply because it is alive. However, that might not be the principle the person is, in fact, using. His principle might be based on the suffering of a conscious being and not on mere life. In this case, the person would be consistent in his application.

Naturally enough, the true principle is still subject to evaluation. For example, it could be argued the suffering principle is wrong and that the life principle should be accepted instead. In any case, this method is not an automatic “win.”

An alternative interpretation of this tactic is to regard it as an ad Hominem. An ad Hominem is a general category of fallacies in which a claim or argument is rejected based on some irrelevant fact about the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack is made against the character of the person making the claim, her circumstances, or her actions. Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:

 

Premise 1. Person A makes claim X.

Premise 2. An irrelevant attack is made against Person A.

Conclusion. Therefore, A’s claim is false.

 

The reason why an ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

In the case of the lion, HitchBOT and the fetus, the reasoning can be seen as follows:

 

Premise 1. Person A claims that killing Cecil was wrong or that destroying HitchBOT was wrong.

Premise 2. A does not condemn abortions in general or Planned Parenthood’s abortions.

Conclusion. Therefore, A is wrong about Cecil or HitchBOT.

 

Obviously enough, a person’s view of abortion does not prove or disprove her view about the ethics of killing Cecil or destroying HitchBOT (although a person can, of course, be engaged in inconsistency or other errors, but these are different matters).

A third alternative is that the remarks are not meant as an argument and the point is to assert that lion lovers and bot buddies are awful people or, at best, misguided.

The gist of the tactic is, presumably, to make these people seem bad by presenting a contrast: “these lion lovers and bot buddies are broken up about lions and trashcans, but do not care about fetuses. What awful people they are.”

But moral concern is not a zero-sum game. That is, regarding the killing of Cecil as wrong and being upset about it does not entail that a person thus cares less (or not at all) about fetuses. After all, people do not just get a few “moral dollars” to spend, so that being concerned about one misdeed entails they cannot be concerned about another. A person can condemn the killing of Cecil and condemn abortion.

The obvious response is that there are people who condemned the killing of Cecil or the destruction of HitchBOT and are pro-choice. These people, it can be claimed, are morally awful. The obvious counter is that while it is easy to claim such people are morally awful, the challenge lies in showing that they are awful, that is, in showing that their position on abortion is morally wrong. Noting that they are against lion killing or bot bashing and are pro-choice does not show they are in error. They could, as noted above, be challenged on the grounds of consistency, but this requires laying out an argument rather than merely juxtaposing their views on these issues. This version of the tactic simply amounts to asserting or implying that there is something wrong with the person because one disagrees with that person. That a person thinks hunting lions or bashing bots is okay while abortion is wrong does not prove that the opposing view is in error; it merely states the disagreement.

Since the principle of charity requires reconstructing and interpreting arguments in the best possible way, I endeavor to cast this sort of criticism as a Consistent Application attack rather than as either of the other two. This approach is respectful and, most importantly, helps avoid creating a straw man of the opposition.