When engaged in rational threat assessment, there are two main factors to consider. The first is the probability of the threat; the second is its severity. These can be combined into a single question: “how likely is it that this will happen and, if it does, how bad will it be?”
Making rational decisions about dangers involves weighing both factors. For example, consider the risks of going to a crowded place such as a movie theater or a school. There is a high probability of being exposed to the cold virus, but for most people a cold is not a severe threat. There is a low probability of a mass shooting on my campus, but the severity of that threat is high.
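To make the combination of the two factors concrete, here is a minimal sketch that scores a threat as probability multiplied by severity. The numbers and the severity scale are illustrative assumptions invented for the example, not real statistics.

```python
# Minimal sketch: score a threat as probability x severity.
# All numbers are illustrative assumptions, not real statistics.

def expected_harm(annual_probability: float, severity: float) -> float:
    """Combine the two factors of threat assessment into one rough score."""
    return annual_probability * severity

# Catching a cold at a crowded theater: very likely, not very severe.
# Severity is on an invented 0-100 scale.
cold = expected_harm(annual_probability=0.9, severity=1)

# A mass shooting on campus: very unlikely, extremely severe.
shooting = expected_harm(annual_probability=1e-6, severity=100)

print(f"cold: {cold:.4f}  shooting: {shooting:.4f}")
# Note that a high-probability, low-severity threat can outscore a
# low-probability, high-severity one, which is why both factors
# must be weighed together rather than judged by severity alone.
```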
Our survival as a species seems to have happened despite our poor skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match the probability of the threat occurring. People seem somewhat better at assessing severity, though they often get that wrong as well.
One excellent example of poor threat assessment is the fear Americans have about terrorism. Between 1975 and 2025, 3,577 Americans died as the result of terrorism, which accounted for 0.35% of all murders in the US in that time. If you are in the United States now, your odds of being killed in such an attack are about 1 in 4 million per year. This includes all forms of terrorism, although you would now be statistically most likely to be killed by right-wing terrorists.
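As a rough sanity check on those figures, the arithmetic can be run directly. The death toll and time span come from the text above; the US population is an assumed round figure of 330 million.

```python
# Back-of-the-envelope check of the "1 in 4 million per year" figure.
deaths = 3577              # US terrorism deaths, 1975-2025 (from the text)
years = 50                 # length of that period
population = 330_000_000   # assumed rough US population

deaths_per_year = deaths / years        # ~71.5 deaths per year
odds = population / deaths_per_year     # ~4.6 million

print(f"~{deaths_per_year:.0f} deaths/year, "
      f"roughly 1 in {odds / 1_000_000:.1f} million per year")
```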
While being killed by terrorists in the United States is unlikely, some people are terrified by the possibility (which is, of course, the goal of terrorism). Given that an American is far more likely to be killed while driving than by a terrorist, one might wonder why people are so bad at threat assessment. The answer, at least for why fear runs vastly out of proportion to probability, involves a cognitive bias and some classic fallacies.
People (probably) follow general rules when estimating probabilities, and the rules we use unconsciously are called heuristics. While the right way to estimate probability is to use statistical methods, people often fall victim to the availability heuristic: unconsciously assigning a probability to something based on how often they think of it. While something that occurs often is likely to be thought of often, thinking of something more often does not make it more likely to occur.
After an act of terrorism, people think about terrorism more often and tend to unconsciously believe that the chance of terror attacks occurring is higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of one occurring are high, even though the probability is low (driving to the beach is much more likely to kill you). The defense against this bias is to find reliable statistical data and use it as the basis for inferences about threats; that is, think it through rather than trying to feel it through. This is, of course, difficult: people tend to regard their feelings, however unwarranted, as the best evidence, when they are usually the worst.
People are also misled about probability by fallacies. One is the spotlight fallacy, which is committed when a person uncritically assumes that all (or many) members or cases of a certain type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention will focus on that fact, often leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the same sort of mistake that would occur if someone inferred that most veterans are terrorists because the media covered a terrorist who was a military veteran. If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a terrorist attack by Muslims in the United States. This is distinct from someone simply lying, for example claiming that Muslims are terrorists because the person is a bigot or wants to exploit the fear such claims create.
Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one case or a very small number of cases. It also occurs when someone rejects reasonable statistical data supporting a claim in favor of an example or small number of examples that go against the claim. This fallacy is like hasty generalization, and a similar sort of error is committed: drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference is that anecdotal evidence involves using a story (anecdote) as the sample. Out in the wild, it can be difficult to tell whether a fallacy is a hasty generalization or anecdotal evidence; fortunately, what matters is recognizing that a fallacy is a fallacy, even if it is not clear which one it is.
People fall victim to this fallacy because stories and anecdotes usually have more emotional and psychological impact than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population, or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people most often commit this fallacy because they want to believe that what is true in the anecdote is true for the whole population.
In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of terrorism or tell a story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring in the United States. For example, people point out that terrorists have masqueraded as refugees and infer that refugees in general present a major threat to the United States. Or they might tell the story of one San Bernardino attacker who arrived in the States on a K-1 (“fiancé(e)”) visa and draw unwarranted conclusions about the danger of the entire visa system.
One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is exceptionally vivid or dramatic does not make the event more likely to occur, especially in the face of statistical evidence to the contrary.
People often accept this sort of “reasoning” because particularly vivid or dramatic cases usually make a very strong impression on the mind. For example, mass shootings are vivid and awful, so it is hardly surprising that people often feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more likely a person feels it will occur. But the vividness of a harm has no connection to the probability that it will occur.
That said, considering the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide never to go skydiving because hitting the ground after a parachute failure would be very dramatic. If he knows that, statistically, the chances of such an accident are very low but considers even a small risk unacceptable, then he is not committing this fallacy. This becomes a matter of value judgment: how much risk a person is willing to tolerate relative to the severity of the potential harm.
The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.
Such rational assessment of threats is important for both practical and moral reasons, and terrorism is no exception. Since society has limited resources, using them well requires assessing the probability of threats rationally; otherwise resources will be misspent. For example, spending billions to counter an unlikely threat while spending little on major causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating unfounded fear. In addition to the psychological harm to individuals, there is the damage to the social fabric. While creating unwarranted fear is useful for grifters, pundits, and politicians, it is bad for the rest of us. Thinking things through is a way to protect yourself from needless fear and from those who wish to exploit it.
