Reasoning is like a chainsaw: useful when used properly, but capable of making a bloody mess when used badly. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is not new: a fearful generalization is a hasty generalization or a biased generalization. On the other hand, its hallmark, being fueled by fear, makes it worth considering, especially since addressing the fueling fear seems to be key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commit crimes needs to be based on a sample that represents the entire migrant population, not one consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun owners commit crimes needs to be based on a sample that represents the entire gun-owning population, not just those arrested.
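To make the sampling error concrete, here is a minimal sketch in Python using made-up, purely illustrative numbers (assumptions for the sake of the example, not real crime or ownership statistics). It shows how generalizing from an arrest-only sample inflates the apparent rate compared with basing the estimate on the whole population.

# All figures below are hypothetical, chosen only to illustrate the sampling error.
total_gun_owners = 100_000    # assumed size of the gun-owning population
criminal_gun_owners = 500     # assumed number of owners who commit gun crimes
arrest_sample = 450           # assumed sample: owners arrested for gun crimes
criminals_in_sample = 450     # by construction, nearly everyone arrested is a criminal

# Biased generalization: project the arrest-only sample onto all gun owners.
biased_estimate = criminals_in_sample / arrest_sample      # 1.0, i.e. "100% are criminals"

# Stronger generalization: base the rate on the entire gun-owning population.
population_rate = criminal_gun_owners / total_gun_owners   # 0.005, i.e. 0.5%

print(f"Rate suggested by the arrest-only sample: {biased_estimate:.0%}")
print(f"Rate across the whole gun-owning population: {population_rate:.1%}")

The arrest-only sample makes the rate look nearly total, while the population-based figure is a tiny fraction; the same arithmetic applies to the migrant example above.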

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased one. Second, concluding that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy). A biased or hasty generalization could have a true conclusion; the conclusion simply would not be supported by the argument. Third, a true conclusion does not mean that a fallacy is not a fallacy: the problem lies in the logic, not in the truth of the conclusion. If I see one red squirrel in a forest and infer that all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem to be easy to neutralize. Just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear, and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be to deal with the fear itself, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to at least briefly defuse the impact of fear is to try the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes, or at least does not fear. This works best when the first premise remains true when the swap is made; otherwise the person can simply reject the swap. This might have some small impact on the emotional level that will help a person work through the fear, assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy,” Chad could try swapping in one of his own groups to see if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful method for those who want to check their fears. Self-application involves the same basic process: swapping in your groups or groups you like in place of what you fear to see if your reasoning is good or bad.

In July of 2002, the New England Journal of Medicine published a study on arthroscopic surgery. The experimental group members underwent surgery while the control group received placebo surgeries. Somewhat surprisingly, those receiving the placebo reported feeling better and performed better at walking and stair climbing than those in the experimental group. After reading this study, I wrote “Lies…the Best Medicine?”, which appeared in my What Don’t You Know? While working through my massive backlog of magazines, I came across an update on placebo surgeries in Scientific American in which Claudia Wallis argued in favor of fake operations. Reminded of my ancient essay, I am revisiting my thoughts on the ethics of placebo surgeries.

As in my old essay, I think that there is a good argument against placebo surgery. Treating a patient with a placebo requires deception. If the effect requires the patient to believe they have received surgery, then the patient must be convinced of an untruth. If the medical personnel are honest and tell the patient that the surgery was fake, then the patient would, presumably, not benefit from it. If it is wrong to lie, then this deceit would be wrong. What makes it even worse is that medical personnel are supposed to be honest with their patients. Thus, even if placebo surgery is as effective as or even more effective than real surgery, it should not be used.

One counter to this argument is that even when patients know they are receiving a placebo, it can still be effective. Medical personnel could be honest with patients about a placebo surgery and, perhaps, still maintain the effectiveness of the non-treatment. This would allow the use of placebo surgery while avoiding the moral problem. However, this does not solve the problem for cases in which patients must not know whether they are receiving surgery or the placebo. Placebo surgery is often used to test the effectiveness of surgeries in a rigorous manner. If the surgery is no better (or even worse) than a placebo, then there would be no medical reason to use the surgery over a placebo or no surgery at all.

It can be argued that deception in such situations is acceptable. One approach is to use examples of acceptable, beneficial deception. Obvious examples include the benign deceits about Santa Claus, the Easter Bunny and the Tooth Fairy. As another illustration, there are lies people tell to avoid causing others suffering. If this sort of benign deceit is acceptable, then so is the use of deceit to produce the placebo effect or to conduct a study for the greater good.

A second approach is to focus on the purpose of the medical profession. While philosophers and scientists are supposed to seek the truth, the end of medicine is to relieve pain and prevent or cure illnesses. If deception, in the form of a placebo, can achieve the end of medicine, then it is one more tool, like a scalpel or drug. In fact, it could be argued that effective placebos are even better than drugs or surgery. Surgery always involves some risk, and most drugs have side effects. Placebos would, presumably, involve little or no risk. That said, it is worth considering that there could also be mental side-effects with placebos.

Since placebo treatment is usually not free, it could be objected that it is still wrong: patients are charged, and nothing has been done for them. If medical personnel were using placebos to cover up illnesses and injuries while pocketing profits from fake treatments, then that would be unethical. However, if the treatment is honest and works, then it would be as legitimate as any other form of treatment. So, if a patient needs to see a doctor to get the placebo effect working properly and it works as well as or better than the “real” treatment, then it is as reasonable to bill for the placebo treatment as for a real treatment, although the price should be adjusted accordingly. If the placebo effect could be created without involving medical personnel, then charging patients for it would be unethical.

In the case of studies in which the subjects are not paying, there would be no special moral concern about the use of the placebo. Its use would, in fact, be required for a proper experiment. This does raise the usual moral concerns about conducting experiments, but that is a subject worthy of consideration on its own.