The question “why lie if the truth would suffice?” can be interpreted in at least three ways. The first is as an inquiry into motivation, asking for a psychological explanation. The second is as an inquiry into the advantages and disadvantages of lying. The third is as a rhetorical question that asserts, under the guise of inquiry, that one should not lie if the truth would suffice.

Since a general discussion of this question would be rather abstract, I will focus on a specific example and use it as the basis for the discussion. Readers should, of course, construct their own examples using their favorite lie told by those they disagree with. I will use Trump’s response to the Democrats’ Green New Deal as my example. While this is something of a flashback to his first term, Trump recently signed an executive order targeting the old Green New Deal.

In 2019 the Democrats proposed a Green New Deal aimed at addressing climate change and economic issues. As with any proposal, rational criticisms can be raised against it. In his first term, Trump claimed the Democrats intended “to permanently eliminate all Planes, Cars, Cows, Oil, Gas & the Military – even if no other country would do the same.” While there are some Democrats who would do these things, the Democratic Party favors none of that. Looked at rationally, it seems to make no sense to lie about the Green New Deal. If it is bad enough to reject on its own defects, lies would not be needed. If one must lie to attack it, this suggests a lack of arguments against it. To use an analogy, if a prosecutor lies to convict a person, this suggests they have no case—otherwise they would rely on evidence. So, why would Trump lie if the truth would suffice to show the Green New Deal is a terrible plan?

The question of why Trump (or anyone else) lies when the truth would suffice is a matter for psychology, not philosophy. So, I will leave that question to others. This leaves me with the question about the advantages and disadvantages of lying along with the rhetorical question.

The lie about the Green New Deal is a good example of hyperbole and a straw man. Trump himself claims to use the tactic of “truthful hyperbole”. Hyperbole is a rhetorical device in which one makes use of extravagant overstatement, such as claiming that the Democrats plan to eliminate cows. The reason hyperbole is not just called lying is that it is a specific type of untruth: it must have a foundation in truth. Hyperbole involves inflating or exaggerating something true rather than inventing a complete fiction. The Green New Deal is aimed at making America carbon neutral, and this would impact cars, cows, planes, oil, gas and the military. The extravagant exaggeration is that the proposal would eliminate all of them permanently. This would be as if someone who proposed cutting back on dessert at family dinners were accused of wanting to eliminate meals permanently. Since hyperbole is rhetoric without logic, it has no logical force and does not prove (or disprove) anything. But it can have considerable psychological force in influencing people to believe a claim.

Hyperbole is often used in conjunction with the Straw Man fallacy. This fallacy is committed when a person’s actual position is ignored and a distorted, exaggerated or misrepresented version of that position is criticized in its place. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A has position X.

Premise 2: Person B presents position Y (a distorted version of X).

Premise 3: Person B attacks position Y.

Conclusion: Therefore, X is false or bad.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a position is not a criticism of the actual position. One might as well expect an attack on a poor drawing of a person to hurt the person.

Like hyperbole, the Straw Man fallacy is not based on a simple lie: it involves an exaggeration or distortion of something true. In the case of Trump and the Green New Deal, his “reasoning” is that the Green New Deal should be rejected because his hyperbolic straw man version of it is terrible. Since this is a fallacy, his “reasons” do not support his claim. It is, as always, important to note that Trump could be right about the Green New Deal being a bad idea, but not for the “reasons” he gives. To infer that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy).

While hyperbole has no logical force and a straw man is a fallacy, there are advantages to using them. One advantage is that they are much easier than coming up with good reasons. Criticizing the Green New Deal for what it is requires knowing what it is and considering its possible defects, which takes time and effort. Tweeting out a straw man takes seconds.

The second advantage is that hyperbole and straw men often work, often much better than the truth. In the case of complex matters, people rarely do their homework and do not know that a straw man is a straw man. I have interacted with people who honestly think Democrats plan to eliminate planes and cars. Since this is a bad idea, they reject it, not realizing that it is not the Green New Deal. An obvious defense against hyperbole and straw men is to know the truth. While this can take time and effort, someone who has the time to post on Facebook or Twitter has the time to do basic fact checking. If they will not do it, their ignorance should counsel them to remain silent, though they have the right to express their unsupported views.

As far as working better than the truth goes, hyperbole and straw men appeal to the target’s fears, anger or hopes. The audience is thus motivated to believe in ways that the truth cannot match. People generally find rational argumentation dull and unmoving, especially about complicated issues. If Trump honestly presented real problems with the Green New Deal, complete with supporting data and graphs, he would bore most of his audience and lose them. By using a straw man, he better achieves his goal. This allows for a pragmatic argument for lying when the truth will not suffice to persuade.

If telling the truth would not suffice to convince people, then there is the pragmatic argument that if lying would do the job, it should be used. For example, if giving an honest assessment of the Green New Deal would bore people and lying would get the job done, then Trump should lie if he wants to achieve his goal. This does, however, raise moral concerns.

If the reason the truth would not suffice is that it does not logically support the claim, then it would be immoral to lie. To use a non-political example, if you would not invest in my new fake company iScam if you knew it was a scam, getting you to invest in it by lying would be wrong. So, if the Green New Deal would not be refuted by the truth, Trump’s lies about it would be immoral.

But what about cases in which the truth would logically support a claim, yet would not persuade people to accept that claim? Going back to the Green New Deal example, suppose it is terrible but explaining its defects would bore people and they would remain unpersuaded, while a straw man version of the Green New Deal would persuade many people to reject this hypothetically terrible plan. From a utilitarian standpoint, the lie could be morally justified; if the good of lying outweighed the harm, then it would be the right thing to do. To use an analogy, suppose you were trying to convince a friend not to start a dangerous diet. You have scientific data and good arguments, but you know your friend is bored by data and is largely immune to logic. So, telling them the truth would mean that they would go on the diet and harm themselves. But if you dramatically exaggerate the harm, your friend will abandon the diet. In such a case, the straw man argument would seem to be morally justified, as you are using it to protect your friend.

While this might seem to justify the general use of hyperbole and the straw man, it only justifies their use when the truth does suffice logically but does not suffice in terms of persuasive power. That is, the fallacy is only justified as a persuasive device when there are non-fallacious arguments that would properly establish the same conclusion.

Reasoning is like a chainsaw: useful when used properly, but capable of creating a bloody mess when used badly. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.
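Before turning to fear, the sample-size point can be made concrete. Here is a minimal sketch in Python, with a wholly invented 10% rate of terrible Ohio drivers, showing how wildly tiny samples can mislead and how larger ones settle down:

```python
# A toy simulation of sample size and inductive generalization.
# The 10% "terrible driver" rate is an invented, illustrative figure.
import random

random.seed(42)
TRUE_RATE = 0.10  # assumed true proportion of terrible drivers

def estimate_rate(sample_size: int) -> float:
    """Estimate the rate of terrible drivers from one random sample."""
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

for n in (1, 10, 100, 10_000):
    estimates = [round(estimate_rate(n), 2) for _ in range(5)]
    print(f"sample size {n:>6}: five estimates -> {estimates}")

# With a sample of 1, every estimate is 0.0 or 1.0: the one driver you
# met is either terrible or not, which is the hasty generalization.
# As the sample grows, the estimates cluster around the true 10%.
```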

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is not new: a fearful generalization is simply a hasty generalization or a biased generalization. On the other hand, its hallmark, being fueled by fear, makes it worth considering on its own, especially since addressing that fear seems to be the key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commit crimes needs to be based on the entire population, not a sample consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun owners commit crimes needs to be based on the whole gun-owning population, not just those arrested.
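To see how far a biased sample can mislead, here is a minimal sketch of the gun-owner example in Python. All the numbers are invented for illustration; the point is the gap between the sample and the population:

```python
# A toy illustration of the biased sample behind the gun-owner example.
# All figures are invented for illustration only.
total_gun_owners = 1_000_000
criminal_gun_owners = 5_000   # 0.5% of owners, by assumption
arrested = 4_000              # the sample: those arrested for gun crimes
criminals_in_sample = 4_000   # by assumption, every arrestee is guilty

rate_in_sample = criminals_in_sample / arrested              # 100%
rate_in_population = criminal_gun_owners / total_gun_owners  # 0.5%

print(f"Criminals among the arrested:   {rate_in_sample:.0%}")
print(f"Criminals among all gun owners: {rate_in_population:.1%}")

# The arrest data is real, but the sample was selected by the very
# property (criminality) being estimated, so generalizing from it
# inflates 0.5% into 100%.
```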

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased generalization. Second, concluding that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy). So, a biased or hasty generalization could have a true conclusion, but that conclusion would not be supported by the fallacious reasoning. Third, a true conclusion does not mean that a fallacy is not a fallacy: the problem lies in the logic, not the truth of the conclusion. For example, if I see one red squirrel in a forest and infer that all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem to be easy to neutralize: just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear, and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be to deal with the fear itself, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to try to at least briefly defuse the impact of fear is the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes, or at least does not fear. This works best when the first premise remains true after the swap is made; otherwise the person can obviously reject the swap. This might have some small impact on the emotional level that will help a person work through the fear—assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy” Chad could try swapping in one of his groups and see if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful method for those who want to check their fears. Self-application involves the same basic process: swapping in your groups or groups you like in place of what you fear to see if your reasoning is good or bad.
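For those who like to tinker, the substitution method can even be mocked up mechanically. This is a playful sketch, not a serious tool; the template and group names are just examples:

```python
# A toy generator for the substitution method: one argument template,
# filled in with groups the reasoner fears, belongs to, or likes.
TEMPLATE = (
    "Premise 1: Some {group} have committed violent crimes in America.\n"
    "Conclusion: Many {group} are violent criminals."
)

for group in ("undocumented migrants", "white men", "gun owners", "Christians"):
    print(TEMPLATE.format(group=group))
    print()

# If the inference is obviously unsupported for the groups one likes,
# the same lack of support applies to the feared group as well.
```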

 

This is from my book 110 Fallacies, which is available on Amazon.

Also Known as: Appeal to Anecdote

Description:

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is often considered a variation of Hasty Generalization. It has the following forms:

 

Form One

Premise 1:  Anecdote A is told about a member M (or small number of members) of Population P.

Premise 2: Anecdote A says that M is (or is not) C.

Conclusion: Therefore, C is (or is not) true of Population P.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2:  Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is false.

 

This fallacy is like Hasty Generalization in that a similar error is committed, namely drawing an inference based on a sample that is inadequate in size. One difference between Hasty Generalization and Anecdotal Evidence is that the fallacy of Anecdotal Evidence involves using a story (anecdote) as the sample. The more definitive distinction is that the second form of Anecdotal Evidence involves a rejection of statistical evidence for a general claim.

People often fall victim to this fallacy because stories and anecdotes usually have more psychological influence than statistical data. This persuasive force can cause people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence. People often accept this fallacy because they would prefer that what is true in the anecdote be true for the whole population (a form of Wishful Thinking). For example, a person who smokes might try to convince herself that smoking will not hurt her because her Aunt Jane smoked 52 cigars a day and lived, cancer free, until she was 95.

People also fall for this fallacy when the anecdote matches their biases (positive or negative) or prejudices. For example, a person who fears and dislikes immigrants might believe that immigrants are likely to commit crimes because of an anecdote they hear about an immigrant who committed a crime. A person who has a very favorable view of immigrants might be swayed by an anecdote about an exceptional immigrant and infer that most immigrants will be exceptional.

As the example suggests, this sort of poor reasoning can be used in the context of causal reasoning. In addition to cases involving individual causation (such as Jane not getting cancer) this poor reasoning is commonly applied to causal claims about populations. What typically occurs is that a person rejects a general causal claim such as smoking causes cancer in favor of an anecdote in which a person smoked but did not get cancer. While this anecdote does show that not everyone who smokes gets cancer, it does not prove that smoking does not cause cancer.

This is because establishing that C is a causal factor for effect E in population P is a matter of showing that there would be more cases of E if all members of P were exposed to C than if none were. Showing that there are some anecdotal cases in which members of P were exposed to C but did not show effect E does not show that C does not cause E. In fact, that is what you should expect to see in most cases.
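A minimal sketch, with invented rates, shows why such anecdotes are expected even when the causal claim is true. Suppose, purely for illustration, a 10% baseline cancer rate and a 25% rate among smokers:

```python
# A toy simulation: C (smoking) is a genuine causal factor for E (cancer),
# yet most exposed individuals never show the effect. Rates are invented.
import random

random.seed(0)
N = 100_000
BASELINE_RATE = 0.10  # assumed cancer rate without exposure
SMOKER_RATE = 0.25    # assumed cancer rate with exposure

smoker_cases = sum(random.random() < SMOKER_RATE for _ in range(N))
nonsmoker_cases = sum(random.random() < BASELINE_RATE for _ in range(N))

print(f"Cases among {N:,} smokers:     {smoker_cases:,}")
print(f"Cases among {N:,} non-smokers: {nonsmoker_cases:,}")
print(f"Smokers with no effect:       {N - smoker_cases:,} "
      f"({(N - smoker_cases) / N:.0%})")

# Exposure clearly produces more cases of the effect, which is what the
# causal claim asserts. Yet roughly 75% of smokers show no effect, so
# healthy-smoker anecdotes are exactly what we should expect to find.
```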

That said, the exceptions given in the anecdotes can provide a reason to be suspicious of a claimed causal connection, but this suspicion must be proportional to the evidence provided by the anecdote. For example, the fact that Alan Magee survived a fall of 20,000 feet from his B-17 bomber in WWII does show that a human can survive such a fall. However, it does not serve to disprove the general claim that falls from such great heights are usually fatal.

 Anecdotes can also provide the basis for additional research. For example, the fact that some people can be exposed to certain pathogens without getting sick suggests that they would be worth examining to see how their immunity works and whether this could benefit the general population. As another example, the fact that people do sometimes survive falls from aircraft does provide a reason for investigating how this works and how this information might be useful.

 

Defense: The defense against the first version of this fallacy is to keep in mind that an anecdote does not prove or disprove a general claim. It is especially important to be on guard against anecdotes that have strong persuasive force, such as ones that are very vivid or that line up nicely with one’s biases.

For the second version, the person committing it will ironically raise the red flag for this fallacy: they will admit that they are rejecting statistical evidence in favor of an anecdote. In effect, they are telling you to believe the one piece of evidence they like over the weight of evidence they dislike. To avoid inflicting this fallacy on yourself, be on guard against the tendency to confuse the psychological force of an anecdote with its logical force.

 

Example #1

Jane: “Uncle Bill smoked a pack a day since he was 11 and he lived to be 90. So, all that science and medical talk about smoking being bad is just a bunch of garbage.”

Example #2

John: “Oh no! That woman is bringing a pit bull into the dog park! Everyone get their dogs and run away!”

Sally: “Oh, don’t worry. I know that people think that pit bulls are aggressive and that there are all these statistics about them being dangerous dogs.”

John: “Yeah, that is why I’m leaving before your monster kills my dog.”

Sally: “But look at how sweet my pit bull Lady Buttercup is—she has never hurt anyone. So, all that bull about pit bulls being aggressive is just that: bull.”

Example #3

Bill: “Hey Sally, you look a bit under the weather.”

Sally: “Yeah, I think I’m getting a cold. In the summer. In Florida. This sucks.”

Bill: “My dad and I almost never get colds. You should do what we do.”

Sally: “What is that?”

Bill: “Drink red wine with every meal. My dad said that is the secret to avoiding colds. When I got old enough to buy wine, I started doing it.”

Sally: “Every meal? Even breakfast?”

Bill: “Yes.”

Sally: “Red wine goes with donuts?”

Bill: “It pairs perfectly.”

Ted: “That is baloney. I know a guy who did that and he had colds all the time. Now, this other guy told me that having a slice of cheese with every meal keeps the colds away. I never saw him so much as sniffle.”

Sally: “Why not just have wine and cheese every meal?”

Example #4

Fred: “You are wasting time studying.”

George: “What? Why aren’t you studying? The test is going to be hard.”

Fred: “No need.”

George: “You’re not going to cheat, are you?”

Fred: “No, of course not! But I heard about this woman, Keisha. She aced the last test. She went to the movies and forgot to study. So, I’m going with the Keisha Method—I just need to pick a movie and my A is assured.”

Example #5

Tucker: “Did you hear that story about the immigrant who killed that student?”

Sally: “I did. Terrible.”

Tucker: “So, I bet you’ll change your stance on immigration. After all, they are coming here to commit crimes and endangering Americans.”

Sally: “The statistics show otherwise.”

Tucker: “That is your opinion. That murder shows otherwise.”

Example #6

Sally: “Did you hear that story about the immigrant who saved ten Americans and is now attending medical school and law school at the same time?”

Tucker: “I did. Impressive.”

Sally: “So, I bet you’ll change your stance on immigration. After all, they are amazing people who will do great things.”

 

This is from my book 110 Fallacies.

Also Known as: Misuse of Authority, Irrelevant Authority, Questionable Authority, Inappropriate Authority, Ad Verecundiam

Description:

The fallacious Appeal to Authority is a fallacy of standards rather than a structural fallacy. A fallacious Appeal to Authority has the same form as a strong Argument from Authority. As such, determining when this fallacy occurs is a matter of assessing an Argument from Authority to see if it meets the standards presented below. The general form of the reasoning is as follows:

 

Premise 1: Person A is (claimed to be) an authority on subject S.

Premise 2: Person A makes claim C about subject S.

Conclusion: Therefore, C is true.

 

This reasoning is fallacious when person A is not qualified to make reliable claims in subject S. In such cases the reasoning is flawed because the fact that an unqualified person makes a claim does not provide any justification for the claim. The claim could be true, but the fact that an unqualified person made the claim does not provide any rational reason to accept the claim as true.

When a person falls prey to this fallacy, they are accepting a claim as true without having adequate evidence. More specifically, the person is accepting the claim because they erroneously believe the person making the claim is an expert. Since people tend to believe those they take to be authorities, this fallacy is a common one.

Since this sort of reasoning is fallacious only when the person is not a legitimate authority in a particular context, it is necessary to provide the standards/criteria for assessing the strength of this argument. The following standards provide a guide to such an assessment:

 

  1. The person has sufficient expertise in the subject matter in question.

Claims made by a person who lacks the needed degree of expertise to make a reliable claim are not well supported. In contrast, claims made by a person with the needed expertise will be supported by the person’s competence in the area.

Determining whether a person has the needed degree of expertise can be very difficult. In academic fields (such as philosophy, engineering, and chemistry), a person’s formal education, academic performance, publications, membership in professional societies, papers presented, awards won and so forth can all be reliable indicators of expertise. Outside of academic fields, other standards will apply. For example, having sufficient expertise to make a reliable claim about how to tie a shoelace only requires the ability to tie the shoelace. Being an expert does not always require having a university degree. Many people have high degrees of expertise in sophisticated subjects without having ever attended a university. Further, it should not be assumed that a person with a degree must be an expert.

What is required to be an expert is often a matter of debate. For example, some people claim expertise because of a divine inspiration or a special gift. The followers of such people accept such credentials as establishing the person’s expertise while others often see these self-proclaimed experts as deluded or even as charlatans. In other situations, people debate rationally over what sort of education and experience is needed to be an expert. Thus, what one person may take to be a fallacious appeal another person might take to be a well-supported line of reasoning.

  2. The claim being made by the person is within their area(s) of expertise.

A person making a claim outside of their area(s) of expertise should not be considered an expert in that area. So, that claim is not backed by expertise and should not be accepted based on an Appeal to Authority.

Because of the vast scope of human knowledge, it is impossible for a person to be an expert on everything or even many things. So, an expert will only be an expert in certain subject areas. In most other areas they will have little or no expertise. Thus, it is important to determine what subject a claim falls under.

Expertise in one area does not automatically confer expertise in another area, even if they are related. For example, being an expert physicist does not make a person an expert on morality or politics. Unfortunately, this is often overlooked or intentionally ignored. In fact, advertising often rests on a violation of this condition. Famous actors and sports heroes often endorse products that they are not qualified to assess. For example, a person may be a famous actor, but that does not automatically make them an expert on cars or reverse mortgages.

  3. There is an adequate degree of agreement among the other experts in the subject in question.

If there is significant legitimate dispute between qualified experts, then it will be fallacious to make an Appeal to Authority using the disputing experts. This is because for almost any claim made by one expert there will be a counterclaim made by another, so such an appeal would tend to be futile. Instead, the dispute must be settled by considering the issues under dispute. Since all sides in such a dispute can invoke qualified experts, the dispute cannot be rationally settled by an Argument from Authority.

There are many fields in which there is significant reasonable dispute. Economics, ethics, and law are all good examples of such disputed fields. For example, trying to settle an ethical issue by appealing to the expertise of one ethicist can easily be countered by pointing to an equally qualified expert who disagrees.

No field has complete agreement, and some degree of dispute is acceptable. How much is acceptable is, of course, a matter of debate. Even a field with a great deal of dispute might contain areas of significant agreement. In such cases, an Argument from Authority could be a good argument. For example, while philosophers disagree on most things, there is a consensus among the experts about basic logic. As such, appealing to the authority of an expert on logic in a matter of logic would generally be a strong Argument from Authority.

When it comes to claims that most of the qualified experts agree on, the rational thing for a non-expert to do is to accept that the claim is probably true. After all, a non-expert is not qualified to settle the question of which experts are correct, and the majority of qualified experts is more likely to be right than the numerical minority. Non-experts often commit this fallacy because they wrongly think that because they prefer the claim of the minority of experts, those experts must be right.

  4. The person in question is not significantly biased.

If an expert is significantly biased, then the claims they make will be less credible. So, an Argument from Authority based on a biased expert will tend to be fallacious. This is because the evidence will usually not justify accepting the claim.

Experts, being people, are vulnerable to biases and prejudices. If there is evidence that a person is biased in some manner that would affect the reliability of their claims, then an Argument from Authority based on that person is likely to be fallacious. Even if the claim is true, the fact that the expert is biased weakens the argument. This is because there would be reason to believe that the expert might not be making the claim because they have carefully considered it using their expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice.

No person is completely objective. At the very least, a person will be favorable towards their own views (otherwise they would not hold them). Because of this, some degree of bias must be accepted, provided it is not significant. What counts as a significant degree of bias is open to dispute and can vary a great deal from case to case. For example, many people would probably suspect that doctors who were paid by tobacco companies to research the effects of smoking would be biased while other people might believe (or claim) that they would be able to remain objective.

  5. The area of expertise is a legitimate area or discipline.

Certain areas in which a person may claim expertise may have no legitimacy or validity as areas of knowledge. Obviously, claims made in such areas tend to lack credibility.

What counts as a legitimate area of expertise can be difficult to determine. However, there are cases which are clear cut. For example, if a person claimed to be an expert at something they called “chromabullet therapy” and asserted that firing painted rifle bullets at a person would cure cancer, it would be unreasonable to accept their claim based on their “expertise.” After all, their expertise is in an area which has no legitimate content. The general idea is that to be a legitimate expert a person must have mastery over a real field or area of knowledge.

As noted above, determining the legitimacy of a field can often be difficult. In European history, various scientists had to struggle with the Church and established traditions to establish the validity of their disciplines. For example, experts on evolution faced an uphill battle in getting the legitimacy of their area accepted.

A modern example involves psychic phenomena. Some people claim that they are certified “master psychics” and that they are experts in the field. Other people contend that their claims of being certified “master psychics” are simply absurd since there is no real content to such an area of expertise. If these people are right, then anyone who accepts the claims of these “master psychics” is the victim of a fallacious Appeal to Authority.

  6. The authority in question must be identified.

A common variation of the typical Appeal to Authority fallacy is an Appeal to an Unnamed Authority. This fallacy is also known as an Appeal to an Unidentified Authority.

This fallacy is committed when a person asserts that a claim is true because an expert or authority makes the claim, but the person does not identify the expert. Since the expert is not identified, there is no way to tell whether they are an expert. Unless the source is identified and their expertise established, there is no reason to accept the claim on this basis.

This sort of reasoning is not unusual. Typically, the person making the argument will say things like “I have a book that says…”, or “they say…”, or “the experts say…”, or “scientists believe that…”, or “I read in the paper…” or “I saw on TV…” or some similar statement. In such cases the person is often hoping that the listener(s) will simply accept the unidentified source as a legitimate authority and believe the claim being made. If a person accepts the claim simply because they accept the unidentified source as an expert (without good reason to do so), they have fallen prey to this fallacy.

 

Non-Fallacious Arguments from Authority

Not all Arguments from Authority are fallacious. This is fortunate since people must rely on experts. No one person can be an expert on everything, and people do not have the time or ability to investigate every single claim themselves.

In some cases, Arguments from Authority will be good arguments. For example, when a person goes to a skilled doctor and the doctor tells them that they have a cold, the patient has good reason to accept the doctor’s conclusion. As another example, if a person’s computer is acting odd and their friend, who is a computer expert, tells them it is probably the hard drive, then they have good reason to accept this claim.

What distinguishes a fallacious Appeal to Authority from a good Argument from Authority is that the argument effectively meets the six conditions discussed above.

In a good Argument from Authority, there is reason to believe the claim because the expert says the claim is true. This is because a qualified expert is more likely to be right than wrong when making claims within their area of expertise. In a sense, the claim is being accepted because it is reasonable to believe that the expert has tested the claim and found it to be reliable. So, if the expert has found it to be reliable, then it is reasonable to accept it as being true. Thus, the listener is accepting a claim based on the testimony of the expert.

It should be noted that even a good Argument from Authority is not an exceptionally strong argument. After all, a claim is accepted as true because a credible person says it is true. Arguments that deal directly with evidence relating to the claim itself will tend to be stronger.

 

Defense: The main defense against this fallacy is to apply the standards of the Argument from Authority when considering any appeal to authority important enough to be worth assessing. You should especially be on guard when you agree with the (alleged) expert and want to believe they are correct. While there are legitimate uses for claims by anonymous experts, the credibility of these claims rests on the expertise of the person reporting them: you are trusting that they are honestly reporting the claim and are qualified to assess that the anonymous expert is credible.

Example #1:

Bill: “I believe that abortion is morally acceptable. After all, a woman should have a right to her own body.”

Jane: “I disagree completely. Dr. Johan Skarn says that abortion is always morally wrong, regardless of the situation. He must be right: after all, he is a respected expert in his field.”

Bill: “I’ve never heard of Dr. Skarn. Who is he?”

Jane: “He’s that guy that won the Nobel Prize in physics for his work on cold fusion.”

Bill: “I see. Does he have any expertise in morality or ethics?”

Jane: “I don’t know. But he’s a world-famous expert, so I believe him.”

Example #2:

Kintaro: “I don’t see how you can consider Stalin to be a great leader. He killed millions of his own people, he crippled the Soviet economy, kept most of the people in fear and laid the foundations for the violence that is occurring in much of Eastern Europe.”

Dave: “Yeah, well you say that. However, I have a book at home that says that Stalin was acting in the best interest of the people. The millions that were killed were vicious enemies of the state and they had to be killed to protect the rest of the peaceful citizens. This book lays it all out, so it must be true.”

Example #3:

Actor: “I’m not a doctor, but I play one on the hit series ‘Bimbos and Studmuffins in the OR.’ You can take it from me that when you need a fast acting, effective and safe pain killer there is nothing better than MorphiDope 2000. That is my considered medical opinion.”

Example #4:

Sasha: “I played the lottery today and I know I am going to win something.”

Siphwe: “What did you do, rig the outcome?”

Sasha: “No, silly. I called my Super Psychic Buddy at the 1-900-MindPower number. After consulting his magic Californian Tarot deck, he told me my lucky numbers.”

Siphwe: “And you believed him?”

Sasha: “Certainly, he is a certified Californian Master-Mind Psychic. That is why I believe what he has to say. I mean, like, who else would know what my lucky numbers are?”

Example #5

Sam: “I’m going to get the Shingles vaccine based on my doctor’s advice.”

Ted: “Well, I saw this guy on YouTube who says that the vaccine has microchips in it. And that it causes autism.”

Sam: “Are they a doctor or scientist?”

Ted: “Well, I think he was a doctor once. He said something about getting his medical license revoked because They are out to get him and want to silence him.”

Sam: “Does he have any evidence for these claims?”

Ted: “Look, you can believe your doctor if you want, but don’t come crying to me when the microchips take over your brain and you catch autism.”

Sam: “You don’t catch autism.”

Ted: “Whatever.”

 

Description:

This fallacy occurs when someone uncritically rejects a prediction or the effectiveness of the responses to it when the predicted outcome does not occur:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken (based on prediction P).

Premise 3: X did not happen, so Prediction P was wrong.

Conclusion: Response R should not have been taken (or there is no longer a need to take Response R).

 

The error occurs because of a failure to consider the obvious: if there is an effective response to a predicted outcome, then the prediction will appear to be “wrong” because the predicted outcome will not occur.

While a prediction that turns out to be “wrong” is technically wrong, the error here is to uncritically conclude that this proves the response was not needed (or there is no longer any need to keep responding). The initial prediction assumes there will not be a response and is usually made to argue for responding. If the response is effective, then the predicted outcome will not occur, which is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed is thus a mistake in reasoning.

To use a silly analogy, imagine that we are in a car and driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we did not turn, then we would have probably died. And if we turn back too soon, then we will probably die. The point of turning is so that the predicted outcome of death will not occur.

A variation on this fallacy involves inferring the prediction was bad because it turned out to be “wrong”:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: Prediction P was wrong about X occurring if response R was not taken.

 

While the prediction would be “wrong” in that the predicted outcome did not occur, this does not disprove the prediction that X would occur without the response. Going back to the car example, the prediction that we would go off the cliff and die if we did not turn is not disproven if we turn and then do not die. In fact, that is the result we want.

Since it lacks logical force, this fallacy gains its power from psychological force. Sorting out why something did not happen can be difficult and it is easier to go along with biases, preconceptions, and ideology than it is to sort out a complicated matter.

This fallacy can be committed in good faith out of ignorance. When committed in bad faith, the person using it is aware of the fallacy. The intent is often to argue against continuing the response or to make a bad faith attack on those who implemented or argued for the response. For example, someone might argue in bad faith that a tax cut was not needed to avoid a recession because the predicted recession did not occur after the tax cut. While the tax cut might not have been a factor, simply asserting that it was not needed because the recession did not occur would commit this fallacy.

 

Defense: To avoid inflicting this fallacy on yourself or falling for it, the main defense is to keep in mind that a prediction based on the assumption that a response will not be taken can turn out to be “wrong” if that response is taken. Also, you should remember that the failure of a predicted event to occur after a response is made to prevent it would count as some evidence that the response was effective rather than as proof it was not needed. But care should be taken to avoid uncritically inferring that the response was needed or effective because the predicted event did not occur.

 

Example #1

Julie: “The doctor said that my blood pressure would keep going up unless I improved my diet and started exercising.”

Kendra: “How is your blood pressure now?”

Julie: “Pretty good. I guess I don’t need to keep eating all those vegetables and I can stop going on those walks.”

Kendra: “Why?”

Julie: “Well, she was wrong. My blood pressure did not go up.”

Example #2

Robert: “While minority voters might have needed some protection long ago, I am confident we can remove all those outdated safeguards.”

Kelly: “Why? Aren’t they still needed? Aren’t they what is keeping some states from returning to the days of Jim Crow?”

Robert: “Certainly not. People predicted that would happen, but it didn’t. So, we obviously no longer need those protections in place.”

Kelly: “But, again, aren’t these protections what is keeping that from happening?”

Robert: “Nonsense. Everything will be fine.”

Example #3

Lulu: “I am so mad. We did all this quarantining, masking, shutting down, social distancing and other dumb things for so long and it is obvious we did not need to.”

Paula: “I didn’t like any of that either, but the health professionals say it saved a lot of lives.”

Lulu: “Yeah, those health professionals said that millions of people would die if we didn’t do all that stupid stuff. But look, we didn’t have millions die. So, all that was just a waste.”

Paula: “Maybe doing all that was why more people didn’t die.”

Lulu: “That is what they want you to think.”

 

Since I often reference various fallacies in blog posts I decided to also post the fallacies. These are from my book 110 Fallacies.

Description:

This fallacy is committed when a person places unwarranted confidence in drawing a conclusion from statistics that are unknown.

 

Premise 1: “Unknown” statistical data D is presented.

Conclusion: Claim C is drawn from D with greater confidence than D warrants.

 

Unknown statistical data is just that, statistical data that is unknown. This data is different from “data” that is simply made up because it has at least some foundation.

One type of unknown statistical data is when educated guesses are made based on limited available data. For example, when experts estimate the number of people who use illegal drugs, they are making an educated guess. As another example, when the number of total deaths in any war is reported, it is (at best) an educated guess because no one knows for sure exactly how many people have been killed.

Another common type of unknown statistical data is when it can only be gathered in ways that are likely to result in incomplete or inaccurate data. For example, statistical data about the number of people who have affairs is likely to be in this category. This is because people generally try to conceal their affairs.

Obviously, unknown statistical data is not good data. But drawing an inference from unknown data need not always be unreasonable or fallacious. This is because the error in the fallacy is being more confident in the conclusion than the unknown data warrants. If the confidence in the conclusion is proportional to the support provided by the evidence, then no fallacy would be committed.

For example, while the exact number of people killed during the war in Afghanistan will remain unknown, it is reasonable to infer from the known data that many people have died. As another example, while the exact number of people who do not pay their taxes is unknown, it is reasonable to infer that the government is losing some revenue because of this.

The error that makes this a fallacy is to place too much confidence in a conclusion drawn from unknown data. Or to be a bit more technical, to overestimate the strength of the argument based on statistical data that is not adequately known.

This is an error of reasoning because, obviously enough, a conclusion is being drawn that is not adequately justified by the premises. This fallacy can be committed in ignorance or intentionally committed.

Naturally, the way in which the statistical data is gathered also needs to be assessed to determine whether other errors have occurred, but that is another matter.

 

Defense: The main defense against this fallacy is to keep in mind that inferences drawn from unknown statistics need to be proportional to the quality of the evidence. The error, as noted above, is placing too much confidence in unknown statistics.

Sorting out exactly how much confidence can be placed in such statistics can be difficult, but it is wise to be wary of any such reasoning. This is especially true when the unknown statistics are being used by someone who is likely to be biased. That said, to simply reject claims because they are based on unknown statistics would also be an error.
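One rough way to gauge how much confidence data warrants is a standard margin-of-error calculation. The sketch below uses the normal approximation for a proportion; the 40% estimate and the sample sizes are invented, and the point is how the honest range of uncertainty shrinks only as the data improves:

```python
# A crude 95% confidence interval for an estimated proportion, using the
# normal approximation. The 40% estimate and sample sizes are invented.
import math

def interval_95(p_hat: float, n: int) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion."""
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)

for n in (25, 250, 25_000):
    low, high = interval_95(0.40, n)
    print(f"n = {n:>6}: estimate 40%, plausible range {low:.0%} to {high:.0%}")

# With n = 25 the honest range runs from roughly 21% to 59%; treating a
# bare "40%" as precise places far more confidence in the figure than
# the data warrants, which is the heart of this fallacy.
```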

 

Example #1

“Several American Muslims are known to be terrorists or at least terrorist supporters. As such, I estimate that there are hundreds of actual and thousands of potential Muslim-American terrorists. Based on this, I am certain that we are in grave danger from this large number of enemies within our own borders.”

Example #2

“Experts estimate that there are about 11 million illegal immigrants in the United States. While some people are not worried about this, consider the fact that the experts estimate that illegals make up about 5% of the total work force. This explains that percentage of American unemployment since these illegals are certainly stealing 5% of America’s jobs. Probably even more, since these lazy illegals often work multiple jobs.”

Example #3

Sally: “I just read an article about cheating.”

Jane: “How to do it?”

Sally: “No! It was about the number of men who cheat.”

Sasha: “So, what did it say?”

Sally: “Well, the author estimated that 40% of men cheat.”

Kelly: “Hmm, there are five of us here.”

Janet: “You know what that means…”

Sally: “Yes, two of our boyfriends are cheating on us. I always thought Bill and Sam had that look…”

Janet: “Hey! Bill would never cheat on me! I bet it is your man. He is always giving me the eye!”

Sally: “What! I’ll kill him!”

Janet: “Calm down. I was just kidding. I mean, how can they know that 40% of men cheat? I’m sure none of the boys are cheating on us. Well, except maybe Sally’s man.”

Sally: “Hey!”

Example #4

“We can be sure that most, if not all, rich people cheat on their taxes. After all, the IRS has data showing that some rich people have been caught doing so. Not paying their fair share is exactly what the selfish rich would do.”