
As discussed in the previous essays on this subject, conspiracy theorists often use the methods of critical thinking to support and defend their theories. One method, which is a core component of scientific reasoning, is the inference to the best explanation. As the name suggests, this reasoning aims at finding the best explanation and this typically involves pitting competing explanations against each other until the best emerges.

This reasoning can be seen as a version of the argument by elimination, which has two basic forms. One is the extermination method, in which the goal is to show that something cannot be the case: present all the possible options, refute each of them, and then conclude that every option has been eliminated. As an example, Kant used this method to argue that the existence of God cannot be proven (and also that it cannot be disproven). His reasoning was as follows:

 

Premise 1: There are only three possible ways to prove the existence of God: the teleological argument, the ontological argument and the cosmological argument.

Premise 2: None of these arguments can succeed in proving the existence of God.

Conclusion: There is no way to prove the existence of God.

 

While this is a valid deductive argument (if all the premises are true, then the conclusion must be true), showing that it is also sound (valid plus all true premises) is the real challenge. Doing so requires showing that there are only three ways to prove God’s existence and that they all must fail.

Since this method aims at total elimination, it is only useful in this context when trying to argue that no explanation is possible.

The second version is like a marathon: the competition runs until one victor emerges from the pack. In its simplest form (which has but two options), it can be presented as a disjunctive syllogism:

 

Premise 1:  P or Q

Premise 2: Not Q

Conclusion: P

 

It can also be expanded to include potentially infinite options:

 

Premise 1: P or Q or R or …

Premise 2: Not Q and Not R and Not…

Conclusion: P

 

This sort of reasoning is often used in mystery and crime stories: if there are only five possible suspects and one of them did it, then eliminating four of them will reveal the culprit. This presentation can be misleading, however. While the logic is valid, to avoid committing the fallacy of false dilemma it must be the case that the two (or more) options presented are the only viable options. To the degree that other options remain a possibility, the truth of the first premise remains in doubt.
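To make the machinery concrete, here is a minimal sketch in Python (the suspects and the eliminations are invented) of the marathon version at work:

```python
# A toy elimination: premise 1 says the culprit is one of these five suspects.
suspects = {"Alice", "Bob", "Carol", "Dave", "Eve"}

# Hypothetical findings that rule out four of them (premise 2).
eliminated = {"Alice", "Bob", "Carol", "Dave"}

remaining = suspects - eliminated
if len(remaining) == 1:
    # The conclusion is only as good as the premises: the culprit must really
    # be in `suspects`, and each elimination must actually hold.
    print(f"By elimination, the culprit is {remaining.pop()}.")
else:
    # With zero or several suspects left, the argument cannot conclude.
    print(f"Elimination incomplete; still possible: {sorted(remaining)}")
```

The sketch also makes the fragility visible: if the real culprit is missing from the list of suspects (a false dilemma) or one of the eliminations fails, the conclusion is worthless even though the logic is valid.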

Conspiracy theorists (and many others) sometimes make the mistake of falling into a false dilemma when they claim that their refutation of their main competitor(s) proves their theory. For example, a flat-earther might reason like this:

 

Premise 1: The earth is flat or the earth is a sphere.

Premise 2: The earth is not a sphere.

Conclusion: The earth is flat.

 

The obvious problem is that while the best-known earth shapes are spherical and flat, this does not entail that those are the only options. There are, after all, many other shapes in geometry and the flat option only wins by elimination when all those shapes have also been eliminated.

It is at this point that a skeptic can argue that one can never be sure that all the options have been considered, so one can never know that the right explanation has been found. After all, the skeptic can say, the right explanation might not even be in the competition. Conspiracy theorists sometimes use this to cast doubt on an accepted explanation: it might be the best among the known explanations, yet still not be the true one. Unfortunately for the conspiracy theorist, this same doubt applies equally to their conspiracy theory, so they need more than skepticism to support their own explanation.

While the skeptic might be right about the impossibility of certainty, it is still possible to hold the competition between the known explanations while keeping in mind that alternatives might have been missed. But the mere fact that there could be missed alternatives does not itself show that a good explanation has not been found. To use an analogy, think of a career. While there might be a better match for a person out there, this does not entail that their current career is not a good one (or even the best). After all, a career can be assessed by various standards and against the known alternatives. The same holds for explanations. So, while the possibility of unknown explanations should be kept in mind, their mere possibility should not be taken as refuting an explanation.

The second challenge is that of establishing the second premise—eliminating the competition. To the degree that the elimination of the other explanations is in doubt, the truth of the second premise remains in doubt. This leads to the matter of how explanations compete, which is the subject of the next essay in this series.

 

As noted in the previous essays in the series, people who believe in conspiracy theories can use good methods of argumentation to establish their claims. As such, it would be an error to simply dismiss such folks as automatically being irrational or illogical. In this essay I will briefly look at how the argument by example can be used to support a conspiracy theory and how to assess such reasoning to avoid accepting fallacies.

An argument by example is, obviously enough, when one tries to support a conclusion by presenting examples. It has the following form, although people generally present it informally:

 

           Premise 1: Example 1 is an example that supports claim P.

           Premise n: Example n is an example that supports claim P.

           Conclusion: Claim P is true.

 

In this case n is a variable standing for the number of the premise in question and P is a variable standing for the claim under consideration. To use a non-conspiracy example, a politician might argue that they are competent in foreign policy by giving examples of their success in this area.

There are many ways this argument can be used in conspiracy theories. One is to argue for the existence of conspiracies in general by providing examples that purport to show that conspiracies do occur. For example, a Flat Earther might try to prove that it is reasonable to believe that supposedly proven science can be a hoax or conspiracy by giving examples of such occurrences (such as the Piltdown Man hoax).

While this approach is a legitimate use of the argument, it is bad logic to move from the general claim that conspiracies have occurred to the conclusion that a specific conspiracy theory is true. To use an analogy, consider counterfeit art. It is easy to find many examples of counterfeit art, and this supports the conclusion that art has been counterfeited. But it would not follow that a specific work of art, such as the Mona Lisa, is a counterfeit.

 The second method is to argue for a specific conspiracy theory by presenting examples that support the theory. For example, someone who believes the Illuminati run the world could present examples of what they think is the Illuminati in action and conclude their theory is correct. The question is whether the examples adequately support the conclusion, and this concern leads to the standards used to assess this argument.

First, the more examples, the stronger the argument. Second, the more relevant the examples, the stronger the argument. Using the Illuminati example as an illustration, the question would be whether the examples provide evidence of the Illuminati. As should be suspected, this is where the main dispute would occur. The person arguing that the Illuminati is real would be seen by their critics as seeing things that are not there, while the proponent of the theory would think their critics blind.

Third, the examples must be specific and clearly identified. Vague and unidentified examples provide little support. Conspiracy theories are often supported by vague and unidentified examples, but sometimes the examples are precise and clearly identified. For example, the Illuminati theorist might point to a detailed and documented account of UN activities they see as an example of Illuminati influence.

Fourth, counterexamples must be considered. A counterexample is an example that counts against the claim; one way to look at a counterexample is as an example that supports the denial of the conclusion being argued for. The more counterexamples and the more relevant they are, the weaker the argument. In the case of the Illuminati example, counterexamples would be cases that tell against Illuminati control. One common failing of conspiracy theories is that counterexamples are ignored or downplayed. As would be imagined, this can lead to a battle over whether a supposed counterexample is really a counterexample. For example, if someone claims the world is ruled by a super-competent Illuminati, counterexamples would include all the things that arise from poor decision making and ignorance. But, of course, a clever theorist can try to explain away these supposed counterexamples: chaos, wars and economic disasters are not evidence against a global Illuminati, but proof it exists, because the Illuminati are brilliantly causing people to make the bad decisions that produce wars and economic disasters.
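One rough way to picture these four standards together is as a weighing of examples against counterexamples, with relevance acting as a weight. The Python sketch below uses invented numbers and is not a real scoring method; it is only meant to make the structure of the assessment explicit:

```python
# Invented relevance weights in [0, 1]; vague or unidentified examples
# (standard three) get weights near zero.
examples = [0.8, 0.6, 0.1]     # evidence offered for the claim
counterexamples = [0.9, 0.7]   # evidence against it (standard four)

support, opposition = sum(examples), sum(counterexamples)

# Ignoring counterexamples (the common failing noted above) amounts to
# silently treating `opposition` as zero.
if support > opposition:
    print(f"The examples favor the claim ({support:.1f} vs {opposition:.1f}).")
else:
    print(f"The counterexamples dominate ({opposition:.1f} vs {support:.1f}).")
```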

As such, conspiracy theorists who use the argument by example are not being irrational or illogical. They are using a basic inductive tool. The problem lies in how they assess their examples and in their failure to give due weight to counterexamples. That said, the battles over the relevance of examples, over whether a counterexample really is a counterexample, and over the weight given to examples can become very complicated. As such, theorists who are willing to apply the standards and consider criticism should not be simply dismissed. As with the other types of reasoning, conspiracy theorists are using good tools badly.

This essay continues the discussion of the logic of conspiracy theories. Conspiracy theorists use the same logical tools as everyone else, but they use them in different ways. In the previous essay I discussed how conspiracy theorists use the argument from authority.  I will now look at the analogical argument.

In an analogical argument you conclude that two things are alike in a certain respect because they are alike in other respects. An analogical argument usually has three premises and a conclusion. The first two premises establish the analogy by showing that the things (X and Y) being compared are similar in certain respects (properties P, Q, R, etc.).  The third premise establishes that X has an additional property, Z. The conclusion asserts that Y has property Z as well. The form looks like this:

 

           Premise 1: X has properties P, Q, and R.

           Premise 2: Y has properties P, Q, and R.

           Premise 3: X has property Z.

           Conclusion: Y has property Z.

 

While one might wonder how reasoning by analogy could lead to accepting a conspiracy theory, it works very well in this role. If property Z is a feature of a conspiracy theory, such as the government harming citizens, then all that is needed to make the argument is something else with that property. Then it is easy to draw the analogy. 

For example, consider an anti-vaxxer who thinks there is a conspiracy to convince people that unsafe vaccines are safe. They could make an analogical argument comparing vaccines to what happened during the opioid epidemic. That epidemic was caused by pharmaceutical companies lying about the dangers of opioids, doctors being bribed to prescribe them, pharmacies going along with the prescriptions, and the state allowing it all to happen. Looked at this way, concluding that what was true of opioids is also true of vaccines can seem reasonable. Yet the conspiracy theory about vaccines is mistaken. So, how does one assess this sort of reasoning, and what mistakes do conspiracy theorists make? The answer is that there are three standards for assessing the analogical argument, and conspiracy theorists do not apply them correctly.

First, the more properties X and Y have in common, the better the argument. The more two things are alike in other ways, the more likely it is that they will be alike in a specific way. In the case of vaccines and opioids, there are many shared properties; for example, both involve companies, doctors, pharmacies and the state.

Second, the more relevant the shared properties are to property Z, the stronger the argument. A specific property, for example P, is relevant to property Z if the presence or absence of P affects the likelihood that Z will be present.  Third, it must be determined whether X and Y have relevant dissimilarities as well as similarities. The more dissimilarities and the more relevant they are, the weaker the argument.

In the case of inferring an unreal conspiracy to sell dangerous vaccines from the very real opioid conspiracy, one must weigh the similarities and differences. While there are clearly relevant similarities, there are some crucial differences. Most importantly, vaccines have been extensively tested and are known to be safe. In contrast, all the scientific evidence supports common sense about opioids: they are addictive and potentially dangerous. While people want to make money off both, it does not follow that vaccines are unsafe merely because opioids are dangerous. And while the analogy between the opioid conspiracy and the vaccine conspiracy breaks down, there is nothing wrong with reasoning by analogy. If the standards are applied and relevant differences are considered, this method of reasoning is quite useful.
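As a rough illustration of the three standards (not a real metric), the assessment can be sketched as a comparison of weighted similarities and dissimilarities; the property lists and weights below are invented for the vaccine/opioid case discussed above:

```python
# Shared properties of the opioid case (X) and the vaccine case (Y), with
# invented weights for how relevant each is to Z ("the product is unsafe").
shared = {
    "companies profit from the product": 0.2,
    "doctors prescribe or administer it": 0.2,
    "the state regulates it": 0.3,
}
# Relevant dissimilarities (standard three), also with invented weights.
dissimilar = {
    "extensive independent testing shows the product is safe": 0.9,
}

net = sum(shared.values()) - sum(dissimilar.values())
# One highly relevant dissimilarity can outweigh several weakly relevant
# similarities, which is exactly how the vaccine/opioid analogy breaks down.
print(f"Net analogical support for Z: {net:+.1f}")
```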

It is rational for conspiracy theorists to consider real cases of wrongdoing. For example, we know that governments do engage in false flag operations and lie to “justify” wars and violence. But this fact does not prove, by itself, that any specific event is a false flag or a lie. As such, the mistake made by conspiracy theorists is not arguing by analogy, but not being careful enough in applying the standards. In doing so, they commit the fallacy of false analogy.

While details of each conspiracy theory vary, they often attribute great power and influence to a small group engaging in nefarious activities. A classic example is the idea that NASA faked the moon landings. There are also numerous “false flag” conspiracy theories ranging from the idea that the Bush administration was behind 9/11 to the idea that school shootings are faked by anti-gun Democrats. There are also various medical conspiracy theories, such as those fueling the anti-vaccination movement.

There has been considerable research into why people believe in conspiracy theories. A plausible explanation is that anxiety and feeling a loss of control lead to accepting such theories. Ironically, people who embrace conspiracy theories seem less inclined to act against the conspiracy, perhaps because they feel helpless in the face of such imagined power. But there are some exceptions, such as when the conspiracy theory about Hillary Clinton running a slavery operation in a pizzeria led to a concerned citizen shooting up the place.

It is tempting to embrace a stereotype of the conspiracy theorist: someone immune to logic, oblivious to opposing evidence and perhaps suffering from mental illness. To broadly dismiss conspiracy theorists using this stereotype would be an error, though it does apply in some cases. Interestingly, some conspiracy theorists use  the same tools of logic and reasoning employed by critical thinkers and I will endeavor to illustrate this in a series of essays.

Since the world is a complicated place and is beyond the understanding of any one person, we often turn to experts. For example, most of us lack the time and resources to investigate immigration, so we must rely on experts. Accepting such claims based on the (alleged) expertise of the person making the claim is to use an argument from authority. This argument has the following form:

 

Premise 1: Person A is (claimed to be) an authority on subject S.

Premise 2: Person A makes claim C about subject S.

Conclusion: Therefore, C is true.

 

This reasoning is inductive (the premises provide a degree of support for the conclusion that is less than complete) and its strength depends on the quality of the authority making the claim. If the authority is qualified to make reliable claims in the subject area, then the argument would be a good one. For example, believing that this is what an argument from authority is because of my expertise as a philosophy professor who has taught critical thinking since 1989 would be good reasoning.  If the alleged authority is not qualified to make reliable claims in the subject area, then the argument would be a fallacious appeal to authority because the premises would not adequately support the conclusion. For example, if you believed what I said about quantum theory because of my alleged expertise, then you would fall victim to this fallacy because my expertise in philosophy does not confer expertise in quantum theory.

Most people who rationally believe any theory believe it based on an argument from authority; the exceptions are the experts themselves. For example, most of us believe in the theory of relativity because of Einstein, not because we have done the scientific research ourselves. In the case of conspiracy theories, believers often use an argument from authority: they believe the theory because an (alleged) expert told them it is true. For example, those who accept the anti-vaccination theory often refer to the debunked paper claiming a causal link between vaccines and autism, or they believe because a celebrity tells them vaccines are dangerous. As such, for almost everyone the reasoned belief in a theory is the result of an argument from authority. So, then, what is the difference between the conspiracy theorist who believes that vaccines are dangerous because of what a celebrity says and a person who accepts relativity because of what Einstein said?

The difference, in general, is that conspiracy theorists fall for fallacious arguments from authority as opposed to accepting good arguments from authority. For example, believing that vaccines cause autism because of a debunked paper or because of what an actor says would be to fall for this fallacy. After all, unless the actor is also a medical expert on vaccines what they say about vaccines has no logical weight.

Resisting fallacious arguments from authority can be challenging, especially when the alleged authority is appealing, or the view being presented is what one wants to believe. However, there are standards by which to assess an argument from authority. To be a good argument, it must be such that:

 

  1. The person has sufficient expertise in the subject matter in question.
  2. The claim being made by the person is within her area(s) of expertise.
  3. There is an adequate degree of agreement among the other experts in the subject in question.
  4. The person in question is not significantly biased.
  5. The area of expertise is a legitimate area or discipline.
  6. The authority in question must be identified.

 

If all these conditions are met, then the conclusion is probably true. However, since the argument from authority is inductive, it suffers from the classic problem of induction: even if all the premises are true, the conclusion could still turn out to be false. So, conspiracy theorists who believe “experts” are using the same argument as good reasoners; they are just using the fallacious version.
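The six conditions work naturally as a checklist. Here is a minimal sketch in Python (the field names and the example values are my own, chosen for illustration) of screening an appeal to authority before giving it any weight:

```python
# The six standards as boolean checks; an appeal to authority only
# deserves (provisional) acceptance when all of them pass.
def credible_authority(source: dict) -> bool:
    checks = [
        source["has_expertise"],         # 1. sufficient expertise
        source["claim_in_field"],        # 2. claim within their area(s)
        source["expert_consensus"],      # 3. adequate agreement among experts
        not source["significant_bias"],  # 4. not significantly biased
        source["legitimate_field"],      # 5. legitimate area or discipline
        source["identified"],            # 6. the authority is identified
    ]
    return all(checks)

# A celebrity making medical claims fails the first two checks.
celebrity = {
    "has_expertise": False, "claim_in_field": False, "expert_consensus": False,
    "significant_bias": True, "legitimate_field": True, "identified": True,
}
print(credible_authority(celebrity))  # False: a fallacious appeal to authority
```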

The question “why lie if the truth would suffice” can be interpreted in at least three ways. One is as an inquiry about the motivation and asks for an explanation. A second is as an inquiry about weighing the advantages and disadvantages of lying. The third way is as a rhetorical question that states, under the guise of inquiry, that one should not lie if the truth would suffice.

Since a general discussion of this question would be rather abstract, I will focus on a specific example and use it as the basis for the discussion. Readers should, of course, construct their own examples using their favorite lie from those they disagree with. I will use Trump’s response to the Democrats’ Green New Deal as my example. While this is something of a flashback to his first term, Trump recently signed an executive order targeting the old Green New Deal.

In 2019 the Democrats proposed a Green New Deal aimed at addressing climate change and economic issues. As with any proposal, rational criticisms can be raised against it. In his first term, Trump claimed the Democrats intend “to permanently eliminate all Planes, Cars, Cows, Oil, Gas & the Military – even if no other country would do the same.”  While there are some Democrats who would do these things, the Democratic Party favors none of that. Looked at rationally, it seems to make no sense to lie about the Green New Deal. If it is bad enough to reject on its own defects, lies would not be needed. If one must lie to attack it, this suggests a lack of arguments against it. To use an analogy, if a prosecutor lies to convict a person, this suggests they have no case—otherwise they would rely on evidence. So, why would Trump lie if the truth would suffice to show the Green New Deal is a terrible plan?

The question of why Trump (or anyone else) lies when the truth would suffice is a matter for psychology, not philosophy. So, I will leave that question to others. This leaves me with the question about the advantages and disadvantages of lying along with the rhetorical question.

The lie about the Green New Deal is a good example of hyperbole and a straw man. Trump himself claims to use the tactic of “truthful hyperbole.” Hyperbole is a rhetorical device in which one makes an extravagant overstatement, such as claiming that the Democrats plan to eliminate cows. The reason hyperbole is not simply called lying is that it is a specific type of untruth: it must have a foundation in truth, inflating or exaggerating something true rather than presenting a complete fiction. The Green New Deal is aimed at making America carbon neutral, and this would impact cars, cows, planes, oil, gas and the military. The extravagant exaggeration is that the proposal would eliminate all of them permanently. This would be as if someone proposed cutting back on dessert at family dinners and were accused of wanting to eliminate meals permanently. Since hyperbole is rhetoric without logic, it has no logical force and does not prove (or disprove) anything. But it can have considerable psychological force in influencing people to believe a claim.

Hyperbole is often used in conjunction with the Straw Man fallacy. This fallacy is committed when a person’s actual position is ignored and a distorted, exaggerated or misrepresented version of that position is criticized in its place. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A has position X.

Premise 2: Person B presents position Y (a distorted version of X).

Premise 3: Person B attacks position Y.

Conclusion: Therefore, X is false or bad.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a position is not a criticism of the actual position. One might as well expect an attack on a poor drawing of a person to hurt the person.

Like hyperbole, the Straw Man fallacy is not based on a simple lie: it involves an exaggeration or distortion of something true. In the case of Trump and the Green New Deal, his “reasoning” is that the Green New Deal should be rejected because his hyperbolic straw man version of it is terrible. Since this is a fallacy, his “reasons” do not support his claim. It is, as always, important to note that Trump could be right about the Green New Deal being a bad idea, but not for the “reasons” he gives. To infer that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy).

While hyperbole has no logical force and a straw man is a fallacy, there are advantages to using them. One advantage is that they are much easier than coming up with good reasons. Criticizing the Green New Deal for what it is requires knowing what it is and considering its possible defects, which takes time and effort. Tweeting out a straw man takes seconds.

The second advantage is that hyperbole and straw men often work, often much better than the truth. In the case of complex matters, people rarely do their homework and do not know that a straw man is a straw man. I have interacted with people who honestly think Democrats plan to eliminate planes and cars. Since this is a bad idea, they reject it, not realizing that it is not the Green New Deal. An obvious defense against hyperbole and the straw man is to know the truth. While this can take time and effort, someone who has the time to post on Facebook or Twitter has the time to do basic fact checking. If not, their ignorance should counsel them to remain silent, though they have the right to express their unsupported views.

As far as working better than the truth, hyperbole and straw men appeal to the target’s fears, anger or hopes. The target is thus motivated to believe in ways that the truth cannot match. People generally find rational argumentation dull and unmoving, especially about complicated issues. If Trump honestly presented real problems with the Green New Deal, complete with supporting data and graphs, he would bore most of his audience and lose them. By using a straw man, he better achieves his goal. This allows for a pragmatic argument for lying when the truth will not suffice.

If telling the truth would not suffice to convince people, then there is the pragmatic argument that if lying would do the job, then lying should be used. For example, if giving an honest assessment of the Green New Deal would bore people while lying would get the job done, then Trump should lie if he wants to achieve his goal. This does, however, raise moral concerns.

If the reason the truth would not suffice is that it does not logically support the claim, then it would be immoral to lie. To use a non-political example, if you would not invest in my new fake company iScam if you knew it was a scam, then getting you to invest in it by lying would be wrong. So, if the Green New Deal would not be refuted by the truth, Trump’s lies about it would be immoral.

But what about cases in which the truth would logically support a claim, yet would not persuade people to accept it? Going back to the Green New Deal example, suppose it really is terrible, but explaining its defects would bore people and they would remain unpersuaded, while a straw man version would persuade many people to reject this hypothetically terrible plan. From a utilitarian standpoint, the lie could be morally justified: if the good of lying outweighed the harm, then it would be the right thing to do. To use an analogy, suppose you were trying to convince a friend not to start a dangerous diet. You have scientific data and good arguments, but you know your friend is bored by data and is largely immune to logic. So, telling them the truth would mean that they would go on the diet and harm themselves. But if you dramatically exaggerate the harm, your friend will abandon the diet. In such a case, the straw man argument would seem to be morally justified, as you are using it to protect your friend.

While this might seem to justify the general use of hyperbole and the straw man, it only justifies their use when the truth does suffice logically but does not suffice in terms of persuasive power. That is, the fallacy is only justified as a persuasive device when there are non-fallacious arguments that would properly establish the same conclusion.

Reasoning is like a chainsaw: useful when used properly, but capable of creating a bloody mess when used badly. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.
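A quick simulation can make both failures visible. The sketch below (Python, with an invented population in which 10% of members have the property of interest) shows how a tiny sample and a biased sample each mislead, while a large random sample does not:

```python
import random

random.seed(0)
# An invented population of 1,000 in which 10% have the property of interest.
population = [True] * 100 + [False] * 900

def estimated_rate(sample):
    return 100 * sum(sample) / len(sample)

# Hasty generalization: a sample of three can easily miss the true rate badly.
tiny = random.sample(population, 3)
print(f"Tiny sample: {estimated_rate(tiny):.0f}% (true rate: 10%)")

# Biased generalization: sampling only those known to have the property
# (like judging an entire group by arrest records) guarantees an estimate of 100%.
biased = [member for member in population if member][:50]
print(f"Biased sample: {estimated_rate(biased):.0f}% (true rate: 10%)")

# A large random sample lands near the truth.
fair = random.sample(population, 500)
print(f"Large random sample: {estimated_rate(fair):.0f}%")
```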

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is nothing new: a fearful generalization is a hasty generalization or a biased generalization. On the other hand, the hallmark of a fearful generalization (that it is fueled by fear) makes it worth considering, especially since addressing the fueling fear seems to be key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many  undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commits crimes needs to include the entire population, not a sample consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun-owners commit crimes needs to include the general population, not just those arrested.

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased generalization. Second, concluding that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy). So, a biased or hasty generalization could have a true conclusion, but that conclusion would not be supported by the generalization. Third, a true conclusion does not mean that a fallacy is not a fallacy: the problem lies in the logic, not the truth of the conclusion. For example, if I see one red squirrel in a forest and infer that all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem easy to neutralize: just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear, and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be to deal with fear in general, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to try to at least briefly defuse the impact of fear is to try the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes, or at least does not fear. This works best when the first premise remains true after the swap is made; otherwise the person can simply reject the swap. This might have some small impact on the emotional level that will help a person work through their fear—assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy” Chad could try swapping in one of his groups and see if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful method for those who want to check their fears. Self-application involves the same basic process: swapping in your groups or groups you like in place of what you fear to see if your reasoning is good or bad.
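The substitution test can even be written down mechanically. This sketch (Python; the group names mirror the Chad example and prove nothing by themselves) simply swaps the group term and invites the reader to ask whether the inference pattern survives each swap:

```python
# The inference pattern Chad uses, with the group left as a parameter:
# from "some members committed violent crimes" (plus fear) to
# "many members are violent criminals".
def fearful_inference(group: str) -> str:
    return (f"Some {group} have committed violent crimes in America; "
            f"therefore, many {group} are violent criminals.")

# The swap: run the same pattern on groups Chad belongs to or likes. If he
# rejects the conclusion for those groups, consistency demands he reject it
# for the feared group too, since the first premise is true in every case.
for group in ("undocumented migrants", "white men", "gun owners"):
    print(fearful_inference(group))
```

If the conclusion sounds absurd for “white men” but compelling for “undocumented migrants,” the work is being done by fear rather than by the premise.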

As noted in the previous essay, perhaps conservatives have good reasons to not want to be professors or professors have good reasons not to be conservatives. In this essay, I will offer some possible DEI solutions to the dearth of conservatives in higher education.

If highly educated conservatives find academics unattractive because of the lower salaries, then there are two ways to motivate them into becoming professors. One is to argue that capable conservatives should “take one for the team” and become professors. While this would be a financial loss for conservative professors, their sacrifices would benefit the community of conservatives. The challenge is persuading those who see self-interest as a core value to act in a way seemingly contrary to their self-interest.

Another approach, which would probably be more appealing, is for conservatives to offer financial support and rewards for conservatives who become and remain professors. This is already done in some cases, but expanding the support and rewards would help increase the number of conservative professors. One challenge is to ensure that the support and rewards go to actual conservatives. They would need to police ideological purity to keep out clever liberals (or even secret Marxists) who might exploit these opportunities for their own profit. And we would certainly not want anyone profiting from pretending to believe something.

A possible downside to this approach is that these recruited professors could be accused of bias because they are being paid to be conservative professors. I will leave a solution to this problem to any conservatives who might be troubled by it.

A practical worry about supporting conservative students so that they become conservative professors is that their experiences in graduate school and as faculty might turn them away from conservatism. For example, they might start taking rhetorical attacks on experts and science personally as they become experts and scientists. As another example, they might find the hostility of Republicans to higher education a problem as they try to work in a field being attacked so vehemently by their fellows. But what about getting professors to want to be conservative? How could this be done?

One option for conservatives is to change their anti-expert and anti-science rhetoric. Rather than engaging in broad attacks on experts or science, they could confine their attacks to specific targets. Those not being directly attacked might find conservatism more appealing. The Republican party could also exchange its hostile attitude towards higher education for a more positive approach. It could, for example, return to providing solid funding for research and education. If professors believed that Republicans would act in their interest and in the interest of their students, they would be more inclined to support them. Conservative faculty would probably also be more likely to stay conservative.

Taking such steps would, however, be a problem for the Republican party. After all, the anti-science stance towards climate change and their broad anti-expert stance have yielded great political success. Changing these views would come at a price. Providing support for public higher education would also put Republicans at odds with their views about what should be cut while giving tax breaks for the rich. It would also go against their strategy of monetizing higher education. As such, Republicans would need to weigh the cost of winning over professors against the advantages they gain by the policies that alienate professors.

Oddly enough, some people claim that it is the Democrats and liberals who are more anti-science and anti-intellectual than the Republicans. If this were true, then the Republicans are doing a terrible job of convincing scientists and intellectuals to support them. If they could convince professors that they are the real supporters of the sciences and the Democrats are the real threat, then they should be able to win converts in the academy. The challenge is, of course, proving this claim and getting professors to accept this proof. But this seems unlikely, given that the claim that Republicans are pro-science is absurd on the face of it.

While the culture warriors claim Marxism dominates higher education, a more realistic concern is that higher education is dominated by liberals (or at least Democrats). Conservatives (or at least Republicans) are an underrepresented minority among faculty. This disparity invites inquiry. One reason to investigate, at least for liberals, would be to check for injustice or oppression causing this disparity. Another motivation is intellectual curiosity.

While sorting out this diversity problem might prove daunting, a foundation of theory and methodology has been laid by those studying the domination of higher education by straight, white males. That is, professors like me. These tools should prove both useful and ironic for looking into the question of why conservatives are not adequately represented in the academy. But before delving into theories of oppression and unfair exclusion, I must consider the possibility that the shortage of conservatives in the ivory towers is a matter of choice. This consideration mirrors a standard explanation for the apparent exclusion of women and minorities from other areas.

One possible explanation is that conservatives have chosen to not become professors. While not always the case, well-educated conservatives tend to be more interested in higher income careers in the private sector. While the pay for full-time faculty is not bad, the pay for adjuncts is terrible. Professor salaries, with some notable exceptions, tend to be lower than non-academic jobs with comparable educational requirements. So, someone interested in maximizing income would not become a professor. Education and effort would yield far more financial reward elsewhere, such as in the medical or financial fields. As such, conservatives are more likely to become bankers rather than philosophers and accountants rather than anthropologists.

A second possible explanation is that people who tend to become professors do not want to be conservatives (or at least Republicans). That is, the qualities that lead a person into a professorial career would tend to lead them away from conservative ideology. While there have been brilliant conservative intellectuals, the Republican party has consistently adopted a strong anti-expert, anti-intellectual stance. This might be due to an anti-intellectual ideology, or because the facts fail to match Republican ideology—such as with climate change. Republicans have also become more hostile to higher education. In contrast, Democrats tend to support higher education.

As becoming a professor generally requires a terminal degree, a professor will spend at least six years in college and graduate school, probably seeing the hostility of Republicans against education and the limited support offered by Democrats. Rational self-interest alone would tend to push professors towards being Democrats, since the Democrats are more likely to support higher education. Those who want to become professors, almost by definition, tend to be intellectual and want to become experts. So, the conservative attacks on experts and intellectuals will tend to drive them away from the Republican party and conservative ideology. Those pursuing careers in the sciences would presumably also find the anti-science stances of the Republicans and conservative ideology unappealing.

While my own case is just an anecdote, one reason I vote for Democrats is that Democrats are more likely to do things that are in my interest as a professor and in the interest of my students. In contrast, Republicans tend to make my professional life worse by lowering support for education and engaging in micromanagement and ideological impositions. They also make life more difficult for my students. The anti-intellectualism, rejection of truth, and anti-science stances also make the Republican party unappealing to me. As such, it is not surprising that the academy is dominated by liberals: Republicans would usually not want to be professors, and potential professors would tend to not want to be Republicans.

But perhaps there is a social injustice occurring and the lack of diversity is due to the unjust exclusion of conservatives from the academy. It is this concern that I will address in a future essay. We might need some diversity, equity and inclusion to get conservatives into the academy.

Now that the ethics of methods and sources has been addressed, I turn to the content of opposition research. The objective is to provide some general guidance about what sort of content is morally acceptable to research and use against political opponents.

Since the objective of opposition research is to find damaging or discrediting information about the target, the desired content will always be negative (or perceived as negative). While there is the view that if one has nothing nice to say about someone else, then one should say nothing, the negative nature of the content does not automatically make such research unethical. To support this, consider the obvious analogy to reporters: the fact that they are on the lookout for negative information does not make them unethical. Finding negative things and reporting on them are legitimate parts of their jobs. Likewise for opposition researchers. As such, concerns about the ethics of the content must involve considerations other than the negative nature of the desired content.

One obvious requirement for ethical content is that the information must be true. This raises an obvious epistemic problem: how can the researchers know it is true? Laying aside the epistemic problems of skepticism, this is a practical question about evidence, reasoning and credibility that goes beyond the scope of this essay. However, a general ethical guide can be provided. At a minimum, a claim should only be used if it is more likely to be true than false. Both ethics and critical thinking also require that the evidence for a claim be proportional to the strength of the claim; strong claims require strong support. Ethics also requires considering the harm that could be done by using the claim: the greater the harm, the stronger the evidence for that claim needs to be. This moral guide is at odds with the goal of the research, since the more damaging the claim, the better it is as a political weapon. But ethics requires balancing the political value of the weaponized information against the harm that could be done to an innocent person. This is not to say that damaging information should not be used, but that due caution is required.

This approach is analogous to guides on using force. Justifying the use of lethal force against a person requires good reasons to believe that person is a threat and that the use of force is justified. To the degree that there are doubts, the justification is reduced. Likewise, damaging information should be used with caution so that an innocent person is not unjustly harmed. For example, if someone is accused of having committed sexual assault, then there would need to be strong evidence supporting such a claim. Although in the current political climate, such an accusation seems more of a plus than a disqualification.

There is debate about when the use of force is justified, and the perception of the person using the force (such as how scared or threatened they claimed to be) is often considered. The same applies to the use of damaging information, so there will be considerable disagreement (probably along ideological lines) about whether using it is justified. And there will be debates about how people see its plausibility. Despite these issues, the general guide remains: the evidence needs to be adequate to justify the belief the claim is true. The use of information that does not meet even the minimal standard (more likely to be true than not) would be unethical. In other cases, there can be good faith debate about whether a claim is adequately supported or not. In addition to the concern about the truth of the information, there is also the concern about the relevance of the information.

The general principle of relevance is obvious: the content must be relevant to the issue. In the abstract, relevance is easy to define: information is relevant if it bears on the person’s suitability for the position.  For example, if the opposition research is against someone running for senate, then the content must be relevant to the person’s ability to do the job of a senator properly and effectively. What should be considered relevant will vary from situation to situation.

One problem is that people have different notions of relevance. For example, some might consider the high school and college behavior of a candidate for the Supreme Court to be relevant information while others disagree. As another example, some might consider a candidate’s sexual activity relevant while others might see consensual sex of any kind between adults as irrelevant. And, as the current political climate shows, being credibly accused of sexual assault or embracing long discredited claims about the cause of autism might be seen as positive rather than disqualifying.

One way to solve this problem is to use this principle: whatever would influence voters (if true) is acceptable to use. While this seems to be entailed by the citizen’s right to know, it provides a very broad principle. In fact, it might be so broad as to be useless as a guide. After all, voters can be influenced by almost any fact about a person even when it would seem to have no relevance to the office/position/etc. in question.

That said, there is also the problem that many offices and positions have little in the way of requirements. For example, the office of President has only the age and nationality requirements. Because of this, using the requirements of the position to set the limits of information would be too narrow. What is needed is a guide that is not too narrow and not too broad.

One option would be to go with the established norms for the position. For example, while the requirements to be President are minimal, there are (or used to be) expectations about what a person should be like to be fit for the office, such as basic competence, respect for the rule of law, and not being a convicted felon.

The problem with using the norms is that this seems to embrace relativism and allows for a potentially unchecked race to the bottom as norms are broken and replaced. As such, there should be some restrictions on what is ethical content that goes beyond the norms of the day. Developing a full moral guide goes beyond the scope of this essay, but a general guide can be offered. The guiding principle is that the content should be relevant to the position, while also considering what would reasonably be relevant to the voters. But norms, like laws, only hold when people are willing to follow or enforce them.

As with any research, opposition research relies on sources. If the goal is to gather true and relevant information, then the credibility of sources matters. There are the usual logical standards for assessing the credibility of sources. In such cases, the argument from authority provides a good guide. After all, to accept a claim from a source as true because of the source is to engage in the argument from authority.  This argument has the following form:

 

Premise 1: A makes claim C about Subject S.

Premise 2: A is an authority on subject S.

Conclusion: C is true.

 

The argument can also be recast as an argument from credibility, if one prefers that to authority.

 

Premise 1: A makes claim C about Subject S.

Premise 2: A is a credible source on subject S.

Conclusion: C is true.

 

Assessing this reasoning involves assessing the credibility of the source. One factor is bias: the more biased the source, the less credible. Other factors include  having the expertise to make claims on the subject, whether the source is identified or not (anonymous sources cannot be properly assessed for credibility), and whether credible sources also agree that the claim is true. It must be noted that a lack of credibility does not prove a claim is false. Rather, a lack of credibility means that there is no reason to accept the claim based on that source. Because people tend to weigh bias very heavily, it is important to remember that biased sources can still make true claims. Proving bias lowers credibility but does not disprove the claim.

If the goal of opposition research is to get true and relevant information, then only credible sources should be used. While there is the question of how credible a source should be, a minimal standard is that the source is more likely to be truthful than to lie. And, to follow the advice of John Locke, the evidence must be proportional to the strength of the claim. So, for example, the claim that a candidate’s father was involved in the Kennedy assassination would require considerable support. If the goal is simply to win by any means necessary, then moral concerns are irrelevant; what would matter would be pragmatic concerns about the effectiveness of the information. If the credibility of the source matters to the public (which, as Trump has shown, is often not the case), then credible sources should be used. If the target audience does not care about credibility, then it would not matter, and opposition “researchers” could save time by just making things up. One could also advance the usual sort of utilitarian argument that the end of defeating a bad opponent justifies the means, though this would also require considering the harm caused by setting aside concerns about credibility.

In addition to the credibility concerns about sources, there are also moral concerns, especially about which sources are ethical to use. As was the case with methods, the use of publicly accessible sources usually raises no special moral problems. After all, such information is already available, and the opposition research merely collects it so it can be used against the opposition. As with the ethics of methods, the law can provide a useful starting point for ethical considerations about sources. It can also be argued that the use of illegal sources would be unfair to the opposition if they are staying within the law.  Naturally, it should be kept in mind that the law is distinct from morality, so that the legal is not always ethical and the illegal is not always unethical.

One example that helped bring opposition research into the public eye was the Russian efforts to get information to the Trump campaign in 2016. While Trump claims that they had nothing of value, it is illegal for a candidate to receive anything of value from a foreign source.  In addition to the illegality of accepting foreign assistance in a political campaign, there is also the moral argument that outsiders should not be allowed to interfere in our elections, even if they have true and relevant information. After all, the election is the business of the citizens and foreign involvement subverts democracy. But this could be countered by arguing that any true and relevant information should be available to the voters, no matter its origin.

As another example, someone who violates a non-disclosure agreement to provide information would also be an illegal source. From a moral standpoint, the person who signed the NDA would break their agreement and thus act unethically. Naturally, if the NDA was imposed unjustly then breaking it could be morally acceptable. However, using sources that have freely agreed to remain silent would seem to be wrong. But there is the obvious problem that NDAs can be used to hide awful things that would change the minds of voters and hence they have the right to know.

As was the case with methods, one could advance the argument that winning is all that matters, or a utilitarian argument could be used to justify using morally dubious sources. For example, a utilitarian argument could be made for getting a source to break an NDA that forbids them from talking about the settlement they got from being sexually harassed by a senator. After all, this information would be relevant to deciding whether to vote for the senator.

More broadly, it could be argued that the source should not matter if the information is true and relevant. After all, the right of citizens to know true and relevant information could be taken to override ethical concerns about sources. This is something that likely requires assessment on a case-by-case basis. To illustrate, consider the question of whether political campaigns should accept true and relevant information from foreign powers. On the one hand, there is the argument that the information could help prevent harm by reducing the chance that a bad person would be elected or appointed. However, accepting such aid from foreign powers is to invite the subversion of the election process and could create more harm than what is intended. As such, foreign sources of this type would be unethical to use. In the next and final essay, I will consider the ethics of the content of opposition research, which focuses on the matter of relevance.