The question “why lie if the truth would suffice” can be interpreted in at least three ways. One is as an inquiry about the motivation and asks for an explanation. A second is as an inquiry about weighing the advantages and disadvantages of lying. The third way is as a rhetorical question that states, under the guise of inquiry, that one should not lie if the truth would suffice.

Since a general discussion of this question would be rather abstract, I will focus on a specific example and use it as the basis for the discussion. Readers should, of course, construct their own examples using their favorite lie from those they disagree with. I will use Trump’s response to the Democrats’ Green New Deal as my example. While this is something of a flashback to his first term, Trump recently signed an executive order targeting the old Green New Deal.

In 2019 the Democrats proposed a Green New Deal aimed at addressing climate change and economic issues. As with any proposal, rational criticisms can be raised against it. In his first term, Trump claimed the Democrats intend “to permanently eliminate all Planes, Cars, Cows, Oil, Gas & the Military – even if no other country would do the same.”  While there are some Democrats who would do these things, the Democratic Party favors none of that. Looked at rationally, it seems to make no sense to lie about the Green New Deal. If it is bad enough to reject on its own defects, lies would not be needed. If one must lie to attack it, this suggests a lack of arguments against it. To use an analogy, if a prosecutor lies to convict a person, this suggests they have no case—otherwise they would rely on evidence. So, why would Trump lie if the truth would suffice to show the Green New Deal is a terrible plan?

The question of why Trump (or anyone else) lies when the truth would suffice is a matter for psychology, not philosophy. So, I will leave that question to others. This leaves me with the question about the advantages and disadvantages of lying along with the rhetorical question.

The lie about the Green New Deal is a good example of hyperbole and a straw man. Trump himself claims to use the tactic of “truthful hyperbole”. Hyperbole is a rhetorical device in which one makes use of extravagant overstatement, such as claiming that the Democrats plan to eliminate cows. The reason hyperbole is not just called lying is that it is a specific type of untruth: it must have a foundation in truth. Hyperbole involves inflating or exaggerating something true rather than inventing a complete fiction. The Green New Deal is aimed at making America carbon neutral, and this would impact cars, cows, planes, oil, gas and the military. The extravagant exaggeration is that the proposal would eliminate all of them permanently. This would be as if someone who proposed cutting back on dessert at family dinners were accused of wanting to eliminate meals permanently. Since hyperbole is rhetoric without logic, it has no logical force and does not prove (or disprove) anything. But it can have considerable psychological force in influencing people to believe a claim.

Hyperbole is often used in conjunction with the Straw Man fallacy. This fallacy is committed when a person’s actual position is ignored and a distorted, exaggerated or misrepresented version of that position is criticized in its place. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A has position X.

Premise 2: Person B presents position Y (a distorted version of X).

Premise 3: Person B attacks position Y.

Conclusion: Therefore, X is false or bad.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a position is not a criticism of the actual position. One might as well expect an attack on a poor drawing of a person to hurt the person.

Like hyperbole, the Straw Man fallacy is not based on a simple lie: it involves an exaggeration or distortion of something true. In the case of Trump and the Green New Deal, his “reasoning” is that the Green New Deal should be rejected because his hyperbolic straw man version of it is terrible. Since this is a fallacy, his “reasons” do not support his claim. It is, as always, important to note that Trump could be right about the Green New Deal being a bad idea, but not for the “reasons” he gives. To infer that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy).

While hyperbole has no logical force and a straw man is a fallacy, there are advantages to using them. One advantage is that they are much easier than coming up with good reasons. Criticizing the Green New Deal for what it is requires knowing what it is and considering its possible defects, which takes time and effort. Tweeting out a straw man takes seconds.

The second advantage is that hyperbole and straw men often work, often much better than the truth. In the case of complex matters, people rarely do their homework and do not know that a straw man is a straw man. I have interacted with people who honestly think Democrats plan to eliminate planes and cars. Since this is a bad idea, they reject it, not realizing that it is not the Green New Deal. An obvious defense against hyperbole and straw men is to know the truth. While this can take time and effort, someone who has the time to post on Facebook or Twitter has the time to do basic fact checking. If not, their ignorance should counsel them to remain silent, though they have the right to express their unsupported views.

As far as working better than the truth, hyperbole and straw men appeal to the target’s fears, anger or hope. The target is thus motivated to believe in ways that the truth cannot match. People generally find rational argumentation dull and unmoving, especially about complicated issues. If Trump honestly presented real problems with the Green New Deal, complete with supporting data and graphs, he would bore most of his audience and lose them. By using a straw man, he better achieves his goal. This allows for a pragmatic argument for lying when the truth will not suffice.

If telling the truth would not suffice to convince people, then there is the pragmatic argument that if lying would do the job, then it should be used. For example, if going into an honest assessment of the Green New Deal would bore people and lying would get the job done, then Trump should lie if he wants to achieve his goal. This does, however, raise moral concerns.

If the reason the truth would not suffice is that it does not logically support the claim, then it would be immoral to lie. To use a non-political example, if you would not invest in my new fake company iScam if you knew it was a scam, getting you to invest in it by lying would be wrong. So, if the Green New Deal would not be refuted by the truth, Trump’s lies about it would be immoral.

But, what about cases in which the truth would logically support a claim, but would not persuade people to accept that claim? Going back to the Green New Deal example, suppose it is terrible, but explaining its defects would bore people and they would remain unpersuaded, while a straw man version of the Green New Deal would persuade many people to reject this hypothetically terrible plan. From a utilitarian standpoint, the lie could be morally justified; if the good of lying outweighed the harm, then it would be the right thing to do. To use an analogy, suppose you were trying to convince a friend not to start a dangerous diet. You have scientific data and good arguments, but you know your friend is bored by data and is largely immune to logic. So, telling them the truth would mean that they would go on the diet and harm themselves. But, if you exaggerate the harm dramatically, your friend will abandon the diet. In such a case, the straw man argument would seem to be morally justified, as you are using it to protect your friend.

While this might seem to justify the general use of hyperbole and the straw man, it only justifies their use when the truth does suffice logically but does not suffice in terms of persuasive power. That is, the fallacy is only justified as a persuasive device when there are non-fallacious arguments that would properly establish the same conclusion.

Reasoning is like a chainsaw: useful when used properly, but capable of creating a bloody mess when used badly. While this analogy can be applied broadly to logic, this essay focuses on the inductive generalization and how it can become a wayward chainsaw under the influence of fear. I’ll begin by looking at our good friend the inductive generalization.

Consisting of a premise and a conclusion, the inductive generalization is a simple argument:

 

Premise 1: P% of observed Xs are Ys.

Conclusion: P% of all Xs are Ys.

 

The quality of an inductive generalization depends on the quality of the first premise, which is usually called the sample. The larger and more representative the sample, the stronger the argument (the more likely it is that the conclusion will be true if the premise is true). There are two main ways in which an inductive generalization can be flawed. The first is when the sample is too small to adequately support the conclusion. For example, a person might have a run-in with a single bad driver from Ohio and conclude all Ohio drivers are terrible. This is known as the fallacy of hasty generalization.

The second is when there is a biased sample, one that does not represent the target population. For example, concluding that most people are Christians because everyone at a Christian church is a Christian would be a fallacy. This is known as the fallacy of biased generalization.

While these two fallacies are well known, it is worth considering them in the context of fear: the fearful generalization. On the one hand, it is not new: a fearful generalization is a hasty generalization or a biased generalization. On the other hand, the hallmark of a fearful generalization (that it is fueled by fear) makes it worth considering, especially since addressing the fueling fear seems to be key to disarming this sort of poor reasoning.

While a fearful generalization is not a new fallacy structurally, it is committed because of the psychological impact of fear. In the case of a hasty fearful generalization, the error is drawing an inference from a sample that is too small, due to fear. For example, a female college student who hears about incidents of sexual harassment on campuses might, from fear, infer that most male students are likely to harass her. As another example, a person who hears about an undocumented migrant who commits a murder might, from fear, infer that many undocumented migrants are murderers. Psychologically (rather than logically), fear fills out the sample, making it feel like the conclusion is true and adequately supported. However, this is an error in reasoning.

The biased fearful generalization occurs when the inference is based on a sample that is not representative, but this is overlooked due to fear. Psychologically, fear makes the sample feel representative enough to support the conclusion. For example, a person might look at arrest data about migrants and infer that most migrants are guilty of crimes. A strong generalization about what percentage of migrants commit crimes needs to include the entire population, not a sample consisting just of those arrested.

As another example, if someone terrified of guns looks at crime data about arrests involving firearms and infers that most gun owners are criminals, this would be a biased generalization. This is because those arrested for gun crimes do not represent the entire gun-owning population. A good generalization about what percentage of gun owners commit crimes needs to include the general population, not just those arrested.
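The difference between these flawed generalizations and a good one can be made concrete with a toy simulation (a sketch using made-up numbers, not real crime statistics). It estimates how common a trait is in a hypothetical population three ways: from a tiny sample (hasty), from a sample drawn only from those known to have the trait (biased), and from a large random sample (representative).

```python
import random

random.seed(42)  # make the sketch reproducible

# Hypothetical population of 100,000 people; 1% have the trait in question.
TRUE_RATE = 0.01
population = [random.random() < TRUE_RATE for _ in range(100_000)]

# Hasty generalization: a sample of five can wildly misestimate the true rate.
tiny_sample = random.sample(population, 5)
hasty_rate = sum(tiny_sample) / len(tiny_sample)

# Biased generalization: sampling only from those who have the trait
# (like looking only at arrest records) guarantees a rate of 100%,
# which says nothing about the whole population.
flagged = [person for person in population if person]
biased_rate = sum(flagged) / len(flagged)

# Strong generalization: a large random sample tracks the true rate closely.
big_sample = random.sample(population, 10_000)
fair_rate = sum(big_sample) / len(big_sample)

print(f"hasty: {hasty_rate:.2%}, biased: {biased_rate:.2%}, fair: {fair_rate:.2%}")
```

The biased estimate comes out to 100% no matter how rare the trait actually is, which is precisely the error of generalizing about migrants or gun owners from arrest data alone.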

When considering any fallacy, there are three things to keep in mind. First, not everything that looks like a fallacy is a fallacy. After all, a good generalization has the same structure as a hasty or biased generalization. Second, concluding that a fallacy must have a false conclusion is itself a fallacy (the fallacy fallacy). So, a biased or hasty generalization could have a true conclusion; the conclusion would just not be supported by the generalization. Third, a true conclusion does not mean that a fallacy is not a fallacy: the problem lies in the logic, not the truth of the conclusion. For example, if I see one red squirrel in a forest and infer all the squirrels there are red, then I have made a hasty generalization, even if I turn out to be right. The truth of the conclusion does not mean that I was reasoning well. It is like a lucky guess on a math problem: getting the right answer does not mean that I did the math properly. But how does one neutralize the fearful generalization?

On the face of it, a fearful generalization would seem to be easy to neutralize. Just present the argument and consider the size and representativeness of the sample in an objective manner. The problem is that a fearful generalization is motivated by fear, and fear impedes rationality and objectivity. Even if a fearful person tries to consider the matter, they might persist in their errors. To use an analogy, I have an irrational fear of flying. While I know that air travel is the safest form of travel, this has no effect on my fear. Likewise, someone who is afraid of migrants or men might be able to do the math yet persist in their fearful conclusion. As such, the best way of dealing with fearful generalizations would be the best way of dealing with fear in general, but this goes beyond the realm of critical thinking and into the realm of virtue.

One way to try to at least briefly defuse the impact of fear is to try the method of substitution. The idea is to replace the group one fears with a group that one belongs to, likes, or at least does not fear. This works best when the first premise remains true when the swap is made; otherwise the person can obviously reject the swap. This might have some small impact on the emotional level that will help a person work through the fear—assuming they want to. I will illustrate the process using Chad, a hypothetical Christian white male gun owner who is fearful of undocumented migrants (or illegals, if you prefer).

Imagine that Chad reasons like this:

 

Premise 1: Some migrants have committed violent crimes in America.

“Premise” 2: I (Chad) am afraid of migrants.

Conclusion: Many migrants are violent criminals.

 

As “critical thinking therapy” Chad could try swapping in one of his groups and see if his logic still holds.

 

Premise 1: Some white men have committed violent crimes in America.

“Premise” 2: I (Chad) am a white man.

Conclusion: Many white men are violent criminals.

 

Chad would agree that each argument starts with a true first premise, but Chad would presumably reject the conclusion of the second argument. If pressed on why this is the case, Chad would presumably point out that the statistical data does not support the conclusion. At this point, a rational Chad would realize that the same applies to the first argument as well. If this does not work, one could keep swapping in groups that Chad belongs to or likes until Chad is able to see the bias caused by his fear or one gets exhausted by Chad.

This method is not guaranteed to work (it probably will not), but it does provide a useful method for those who want to check their fears. Self-application involves the same basic process: swapping in your groups or groups you like in place of what you fear to see if your reasoning is good or bad.

As noted in the previous essay, perhaps conservatives have good reasons to not want to be professors or professors have good reasons not to be conservatives. In this essay, I will offer some possible DEI solutions to the dearth of conservatives in higher education.

If highly educated conservatives find academia unattractive because of the lower salaries, then there are two ways to motivate them to become professors. One is to argue that capable conservatives should “take one for the team” and become professors. While this would be a financial loss for conservative professors, their sacrifices would benefit the community of conservatives. The challenge is persuading those who see self-interest as a core value to act in a way seemingly contrary to their self-interest.

Another approach, which would probably be more appealing, is for conservatives to offer financial support and rewards for conservatives who become and remain professors. This is already done in some cases, but expanding the support and rewards would help increase the number of conservative professors. One challenge is to ensure that the support and rewards go to actual conservatives. They would need to police ideological purity to keep out clever liberals (or even secret Marxists) who might exploit these opportunities for their own profit. And we would certainly not want anyone profiting from pretending to believe something.

A possible downside to this approach is that these recruited professors could be accused of bias because they are being paid to be conservative professors. I will leave a solution to this problem to any conservatives who might be troubled by it.

A practical worry about supporting conservative students so that they become conservative professors is that their experiences in graduate school and as faculty might turn them away from conservatism. For example, they might start taking rhetorical attacks on experts and science personally as they become experts and scientists. As another example, they might find the hostility of Republicans to higher education a problem as they try to work in a field being attacked so vehemently by their fellows. But what about getting professors to want to be conservative? How could this be done?

One option for conservatives is to change their anti-expert and anti-science rhetoric. Rather than engaging in broad attacks on experts or science, they could confine their attacks to specific targets. Those not being directly attacked might find conservatism more appealing. The Republican party could also change its hostile attitude towards higher education towards a more positive approach. They could, for example, return to providing solid funding for research and education. If professors believed that Republicans would act in their interest and in the interest of their students, they would be more inclined to support them. Conservative faculty would probably also be more likely to stay conservative.

Taking such steps would, however, be a problem for the Republican party. After all, the anti-science stance towards climate change and their broad anti-expert stance have yielded great political success. Changing these views would come at a price. Providing support for public higher education would also put Republicans at odds with their views about what should be cut while giving tax breaks for the rich. It would also go against their strategy of monetizing higher education. As such, Republicans would need to weigh the cost of winning over professors against the advantages they gain by the policies that alienate professors.

Oddly enough, some people claim that it is the Democrats and liberals who are more anti-science and anti-intellectual than the Republicans. If this were true, then the Republicans are doing a terrible job of convincing scientists and intellectuals to support them. If they could convince professors that they are the real supporters of the sciences and the Democrats are the real threat, then they should be able to win converts in the academy. The challenge is, of course, proving this claim and getting professors to accept this proof. But this seems unlikely, given that the claim that Republicans are pro-science is absurd on the face of it.

While the culture warriors claim Marxism dominates higher education, a more realistic concern is that higher education is dominated by liberals (or at least Democrats). Conservatives (or at least Republicans) are an underrepresented minority among faculty. This disparity invites inquiry. One reason to investigate, at least for liberals, would be to check for injustice or oppression causing this disparity. Another motivation is intellectual curiosity.

While sorting out this diversity problem might prove daunting, a foundation of theory and methodology has been laid by those studying the domination of higher education by straight, white males. That is, professors like me. These tools should be useful (and ironic) for looking into the question of why conservatives are not adequately represented in the academy. But before delving into theories of oppression and unfair exclusion, I must consider the possibility that the shortage of conservatives in the ivory towers is a matter of choice. This consideration mirrors a standard explanation for the apparent exclusion of women and minorities from other areas.

One possible explanation is that conservatives have chosen not to become professors. While not always the case, well-educated conservatives tend to be more interested in higher-income careers in the private sector. While the pay for full-time faculty is not bad, the pay for adjuncts is terrible. Professor salaries, with some notable exceptions, tend to be lower than those of non-academic jobs with comparable educational requirements. So, someone interested in maximizing income would not become a professor. Education and effort would yield far more financial reward elsewhere, such as in the medical or financial fields. As such, conservatives are more likely to become bankers rather than philosophers and accountants rather than anthropologists.

A second possible explanation is that people who tend to become professors do not want to be conservatives (or at least Republicans). That is, the qualities that lead a person into a professorial career would tend to lead them away from conservative ideology. While there have been brilliant conservative intellectuals, the Republican party has consistently adopted a strong anti-expert, anti-intellectual stance. This might be due to an anti-intellectual ideology, or because the facts fail to match Republican ideology—such as with climate change. Republicans have also become more hostile to higher education. In contrast, Democrats tend to support higher education.

As becoming a professor generally requires a terminal degree, a professor will spend at least six years in college and graduate school, probably seeing the hostility of Republicans against education and the limited support offered by Democrats. Rational self-interest alone would tend to push professors towards being Democrats, since the Democrats are more likely to support higher education. Those who want to become professors, almost by definition, tend to be intellectual and want to become experts. So, the conservative attacks on experts and intellectuals will tend to drive them away from the Republican party and conservative ideology. Those pursuing careers in the sciences would presumably also find the anti-science stances of the Republicans and conservative ideology unappealing.

While my own case is just an anecdote, one reason I vote for Democrats is that Democrats are more likely to do things that are in my interest as a professor and in the interest of my students. In contrast, Republicans tend to make my professional life worse by lowering support for education and engaging in micromanagement and ideological impositions. They also make life more difficult for my students. The anti-intellectualism, rejection of truth, and anti-science stances also make the Republican party unappealing to me. As such, it is not surprising that the academy is dominated by liberals: Republicans would usually not want to be professors, and potential professors would tend to not want to be Republicans.

But perhaps there is a social injustice occurring and the lack of diversity is due to the unjust exclusion of conservatives from the academy. It is this concern that I will address in a future essay. We might need some diversity, equity and inclusion to get conservatives into the academy.

Now that the ethics of methods and sources have been addressed, I turn to the content of opposition research. The objective is to provide some general guidance about what sort of content is morally acceptable to research and use against political opponents.

Since the objective of opposition research is to find damaging or discrediting information about the target, the desired content will always be negative (or perceived as negative). While there is the view that if one has nothing nice to say about someone else, then one should say nothing, the negative nature of the content does not automatically make such research unethical. To support this, consider the obvious analogy to reporters: the fact that they are on the lookout for negative information does not make them unethical. Finding negative things and reporting on them are legitimate parts of their jobs. Likewise for opposition researchers. As such, concerns about the ethics of the content must involve considerations other than the negative nature of the desired content.

One obvious requirement for ethical content is that the information must be true. This does raise an obvious epistemic problem: how can the researchers know it is true? Laying aside the epistemic problems of skepticism, this is a practical question about evidence, reasoning and credibility that goes beyond the scope of this essay. However, a general ethical guide can be provided. At a minimum, the claim should only be used if it is more likely to be true than false. Both ethics and critical thinking also require that the evidence for a claim be proportional to the strength of the claim. As such, strong claims require strong support. Ethics also requires considering the harm that could be done by using the claim: the greater the harm, the greater the evidence for that claim needs to be. This moral guide is at odds with the goal of the research, since the more damaging the claim, the better it is as a political weapon. But ethics requires balancing the political value of the weaponized information against the harm that could be done to an innocent person. This is not to say that damaging information should not be used, but that due caution is required.
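The principle that stronger claims require stronger evidence can be given a quick numerical sketch using Bayes’ rule (the numbers here are invented for illustration; the essay itself does not commit to any formal framework). The same piece of evidence that makes a mundane claim more likely true than false can leave an implausible claim far below that minimal bar.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: probability the claim is true given the evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# A mundane claim (prior 0.5) plus moderately reliable evidence clears
# the "more likely to be true than false" threshold.
mundane = posterior(0.5, 0.8, 0.2)

# An implausible claim (prior 0.01) backed by the very same evidence does not;
# far stronger evidence would be needed before it could ethically be used.
implausible = posterior(0.01, 0.8, 0.2)

print(mundane, implausible)
```

On this sketch, a surprising and damaging claim supported only by modest evidence fails the minimal standard, which is one way to see why the more harmful the claim, the stronger the evidence for it must be.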

This approach is analogous to guides on using force. Justifying the use of lethal force against a person requires good reasons to believe that the person is a threat and that force is necessary. To the degree that there are doubts, the justification is reduced. Likewise, damaging information should be used with caution so that an innocent person is not unjustly harmed. For example, if someone is accused of having committed sexual assault, then there would need to be strong evidence supporting such a claim. Although in the current political climate, such an accusation seems more of a plus than a disqualification.

There is debate about when the use of force is justified, and the perception of the person using the force (such as how scared or threatened they claimed to be) is often considered. The same applies to the use of damaging information, so there will be considerable disagreement (probably along ideological lines) about whether using it is justified. And there will be debates about how people see its plausibility. Despite these issues, the general guide remains: the evidence needs to be adequate to justify the belief the claim is true. The use of information that does not meet even the minimal standard (more likely to be true than not) would be unethical. In other cases, there can be good faith debate about whether a claim is adequately supported or not. In addition to the concern about the truth of the information, there is also the concern about the relevance of the information.

The general principle of relevance is obvious: the content must be relevant to the issue. In the abstract, relevance is easy to define: information is relevant if it bears on the person’s suitability for the position. For example, if the opposition research is against someone running for senate, then the content must be relevant to the person’s ability to do the job of a senator properly and effectively. What should be considered relevant will vary from situation to situation.

One problem is that people have different notions of relevance. For example, some might consider the high school and college behavior of a candidate for the Supreme Court to be relevant information while others disagree. As another example, some might consider a candidate’s sexual activity relevant while others might see consensual sex of any kind between adults as irrelevant. And, as the current political climate shows, being credibly accused of sexual assault or embracing long discredited claims about the cause of autism might be seen as positive rather than disqualifying.

One way to solve this problem is to use this principle: whatever would influence voters (if true) is acceptable to use. While this seems to be entailed by the citizen’s right to know, it provides a very broad principle. In fact, it might be so broad as to be useless as a guide. After all, voters can be influenced by almost any fact about a person even when it would seem to have no relevance to the office/position/etc. in question.

That said, there is also the problem that many offices and positions have little in the way of requirements. For example, the office of President has only the age and nationality requirements. Because of this, using the requirements of the position to set the limits of information would be too narrow. What is needed is a guide that is not too narrow and not too broad.

One option would be to go with the established norms for the position. For example, while the requirements to be President are minimal, there are (or at least used to be) expectations about what the person should be like to be fit for office, such as basic competence, respect for the rule of law, and not being a convicted felon.

The problem with using the norms is that this seems to embrace relativism and allows for a potentially unchecked race to the bottom as norms are broken and replaced. As such, there should be some restrictions on what is ethical content that goes beyond the norms of the day. Developing a full moral guide goes beyond the scope of this essay, but a general guide can be offered. The guiding principle is that the content should be relevant to the position, while also considering what would reasonably be relevant to the voters. But norms, like laws, only hold when people are willing to follow or enforce them.

As with any research, opposition research relies on sources. If the goal is to gather true and relevant information, then the credibility of sources matters. There are the usual logical standards for assessing the credibility of sources. In such cases, the argument from authority provides a good guide. After all, to accept a claim from a source as true because of the source is to engage in the argument from authority. This argument has the following form:

 

Premise 1: A makes claim C about Subject S.

Premise 2: A is an authority on subject S.

Conclusion: C is true.

 

The argument can also be recast as an argument from credibility, if one prefers that to authority.

 

Premise 1: A makes claim C about Subject S.

Premise 2: A is a credible source on subject S.

Conclusion: C is true.

 

Assessing this reasoning involves assessing the credibility of the source. One factor is bias: the more biased the source, the less credible it is. Other factors include having the expertise to make claims on the subject, whether the source is identified (anonymous sources cannot be properly assessed for credibility), and whether other credible sources agree that the claim is true. It must be noted that a lack of credibility does not prove a claim is false. Rather, a lack of credibility means that there is no reason to accept the claim based on that source. Because people tend to weigh bias very heavily, it is important to remember that biased sources can still make true claims. Proving bias lowers credibility but does not disprove the claim.

If the goal of opposition research is to get true and relevant information, then only credible sources should be used. While there is the question of how credible a source should be, a minimal standard should be that the source is more likely to be truthful than to lie. And, to follow the advice of John Locke, the evidence must be proportional to the strength of the claim. So, for example, the claim that a candidate’s father was involved in the Kennedy assassination would require considerable support. If the goal is simply to win by any means necessary, then moral concerns are irrelevant. What would matter would be pragmatic concerns about the effectiveness of the information. If the credibility of the source matters to the public (which, as Trump has shown, is often not the case), then credible sources should be used. If the target audience does not care about credibility, then it would not matter, and opposition “researchers” could save time by just making things up. One could also advance the usual sort of utilitarian argument that the end of defeating a bad opponent would justify the means, though this would also require considering the harm caused by setting aside concerns about credibility.

In addition to the credibility concerns about sources, there are also moral concerns, especially about which sources are ethical to use. As was the case with methods, the use of publicly accessible sources usually raises no special moral problems. After all, such information is already available, and the opposition research merely collects it so it can be used against the opposition. As with the ethics of methods, the law can provide a useful starting point for ethical considerations about sources. It can also be argued that the use of illegal sources would be unfair to the opposition if they are staying within the law.  Naturally, it should be kept in mind that the law is distinct from morality, so that the legal is not always ethical and the illegal is not always unethical.

One example that helped bring opposition research into the public eye was the Russian efforts to get information to the Trump campaign in 2016. While Trump claims that they had nothing of value, it is illegal for a candidate to receive anything of value from a foreign source.  In addition to the illegality of accepting foreign assistance in a political campaign, there is also the moral argument that outsiders should not be allowed to interfere in our elections, even if they have true and relevant information. After all, the election is the business of the citizens and foreign involvement subverts democracy. But this could be countered by arguing that any true and relevant information should be available to the voters, no matter its origin.

As another example, someone who violates a non-disclosure agreement to provide information would also be an illegal source. From a moral standpoint, the person who signed the NDA would break their agreement and thus act unethically. Naturally, if the NDA was imposed unjustly then breaking it could be morally acceptable. However, using sources that have freely agreed to remain silent would seem to be wrong. But there is the obvious problem that NDAs can be used to hide awful things that would change the minds of voters and hence they have the right to know.

As was the case with methods, one could advance the argument that winning is all that matters, or a utilitarian argument could be used to justify using morally dubious sources. For example, a utilitarian argument could be made for getting a source to break an NDA that forbids them from talking about the settlement they got from being sexually harassed by a senator. After all, this information would be relevant to deciding whether to vote for the senator.

More broadly, it could be argued that the source should not matter if the information is true and relevant. After all, the right of citizens to know true and relevant information could be taken to override ethical concerns about sources. This is something that likely requires assessment on a case-by-case basis. To illustrate, consider the question of whether political campaigns should accept true and relevant information from foreign powers. On the one hand, there is the argument that the information could help prevent harm by reducing the chance that a bad person would be elected or appointed. However, accepting such aid from foreign powers is to invite the subversion of the election process and could create more harm than it prevents. As such, foreign sources of this type would be unethical to use. In the next and final essay, I will consider the ethics of the content of opposition research, which focuses on the matter of relevance.

To start with the obvious, ethical methods are acceptable, while unethical methods are not. The challenge is developing principles to distinguish between the two. As there are too many possible methods to address, I will focus on commonly used methods.

One ethical method is gathering information using publicly available means. One obvious example is acquiring publicly available voting records for politicians. Other examples include gathering information through requests for public documents and searching through public sites such as YouTube, Facebook and Twitter. As a final example, interviewing people who agree to be interviewed would, in general, be ethical. While there is an overlap between methods and sources, there are important distinctions that require considering them separately. Methods are how you get information; sources are where it comes from. While methods are used on sources, there can be a difference in their ethics. An ethical method might be used to get information from a morally problematic source, or an unethical method might be used to gather information from a morally unproblematic source.

Research methods become potentially more morally controversial the further one strays from publicly available methods. To use an analogy, looking at what a person is doing in public is (generally) not morally problematic. Peeping into their house from the sidewalk raises some moral concerns and hiding cameras in their bedroom is clearly wrong. As the analogy suggests, the methods become more morally problematic when they involve breaching the wall of privacy. While it might be tempting to regard all such methods as immoral, it will be argued that this is not the case: there are morally acceptable methods that breach this wall. To use an analogy, reporters engaged in legitimate reporting can justly break the walls of privacy.

In some cases, the desired information is not accessible by publicly available means, but the methods are still morally acceptable. For example, it is acceptable to privately interview sources who willingly talk to the researchers but would be unwilling to be interviewed in public view on, for example, the news.

In other cases, the methods used to breach the wall of privacy would be morally unacceptable. Likely examples include hacking, bribery, theft, and using intimidation or deceit. While these examples provide some limited guidance, what is needed is a more general principle. It is natural enough to seek guidance from the law.

While legality is not the same as morality, the use of illegal methods such as hacking, theft, threats, and bribery is morally problematic. In many cases of illegal methods, such as theft and hacking, there are independent moral arguments that establish such actions as wrong (over and above their illegality). It is, however, possible for morally acceptable methods of information gathering to be against the law. For example, a repressive state (such as Florida) might pass laws to shield the activities of politicians from the public. As other examples, there are laws that hide the identity of campaign contributors or impose draconian non-disclosure agreements. As such, the law is not a perfect guide to morality but does provide a useful starting point.

Because most information is now digital, one method of concern is hacking (broadly construed). For the sake of simplicity, this can also be taken to include such things as phishing and other methods that are not, strictly speaking, hacking. This method includes various means of gaining access to digital information without the permission of the owner.

The various methods that breach the wall of privacy could be morally justified on utilitarian grounds. To illustrate, if a candidate had child pornography on his laptop, then it could be argued that hacking into his laptop would be morally justified because doing so could help keep a pedophile out of power. But this could be countered by arguing that this should be left up to law enforcement. The counter to that counter is that law enforcement can be selective about which criminals they decide to pursue.

Another approach is arguing that the citizens’ right to know justifies the use of means that would otherwise seem unethical. To use an analogy, a person’s privacy rights do not (in general) permit them to hide their crimes from the police. There can, of course, be clear exceptions in cases involving tyrannical laws or oppressive policing. Likewise, a political candidate (broadly defined) does not have a right to privacy when it comes to their misdeeds that voters have a right to know. For example, it could be argued that opposition researchers would be acting ethically by stealing documents from a corrupt politician that prove their corruption.

The obvious counter to such reasoning is that opposition researchers are not law enforcement (or moral enforcement). They are members of the public and lack any special moral right to use such methods. If they suspect that something bad is occurring, they should refer the matter to the appropriate authorities. The danger of citizens taking such research into their own hands is illustrated by the case of a concerned citizen who decided to investigate rumors that Hillary Clinton and other Democrats were operating a slavery ring in the basement of a pizza shop. During this “investigation” no one was hurt, but the “investigator” fired shots. As such, if something is bad enough to seem to justify using morally problematic methods, then the matter should be referred to the police (assuming they are not corrupt) and, where appropriate, to the press. But, once again, there can be situations where the authorities are unwilling to do anything even when crimes have been committed.

In the next essay, I will look at the ethics of sources.

Opposition research is gathering information intended to damage or discredit political adversaries. While the intent to find damaging or discrediting information might seem morally problematic, it can be neutral or even laudable. If the intent is to damage adversaries for political advantage, then this is not laudable but could still be ethical. After all, good might come from using opposition research to harm a bad opponent.

The intent to provide citizens with relevant and true information so they can make informed decisions is morally laudable. This information allows for better decision making and can produce better results than making decisions with false or irrelevant information.

While motives are relevant to assessing ethics, the morality of the motives is distinct from the morality of the research and its results. This is because bad people with bad motives might do ethical research (for whatever reason) and end up doing good. For example, a selfish and corrupt politician might expose a worse villain. As would be expected, good people with good motives might engage in morally questionable research or end up causing harm, all from the best of intentions. For example, a researcher might use a questionable source and justify this by telling themselves that their good end justifies the means. As a final point about researchers, their ethics are irrelevant to the truth of the information they gather. To think otherwise would be to fall into an ad hominem or genetic fallacy. In general terms, this is when an irrelevant negative assertion about a source is taken as evidence against their claim(s). This is distinct from considering the ethics of the researchers when assessing their credibility. After all, bias reduces credibility and is relevant when assessing their likely honesty. Now to the ethics of research.

For this essay and those that follow in this series, it will be assumed that there are at least some moral limits to opposition research. Without this assumption, writing about the ethics of opposition research would be limited to “anything goes.” One could reject this assumption by employing the approach of sophists both ancient and modern. The ancient sophists argued in favor of skepticism, relativism and the view that all that matters is success (or winning, if one prefers). On this view, there would be no moral limits on opposition research for two reasons. One is that skepticism and relativism about ethics results in the rejection of the idea of objective ethics. The other is that if success is all that matters, then there are no limits on the means that can be used to achieve it. What matters to the sophist, in terms of opposition research, is acquiring (or fabricating) information that can damage a political adversary and thus increase the chances of success.

In terms of arguments in favor of there being moral limits, one excellent place to start is by considering the consequences of having limits versus not having them. As noted above, good political decisions, such as deciding how to vote, require that citizens have relevant, true information. Opposition research that provides or aims at providing relevant and true information would enable citizens to make better decisions and (probably) produce better results. In contrast, taking the view that all that matters is victory will tend to produce worse results for the general good. There can be exceptions: a well-informed public might make terrible choices, and an utterly selfish person solely focused on their gain might end up somehow doing good. As would be expected, the general debate over whether there should be ethical limits on anything goes far beyond the possible scope of this short essay.

In the essays that follow, I will also make a case for there being ethical limits on opposition research. The gist of this argument is that if the arguments in these essays are logically compelling, then that provides a reason to accept that there should be at least some limits on opposition research. The assessment of the ethics of the research involves considering three key factors: the methods used, the sources and the content. There will be an essay on each.

 

Shortly after the #metoo movement began gaining nationwide attention, a female student arrived at my office and started to close the door as she introduced herself. While admitting this is embarrassing, I felt a shiver of fear. In an instant, my mind went through a nightmare scenario: what if she is failing and is planning on using the threat of an accusation of sexual harassment to get a passing grade? Quieting this irrational worry, I casually said “Oh, you can leave the door open.” She sat down and we talked about her paper. In a bit of reflection, I realized that this was a reversal: it is usually the woman who feels the shiver of fear when a man is closing the office door.

To head off any criticisms about inconsistency, I’ve always had a literal open-door policy for all students. This originated in my grad school days when a female friend told me that when a male professor closes his office door on her, she feels trapped and vulnerable. As various cases indicate, her fear was not unfounded. Now that I have my own office, I always keep the door open. As such, it was ironic that I would be the one scared by the closing of my office door by a woman.

Like everyone else, I have fears. An important question about a fear is whether it is rational. To illustrate, I will use my fear of heights. Part of this fear is rational: I suffered a full quadriceps tear when a ladder went out from under me. So, being wary about ladders, roofs and the edges of tall things like mountains is sensible. However, my fear also extends to flying. This fear, I know, is irrational. While accidents do occur, being inside a commercial airliner is one of the safest places a normal person can be. I have never been in an airplane crash or mishap, so there is not even an instigating incident to explain this fear.

While I have been told and have told myself that flying is nothing to fear, this does not work. Statistics and proof do not change how I feel. I deal with it using Aristotle’s method: I make myself face my fear over and over until I can function normally—despite being terrified. Because of my fear of flying, I do not dismiss other people’s fears, even when they might seem unfounded or even silly. As such, when men claim to be terrified of false accusations of sexual assault, I do not dismiss this fear. This is, I am obligated to say, a fear I have felt.

As with any fear, an important question is whether the fear of a false accusation is rational. Is it like the sensible fear that leads me to be careful on roofs or is it like the irrational fear of flying that causes me needless discomfort? As with any fear, this cannot be judged by the strength of the feeling—this gives no indication of the likelihood of a bad thing happening. To illustrate, most people are not terrified of the health complications from a poor diet and lack of exercise but are afraid of shark attacks. But poor health habits are much more likely to kill a person than a shark attack. Sorting out the rationality of fear is a matter of statistics, although the specific context does matter. For example, if I jump into shark infested water while covered in blood, my odds of being attacked would be higher than usual. As another example, a person surrounded by women who are scheming, unethical liars would have greater odds of being falsely accused of assault.

While it is challenging to have accurate data about false accusations, the best available data shows that between 2% and 10% of accusations are shown to be false. The FBI claims that 8% of rape accusations are found to be false. In contrast, the number of unreported assaults (which, one must admit, is hard to quantify) is much higher than the number of false accusations. The best evidence suggests that only 35% of sexual assaults are reported. As such, an assault is unlikely to be reported and the odds of a false accusation are extremely low.
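To make the comparison behind these figures concrete, here is a minimal back-of-the-envelope calculation in Python. The pool of 1,000 accusations is a hypothetical illustration, not data; only the 2%, 10%, and 35% figures come from the discussion above.

```python
# A back-of-the-envelope sketch using the figures cited above.
# The pool of 1,000 accusations is hypothetical, for illustration only.

accusations = 1000                          # hypothetical pool of accusations
false_low = accusations * 0.02              # 2% found false  -> 20
false_high = accusations * 0.10             # 10% found false -> 100

# Take the low end of true accusations (i.e., the high end of false ones):
true_reports = accusations - false_high     # at least 900 true reports

# If only 35% of assaults are reported, those 900 true reports imply a
# much larger number of assaults that were never reported at all.
total_assaults = true_reports / 0.35        # roughly 2,571 assaults
unreported = total_assaults - true_reports  # roughly 1,671 unreported

print(f"False accusations per 1,000: {false_low:.0f} to {false_high:.0f}")
print(f"Unreported assaults implied by the true reports: {unreported:.0f}")
```

On these figures, even the high-end estimate of 100 false accusations per 1,000 is dwarfed by the roughly 1,671 unreported assaults implied by the same pool, which is the point about relative likelihood.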

But one might insist that false accusations do happen. This is true, but the data shows the typical false allegation is made by a teenage girl trying to get out of trouble. So, the notion that women use false accusations to destroy men is not well supported. This is not to say that this is impossible, just that it is extremely unlikely. Going back to my fear of flying, the fear is not irrational because a crash could never happen. Rather, it is irrational because the fear is disproportional to the likelihood of a crash. So, the terror we men feel about being falsely accused of sexual assault is like my fear of flying: it is not a fear of the impossible, but a fear of the extremely unlikely.

There are, however, people who do have a reasonable fear of being wrongfully accused and convicted. These are black people (and other minorities). Many of those who are vocal about their fear of men being falsely accused of sexual assault have little or no concern about the wrongful accusation and conviction of minorities and express faith in that aspect of the legal system. This is an inconsistent view: if false accusations leading to harm are awful and something to worry about, then the false accusations against minorities should be seen this way. One might suspect that the worry does not stem from a passion for justice, but fear of accountability.

The concept of tribalism is often used to explain American politics but is also wielded as a weapon. An expert might claim that tribalism is causing unwillingness to compromise, while a partisan might deride the tribalism of the other tribe. While this essay is not intended to explore the complexities of a rigorous definition of the concept, I will endeavor to discuss the matter in a neutral and rational way.

Tribalism is characterized by loyalty to the tribe. This differs from loyalty to principles or values. After all, a person who is loyal to a tribe because it is their tribe will remain loyal even when the values of the tribe change. In contrast, a person dedicated to values that a tribe just also happens to have at a certain time will leave that tribe if these values are abandoned. American tribalism involves value fluidity: as the tribe changes values, tribalists shift their values. For example, Republicans once endorsed free trade and opposed tariffs. They also professed to dislike deficits and spending. Trump, however, shifted these values and now the Republican tribe embraces tariffs, deficits and big government spending. Such is the power of tribalism that it trumps professed values.

It might be contended that tribes need values and principles to define them, hence this claim of fluidity is an exaggeration. However, the ease with which tribes shift values shows that it is real. People even develop myths that the values they profess now have always been the values of their tribe.

Tribalism has its origin in biology as humans are social animals and tribalism is the human equivalent of pack loyalty. Animals generally lack abstract principles or values, and this is one reason why tribalism trumps values—it is grounded in unthinking instinct. Tribalism is also fueled by cognitive biases. The most important is in-group bias, which is the tendency of people to see members of their own group as better than the members of other groups. This bias makes it easy for people to attribute positive qualities to members of their own tribe while easily assigning negative traits to those of other tribes. This probably also helps support value fluidity: whatever changes occur in the values professed by the tribe will still be seen as better than the values of other tribes. As might be expected, fallacious reasoning also plays a role in tribalism.

There is a fallacy, often called the “group think fallacy”, in which it is inferred that a claim is true (or something is good) because members of one’s group believe the claim (or hold to the values). This is obviously fallacious but has considerable psychological appeal. This also helps fuel value fluidity, since beliefs and values are not based on objective assessment, but by reference to the group. As would be expected, tribalism creates numerous problems.

One problem is that tribalism makes the professed values of the tribe meaningless. This is because loyalty is to the tribe rather than to the professed values. This does raise some interesting philosophical questions about the basis of tribal identity. It also creates a ship of Theseus style problem about whether there is a point at which a tribe has changed its professed values so much that it is no longer the same tribe. There are other interesting metaphysical problems here as well, in terms of what makes a tribe the same tribe across time and value changes.

A second problem is that tribalism encourages irrational behavior. Tribalists often act contrary to what seem to be their own interests and against the general welfare because of the dictates of their tribal leaders. On the positive side, tribal leaders could issue commands that do coincide with the interests of the tribal members and the general welfare. However, this would be a matter of luck.

A third problem is that tribalism makes it easy for authoritarians to gain ready-made followers who happily serve them, no matter how terrible they are. Because of these problems, it would seem best to find ways to counter tribalism.

One obvious solution is improving critical thinking, so that people can recognize the defects of tribalism. However, mere logic is obviously not enough—people also need training in goodness and commitment to virtue, as per Aristotle. But tribalism provides its members with a defense against critical thinking and training in the virtues.