One challenge in combating fake news is developing a principled distinction between the fake and the real. One reason to draw this distinction is to defend against the misuse of the term “fake news” to attack news on ideological or other irrelevant grounds. I make no pretense of being able to present a necessary and sufficient definition of fake news, but I will endeavor to provide a rough sketch. My approach is built around three attributes: intention, factuality, and methodology. I will consider each in turn.

While determining intent can be challenging, it has a role in distinguishing fake news from real news. An obvious comparison is to lying. A lie is not simply making an untrue claim but making it with an intent (typically malicious) to deceive. There are, of course, benign deceits, such as those of the arts.

There are some forms of “fake” news, namely those aimed at being humorous, that are benign. The Onion, for example, aims to entertain, as do Duffel Blog and Andy Borowitz. Being comedic in nature, they fall under the protective umbrella of art: they say untrue things to make people laugh. Though they are technically fake news, they are benign in their fakery and hence should not be treated as malicious fake news.

Other fake news operators, such as those behind the stories about Comet Ping Pong Pizza, have different intentions. Some claim to create fake news with a benign intent, professing that they want people to be more critical of the news. If this is their real intent, it has not worked out as they hoped. It is also worth considering that this profession is, at least in some cases, itself a deceit, akin to the “I was only joking” response when someone is called out for saying something awful. As such, this sort of fake news is to be condemned.

Fake news is often created to make a profit. Since legitimate news agencies also intend to make a profit, this does not differentiate the fake from the real. However, those engaged in real news do not intend to deceive for profit, whereas the fake news operators use deceit as a tool in their money-making endeavors. This is to be condemned.

Others engage in fake news for ideological reasons or to achieve political goals; their intent is to advance their agenda with intentional deceits. The classic defense of this approach is utilitarian: the good done by the lies outweighs their harm (for the morally relevant beings). While truly noble lies might be morally justified, the usual lies of fake news do not aim at the general good but at the advancement of a specific agenda that will create more harm than good for most people. As this moral question is complicated, it is fortunate that the matter of fake news is much simpler: deceit presented as real news is fake news, even if it could be justified on utilitarian grounds.

In the case of real news, the intent is to present claims that are believed to be true. This might be with the goal of profit, but it is the intent to provide truth that makes the difference. Naturally, working out intent can be challenging, but there is a fact of the matter as to why people do what they do. Real news might also be presented with the desire to advance an agenda, but if the intent is also to provide truth, then the news would remain real.

In regard to factuality, an important difference between fake and real news is that the real news endeavors to offer facts and the fake news does not. A fact is a claim that has been established as true (to the requisite degree) and this is a matter of methodology, which will be discussed below.

Factual claims are claims that are objective. This means that they are true or false regardless of how people think, feel or believe about them. For example, the claim that the universe contains dark matter is a factual claim. Factual claims can, at least in theory, be verified or disproven. In contrast, non-factual claims are not objective and cannot be verified or disproven. As such, there can be no “fake” non-factual claims.

It might be tempting to protect the expression of values (moral, political, aesthetic and so on) in the news from accusations of being fake news by arguing that they are non-factual claims and thus cannot be fake news. The problem is that while many uncritically believe value judgments are not objective, this is a matter of philosophical dispute. To assume that value claims are not factual claims would be to beg the question. But, to assume they are would also beg the question. Since I cannot hope to solve this problem, I will instead endeavor to sketch a practical guide to the difference.

In terms of non-value factual claims of the sort that appear in the news, there are established methods for testing them. As such, the way to distinguish the fake from the real is by consideration of the methodology used (and applying the relevant method).

In the case of value claims, such as the claim that reducing the size of government is the morally right thing to do, there are no such established methods to determine the truth (and there might be no truth in this context). As such, while such claims and any arguments supporting them can be criticized, they should not be regarded as news as such. Thus, they could not be fake news.

As a final point, it is also worth considering the matter of legitimate controversy. There are some factual matters that are legitimately in dispute. While not all the claims can be right (and all could be wrong), this does not entail that the claims are fake news. Because of this, to brand one side or the other as fake news simply because one disagrees with that side would be unjustified. For example, whether imposing a specific tariff would help the economy is a factual matter, but one that could be honestly debated. I now turn to methodology.

It might be wondered why the difference between fake and real news is not presented entirely in terms of one making fake claims and the other making true claims. The reason is that a real news story could turn out to be untrue and a fake news story could turn out to be correct. In terms of real news errors, reporters do make mistakes, sources are not always accurate, and so on. By pure chance, a fake news story could get the facts right, but it would not thus be real news. The critical difference between fake and real news is thus the methodology. This can be supported by drawing an analogy to science.

What differentiates real science from fake science is not that one gets it right and the other gets it wrong. Rather, it is a matter of methodology. This can be illustrated by using the dispute over dark matter in physics. If it turns out that dark matter does not exist, this will not show that the scientists were doing fake science. It would just show that they were wrong. Suppose that instead of dark matter, what is really going on is that normal matter in a parallel universe is interacting with our universe. Since I just made this up, I would not be doing real science just because I happened to get it right.

Another analogy can be made to math. As any math teacher will tell you, it is not a matter of just getting the right answer; it is a matter of getting the right answer the right way. Hence the requirement of showing one’s work. A person could guess the answer and get it right, but they would not be doing real math because they are not even doing math. Naturally, a person can be doing real math and still get the answer wrong.

Assuming these analogies hold, real news is a matter of methodology, a methodology that might fail. Many of the methods of real news are, not surprisingly, like the methods of critical thinking in philosophy. For example, there is the proper use of the argument from authority as the basis for claims. As another example, there are the methods of assessing unsupported claims against one’s own observations, one’s background information and against credible claims.

The real news uses this methodology and evidence of it is present in the news, such as identified sources, verifiable data, and so on. While a fake news story can also contain fakery about methodology, this is a key matter of distinction. Because of this, news that is based on the proper methodology would be real news, even if some might disagree with its content.

While fake news is often bizarre, one of the stranger fake claims was that the Comet Ping Pong pizzeria was part of a child sex ring led by Hillary Clinton. This fake story made the real news when Edgar M. Welch allegedly armed himself and went to the pizzeria to investigate the story. This investigation led to gunfire, although no one was injured. Mr. Welch surrendered peacefully to the police after finding no evidence of the sex ring.

Given that the story had been debunked by the New York Times, Snopes, and the Washington Post, it might be wondered why someone would believe such a claim. Laying aside the debunking, it might also be wondered why anyone would believe such a seemingly absurd claim.

Some might be tempted to dismiss people who believe fake news as fools or stupid, most likely while congratulating themselves on their own intellectual prowess. While there is no shortage of fools and everyone is stupid at least some of the time, the “people are stupid” explanation does not suffice. After all, intelligent people of all political stripes are fooled by fake news.

One reason why fake news of this sort convinces people is that it makes use of the influence of repetition. While people tend to be skeptical of odd or implausible claims when they first encounter them, there is a psychological tendency to believe claims that are heard multiple times, especially from multiple sources. While the Nazis did not invent this technique, they did show its effectiveness as a rhetorical tool. The technique of repetition is used more benignly by teachers trying to get people to memorize things. Not surprisingly, politicians and pundits also use this method under the label of “talking points.”

This psychological tendency presumably has some value. When people are honest, things that are repeated and come from multiple sources would generally be true (or at least not deceits). The repetition method also exploits a standard method of reasoning: checking with multiple sources for confirmation. However, such confirmation requires using reliable sources that do not share the same agenda. Getting multiple fake news sites to report the same fake story produces pseudo-confirmation that creates an illusion of plausibility. The defense against this is, of course, to have diverse sources of news, preferably some with less ideological slant. It is also useful to ask yourself: “Although I have heard this many times, is there evidence it is true?”

Another reason fake news can be convincing is that the fake news sites often engage in active defense of their fake news. This includes using other fake sources to “confirm” their stories, attacks on the credibility of real news sources, and direct attacks on articles by real news sources that expose a fake news story. This defense creates the illusion that the fake news stories are real and that the real news stories are fake.

Some of these defenses work through psychology: one might think that such a defense would only be mounted if there were truth worthy of the effort. Some appeal to reason: if the real news story exposing fake news is systematically torn down step by step, this creates the illusion of a reasoned argument disproving the claim that the fake story is fake. Attempts to discredit the sources also misuse legitimate critical assessment methods. The fake news sites accuse the real news sources of being biased, bought, and so on. These are legitimate concerns when assessing a source; the problem is not the method but the fact that the claims are typically untrue.

Those who do not want to be duped can counter this fake news defense by the usual method of checking multiple, diverse and reliable sources. But this is increasingly difficult as fake news sites proliferate and grow more sophisticated.

A third reason that fake news can seem accurate is that it has supporters who use social media to defend the fake stories and attack the real news. Some of these people are honest in that they believe they are saying true things. Others are aware the news is fake. Some even create fake identities to make themselves appear credible. For example, one defender of Pizzagate identified himself as “Representative Steven Smith of the 15th District of Georgia.” Georgia has only 14 districts, but most people would not know this. All these supporters create the illusion of credibility, making it difficult for people to ferret out the truth. After all, most people expect other people to be honest and to get basic facts right most of the time, as this is a basic social agreement and a foundation of civilization. Fake news, among its other harms, is eroding this foundation.

The defense against this is to research the sources defending a news story. If the defenders are mostly fake themselves, this would indicate that the news story might be fake. However, fake defenders do not prove the story is fake, and it is easy to imagine the tactic of using fake defenders to make people feel that a real news story is fake. For example, a made-up radical “liberal source” defending a story might be used to try to make conservatives feel that a real news story is fake.

A fourth reason that fake news can seem accurate is that the real news has been subject to sustained attacks, mostly from the political right in the United States. Republicans have made the claim that the media is liberally biased a stock talking point, which has no doubt influenced people. Trump took it even further, accusing journalists of being terrible people and liars (ironically, for reporting that his lies are lies). Given the sustained attack on the news, it is no wonder that many people do not regard the real news as reliable. As such, the stories that debunk the fake news are typically rejected on the grounds of alleged liberal bias. This does, of course, make use of a legitimate method of assessing sources: if a source is biased, then it loses credibility. The problem is that rather than being merely skeptical of the mainstream media, many people reject its claims uncritically because of the alleged bias. This is not a proper application of the method, as the doubt needs to be proportional to the evidence of bias.

In regard to people believing seemingly absurd claims, there are both good and bad reasons for this. One good reason is that there are enough cases of the seemingly absurd turning out to be true. In the case of Pizzagate, people hearing about it probably had stories about Jared Fogle and Bill Cosby in mind. They probably also heard stories about cases of real sex rings. Given this background, the idea that Hillary Clinton was tied to a sex ring might seem to have some plausibility. However, the use of such background information should also be tempered by other background information, such as how unlikely it is that Hillary Clinton was running a sex ring out of the basement of a pizza place. A pizza place that has no basement.

The bad reason is that people have a psychological tendency to believe what matches their ideology and existing opinions. So, people who already disliked Hillary Clinton would tend to find such stories appealing because they would feel true. Such psychological bias is hard to fight: people take strong feelings as proof and often double down in the face of facts to the contrary. Defending against bias is probably the hardest method, as it requires training and practice in becoming aware of how feelings impact the assessment of a claim and developing the ability to shift into a “neutral” assessment mode.

Given that fake news is spreading like a plague, it is wise to develop defenses against it to avoid being duped, perhaps to the point where one is led to commit crimes because of lies.

While analyzing the impact of fake news on American elections will be an ongoing project, there are excellent reasons to believe it has been a real factor. For example, BuzzFeed’s analysis showed that fake news stories outperformed real news stories in 2016. When confronted with the claim that fake news on Facebook influenced the election results, Mark Zuckerberg’s initial reaction was denial. However, as critics have pointed out, to say that Facebook does not influence people is to tell advertisers that they are wasting their money on Facebook. Whatever the truth of the matter, Zuckerberg cannot consistently pitch the influence of Facebook to his customers while denying that it has such influence; one of these claims must be mistaken.

While my own observations do not constitute a proper study, I routinely observed people on Facebook treating fake news stories as if they were real. In some cases, these errors were humorous, as people had mistaken satire for real news. In other cases, they were not so funny, as people were enraged over things that had not actually happened, such as Trump’s lies about migrants. There is also the fact that public figures (such as Trump) and pundits repeat fake news stories acquired from Facebook (and other sources). As such, fake news is a real problem on Facebook. As is AI slop.

As president-elect, Trump continued to spew untruths, and the attacks on the mainstream media have continued and even escalated in his second term. This ecosystem is ideal for fake news to thrive. As such, it seems likely that while fake news may decline to some degree, it will remain a factor as long as it is influential or profitable. This is where Facebook comes in. While fake news sites can always have their own web pages, Facebook serves up the fake news to a huge customer base and thus drives the click-based profits (thanks to things like Google advertising) of these sites. This powerful role of Facebook gives rise to moral concerns about its accountability.

One obvious approach is to claim that Facebook has no moral responsibility in regard to policing fake news. This could be argued by drawing an analogy between Facebook and a delivery company like UPS or FedEx: rather than delivering physical packages, Facebook is delivering news.

A delivery company is responsible for delivering a package intact and within the specified time. However, it does not have a moral responsibility regarding what is shipped. Suppose, for example, that businesses arose selling “Artisanal Macedonian Pudding” and purported that it was real pudding when, in fact, it was a blend of sugar and feces that merely looked like pudding. Some customers fail to recognize it for what it is and happily shovel it into their pudding ports, probably getting sick, but still loving the taste. If the delivery company were criticized for delivering the pudding, they would be right to say that they are not responsible for the “pudding,” as they merely deliver packages. The responsibility lies with the “pudding” companies, and with the customers for not recognizing sugary feces as feces. If the analogy holds, then Facebook is just delivering fake news like a delivery company delivering “Macedonian Pudding” and is not morally responsible for the contents of the packages.

A possible counter to this is that once Facebook knows that a site is a fake news site, then they are morally responsible for continuing to deliver the fake news. Going with the delivery analogy, once the delivery company is aware that “Artisanal Macedonian Pudding” is sugar and feces, they have a moral obligation to cease their business with those making this dangerous product. This could be countered by arguing that if the customer wants the package of “pudding”, then it is morally fine for the delivery company to provide it. However, this would seem to require that the customer knows they are getting sugar and feces—otherwise the delivery company is knowingly participating in a deceit and the distribution of a harmful product. This would seem to be morally wrong.

Another approach to countering this argument is to use a different analogy: Facebook is not like a delivery company; it is like a restaurant selling the product. Going back to the “pudding”, a restaurant that knowingly purchased and served sugar and feces as pudding would be morally accountable for this misdeed. By this analogy, once Facebook knows they are profiting from selling fake news, they are morally accountable and in the wrong if they fail to address this. A possible response to this is to contend that Facebook is not selling the fake news; but this leads to the question of what Facebook is doing.

One way to look at Facebook is that the fake news is just like advertising in any other media. In this case, the company selling the ad is not morally accountable for the content of the ad or the quality of the product. Going back to the “pudding,” if one company is selling sugar and feces as pudding, the company running the advertising is not morally responsible. The easy counter to this is that once the company selling the ads knows that the “pudding” is sugar and feces, then it would be morally wrong to be a party to this harmful deception. Likewise for Facebook treating fake news as advertising.

Another way to look at Facebook is that it is serving as a news media company and is in the business of providing the news. Going back to the pudding analogy, Facebook would be in the pudding business as a reseller, selling sugar and shit as real pudding. This would seem to oblige Facebook to ensure that the news it provides is accurate and to not distribute news it knows is fake. This assumes a view of journalistic ethics that is obviously not universally accepted, but a commitment to the truth seems to be a necessary bedrock for any worthwhile media ethics.

While fake news presumably dates to the origin of news, the 2016 United States presidential election saw a huge surge in the volume of fakery. While some of it arose from partisan maneuvering, the majority seems to have been driven by the profit motive: fake news drives revenue generating clicks. While the motive might have been money, there has been serious speculation that the fake news (especially on Facebook) helped Trump win the 2016 election. While those who backed Trump would presumably be pleased by this outcome, the plague of fake news should be worrisome to anyone who values the truth, regardless of their political ideology. After all, fake news could presumably be just as helpful to the left as the right. That said, the right lies while the mainstream left remains silent. In any case, fake news is damaging and is worth combating.

While it is often claimed that most do not have the time to be informed about the world, if someone has the time to read fake news, then they have the time to think critically about it. This critical thinking should, of course, go beyond just fake news and should extend to all important information. Fortunately, thinking critically about claims is surprisingly quick and easy.

I have been teaching students to be critical of claims in general and the news in particular for over two decades, and what follows is based on what I teach in class (drawn, in part, from the text I have used: Critical Thinking by Moore & Parker). I would recommend this book to general readers were it not, like most textbooks, absurdly expensive. But on to the critical thinking process that should be applied to claims in general and news in particular.

While many claims are not worth the effort to check, others are important enough to subject to scrutiny. When applying critical thinking to a claim, the goal is to determine whether you should rationally accept it as true, reject it as false or suspend judgment. There can be varying degrees of acceptance and rejection, so it is also worth considering how confident you should be in your judgment.

The first step in assessing a claim is to match it against your own observations, should you have relevant observations. While observations are not infallible, if a claim goes against what you have directly observed, then that is a strike against accepting the claim. This standard is not commonly used in the case of fake news because most of what is reported is not something that would be observed directly by the typical person. That said, sometimes this does apply. For example, if a news story claims that a major riot occurred near where you live and you saw nothing happen there, then that would indicate the story is in error.

The second step in assessment is to judge the claim against your background information, that is, all your relevant beliefs and knowledge about the matter. The application is straightforward and just involves asking yourself whether the claim seems plausible when you give it some thought. For example, if a news story claims that Joe Biden plans to start an armed rebellion against Trump, then this should be regarded as wildly implausible by anyone with accurate background knowledge about Biden.

There are, of course, some obvious problems with using background information as a test. One is that the quality of background information varies and depends on the person’s experiences and education (this is not limited to formal education). Roughly put, being a good judge of claims requires already having accurate information stored away in your mind. All of us have many beliefs that are false; the problem is that we generally do not know they are false. If we did, then we would no longer believe them. Probably.

A second point of concern is the influence of wishful thinking. This is a fallacy (an error in reasoning) in which a person concludes that a claim is true because they want it to be true. Alternatively, a person can fallaciously infer that a claim is false because they want it to be false. This is poor reasoning because wanting a claim to be true or false does not make it so. Psychologically, people tend to disengage their critical faculties when they really want something to be true (or false).

For example, someone who really hates Trump would want to believe that negative claims about him are true, so they would tend to accept them uncritically. As another example, someone who really likes Trump would want positive claims about him to be true, so they would accept them without thought.

The defense against wishful thinking of this sort is to be on guard against yourself by being aware of your biases. If you really want something to be true (or false), ask yourself if you have any reason to believe it beyond just wanting it to be true (or false). For example, I am not a fan of Trump and thus would tend to want negative claims about him to be true. So, I must consider that when assessing such claims. Unfortunately for America, much of what Trump claims is objectively untrue.

A third point of concern is related to wishful thinking and could be called the fallacy of fearful or hateful thinking. While people tend to believe what they want to believe, they also tend to believe claims that match their hates and fears. That is, they engage in the apparent paradox of believing what they do not want to believe. Fear and hate impact people in a very predictable way: they make people stupid when it comes to assessing claims.

For example, there are Americans who hate migrants and fear that they will eat cats and dogs. While these people would presumably wish that claims about this were false, they will often believe such claims because the claims correspond with their hate and fear. Ironically, their great desire for it not to be true motivates them to feel that it is true, even when it is not.

The defense against this is to consider how a claim makes you feel. If you feel hatred or fear, you should be very careful in assessing the claim. If a news claim seems tailored to push your buttons, then there is a decent chance that it is fake news. This is not to say that it must be fake, just that it is important to be extra vigilant about claims that are extremely appealing to your hates and fears. This is a very hard thing to do since it is easy to be ruled by hate and fear.

The third step involves assessing the source of the claim. While the source of a claim does not guarantee the claim is true (or false), reliable sources are obviously more likely to get things right than unreliable sources. When you believe a claim based on its source, you are making use of what philosophers call an argument from authority. The gist of this reasoning is that the claim being made is true because the source is a legitimate authority on the matter. While people tend to regard as credible sources those that match their own ideology, the rational way to assess a source involves considering the following factors.

First, the source needs to have sufficient expertise in the subject matter in question. One rather obvious challenge here is being able to judge whether the specific author or news source has sufficient expertise. In general, the question is whether a person (or the organization in general) has the relevant qualities and these are assessed in terms of such factors as education, experience, reputation, accomplishments and positions. In general, professional news agencies have such experts. While people tend to dismiss Fox, CNN, and MSNBC depending on their own ideology, their actual news (as opposed to editorial pieces or opinion masquerading as news) tends to be factually accurate. Unknown sources tend to be lacking in these areas. It is also wise to be on guard against fake news sources pretending to be real sources. This can be countered by checking the site address against the official and confirmed address of professional news sources.

Second, the claim made needs to be within the source’s area(s) of expertise. While a person might be very capable in one area, expertise is not universal. So, for example, a businessperson talking about their own business would be an expert, but regarding them as a reliable source for political or scientific claims would be an error (unless they also have expertise in those areas).

Third, the claim should be consistent with the views of the majority of qualified experts in the field. In the case of news, using this standard involves checking multiple reliable sources to confirm the claim. While people tend to pick their news sources based on their ideology, the basic facts of major and significant events would be quickly picked up and reported by all professional news agencies such as Fox News, NPR and CNN. If a seemingly major story does not show up in the professional news sources, there is a good chance it is fake news.

It is also useful to check with the fact checkers and debunkers, such as PolitiFact and Snopes. While no source is perfect, they do a good job of assessing claims, something that does not make liars very happy. If a claim is flagged by these reliable sources, there is an excellent chance it is not true.

Fourth, the source must not be significantly biased. Bias can include such factors as having a very strong ideological slant (such as MSNBC and Fox News) as well as having a financial interest in the matter. Fake news is typically crafted to feed into ideological biases, so if an alleged news story seems to fit an ideology too well, there is a decent chance that it is fake. However, this is not a guarantee that a story is fake; reality sometimes matches ideological slants. This sort of bias can also lead real news sources to present fake news, so you should be critical even of professional sources, especially when they match your ideology.

While these methods are not flawless, they are very useful in sorting out the fake from the true. While I have said this before, it is worth repeating that we should be even more critical of news that matches our views. This is because when we want to believe, we tend to do so too easily.

While I was required to take Epistemology in graduate school, I was not interested in the study of knowledge until I started teaching it. While remaining professionally neutral in the classroom, I now include a section on the ethics of belief in my epistemology class and discuss, in general terms, such things as tribal epistemology. Outside of the classroom I am free to discuss my own views on epistemology in the context of politics, and it is a fascinating subject. My younger self from graduate school would be surprised at the words “epistemology” and “fascinating” used together.

While COVID-19 was a nightmare for the world, the professed beliefs of Trump supporters about the pandemic provide an excellent case study in belief. As anyone familiar with these beliefs knows, they form a strange set of inconsistent and even contradictory claims. I am not claiming that every Trump supporter believes all these claims, and I am not claiming that only Trump supporters believe them; but these are all claims professed by those who support Trump.

At the start of the pandemic Trump placed the blame on China and referred to it as “the China virus.” His supporters generally accepted this view. The role of China varies depending on which explanation is offered. Some make the true claim that the virus originated in China. Others make the unsupported claim that it escaped (or was released intentionally) from a lab. On this view, the virus is generally presented as something bad. After all, it makes no sense to blame China unless the virus is a real problem.

There are also other conspiracy theories about the pandemic. One infamous theory is that the pandemic was real but caused by 5G. This would be inconsistent with the China virus theory, but one could preserve the China link by claiming that 5G technology is made in China.

Trump also advanced the idea that the pandemic did not exist, that it was a hoax. This was echoed by his supporters—although some also advanced the theory that the Democrats infected Trump with the virus. The hoax idea was presented in various ways. For example, on some accounts the virus does exist but is no worse than the flu. This view led to an active anti-mask movement and death threats against public health experts. The anti-mask views make sense if one thinks the virus was a hoax but make less sense if one thinks that the virus was bad enough to warrant making China pay. If it was a hoax perpetrated by the Democrats, then it makes no sense to hold China accountable. And if the virus did real damage and China should pay, then it makes no sense to claim it is a hoax. To be fair, these could be combined into the claim that China and the Democrats ran a worldwide hoax with the cooperation of all governments to harm Trump. Reconciling the 5G theory with the hoax theory would be challenging: if 5G was the cause of the pandemic, then it was not a hoax. And if it was a hoax, there was no pandemic for 5G to cause.

While Trump supporters profess to believe the pandemic was a hoax, over 80% of Republicans claimed to believe that Trump had done a great job with the pandemic. His supporters claimed that he took rapid action (he did not) and that his response was very effective (it was not). Trump also attempted to take credit for the forthcoming vaccines and claimed, without evidence, that the FDA and Democrats stalled them. If the pandemic was a hoax, then it makes no sense to claim that Trump acted rapidly and effectively to counter the pandemic. This is because there would be no pandemic to counter. It could be claimed that Trump acted to counter the hoax, but this would be hard to reconcile with Trump’s claims about the vaccine. If the pandemic was a hoax, then there was no need for a vaccine, and taking credit for a useless vaccine would be silly. A Trump supporter could take the view that the pandemic was no worse than the flu and then credit Trump with addressing something no worse than the flu and developing the equivalent of a flu vaccine. But to the degree that Trump downplayed (lied about) the pandemic, this would undercut claims of how significant his alleged success should be considered.

As I noted earlier, I am not claiming that every Trump supporter believes all these claims. For example, the 5G pandemic theory was not universally embraced by Trump supporters (and is surely held by some who do not support him). However, Trump supporters generally seem to profess belief in many of these claims even though they are not consistent, and some would seem to lead to contradictions.

In logic, two claims are inconsistent when both could be false, but both cannot be true. To use my usual example, the claim that my water bottle contains only vodka and the claim that it contains only water are inconsistent with each other. If the bottle contains only vodka, then it does not contain only water and vice versa. But both could be false: the bottle could be empty. Or it could contain tequila. Many of the claims Trump supporters profess to believe about the pandemic seem inconsistent. For example, the claim that the pandemic was caused by 5G is not consistent with the claim that it is a hoax.
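The vodka-and-water example can be sketched in code. This is a toy model, with the list of possible contents and the claim-checking functions invented purely for illustration: two claims are inconsistent when no situation makes both true, even though some situation makes both false.

```python
# Hypothetical model of the water-bottle example: each claim is a
# function of what the bottle actually contains.
contents_options = ["only water", "only vodka", "empty", "tequila"]

def only_vodka(c): return c == "only vodka"
def only_water(c): return c == "only water"

# Inconsistent claims: no situation makes both true...
both_true = [c for c in contents_options if only_vodka(c) and only_water(c)]
# ...but some situations make both false.
both_false = [c for c in contents_options if not only_vodka(c) and not only_water(c)]

print(both_true)   # [] — no contents make both claims true
print(both_false)  # ['empty', 'tequila'] — both claims false together
```

The empty first list is what inconsistency amounts to here; the non-empty second list is what separates inconsistency from contradiction.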

In logic, two claims contradict one another when one of them must be false and the other must be true. A contradiction is a claim that must be false, and is false because of its logical structure. The stock example in logic is the conjunction P & -P. Since a conjunction is true when the two claims being conjoined are true and false otherwise, this claim is always false, at least on the assumption that any claim is true or false (but not both). So, if P is true, then -P must be false (and vice versa). Some of the claims Trump supporters profess to believe would seem to entail contradictory claims. For example, if it is claimed that the pandemic was caused by 5G, then this would entail that the pandemic is not a hoax, which would contradict the claim that it is a hoax. Naturally, one could argue that the pandemic was caused by 5G and is also a hoax, provided that the nature of the hoax is defined in a way that allows it to be caused by 5G. As another example, the conspiracy theory that the pandemic was caused by a bioweapon released (intentionally or not) by China (or someone else) would entail that it was not a hoax. This would contradict the claim that it is a hoax. Again, one could try to craft the hoax claims in a way that the pandemic is both a hoax and caused by a bioweapon. Claiming that it is a hoax about a bioweapon would not do this, since a hoax about a bioweapon is not a bioweapon; it is just a hoax.
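The truth table for P & -P can be generated mechanically; a minimal sketch:

```python
# The stock contradiction P & -P: enumerate every truth value of P and
# evaluate the conjunction for each.
rows = [(P, P and not P) for P in (True, False)]
for P, value in rows:
    print(P, value)
# The conjunction is false on both rows, so P & -P can never be true:
# its falsity follows from its logical structure alone.
```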

From the standpoint of truth-functional logic (a logic in which the truth of a claim depends on the truth of the parts), the claims made by Trump supporters about the pandemic cannot all be true. In science fiction, a robot or computer that attempted to accept all these claims would suffer some sci-fi logic failure, perhaps exploding. In reality, mapping out the logical relations between these claims would show that they cannot all be true and there would be no explosions (one hopes). But there is the interesting question of how people can hold to beliefs that cannot all be true and some of which lead to contradictions.

In philosophy, epistemologists (and others) often speak of beliefs as having intentionality. That is, beliefs have aboutness. When a person believes something about their world, they take their belief to correspond to reality. But while a belief has aboutness, it need not be about reality. As an example, if Ted believes in unicorns, his belief is about unicorns (although philosophers disagree about beliefs about things that are not real) but not about real unicorns, because there are no unicorns. People can also believe that all the claims in a set are true, even though it is not possible for them all to be true. That is, that set contains beliefs that are inconsistent with each other (or even contradictory). A person can even believe that a contradiction is true. Unlike truth-functional logic, the truth of the claim “Person A believes claim C” does not depend on the truth of the parts; only on the truth of the claim about A believing C. A crude way to look at the matter is to see belief as like a Word file in which one can type any sentence, rather than being like a computer program or circuit design that would fail if it contained logical inconsistencies or contradictions. So, saying that a person believes something is like saying it is in their Word file. Humans are clearly able to believe sets of inconsistent claims and even act on those beliefs, which raises many interesting questions about belief formation and how belief impacts actions. As a closing point, people can certainly reconcile apparently inconsistent beliefs by not really believing in some or all of them, professing that a claim is true when one believes it is not. That is, lying.

After each eruption of gun violence, there is also a corresponding eruption in the debates over gun issues. As with all highly charged issues, people are primarily driven by their emotions rather than by reason. Being a philosopher, I like to delude myself with the thought that it is possible to approach an issue rationally. Like many other philosophers, I am irritated when people say things like “I feel that there should be more gun control” or “I feel that gun rights are important.” Because of this, when I read student papers I strike through all “inappropriate” uses of “feel” and replace them with “think.” This is, of course, done with a subconscious sense of smug superiority. Or so it was before I started reflecting on emotions in the context of gun issues. In this essay I will endeavor a journey through the treacherous landscape of feeling and thinking in relation to gun issues. I’ll begin with arguments.

As any philosopher can tell you, an argument consists of a claim, the conclusion, that is supposed to be supported by the evidence or reasons, the premises, that are given. In the context of logic, as opposed to that of persuasion, there are two standards for assessing an argument. The first is an assessment of the quality of the logic: determining how well the premises support the conclusion. The second is an assessment of the plausibility of the premises: determining the quality of the evidence.

On the face of it, assessing the quality of the logic should be an objective matter. For deductive arguments (arguments whose premises are supposed to guarantee the truth of the conclusion), this is the case. Deductive arguments can be checked for validity using such things as Venn diagrams, truth tables and proofs. If a person knows what she is doing, she can confirm beyond all doubt whether a deductive argument is valid or not. A valid argument is an argument such that if its premises were true, then its conclusion must be true. While a person might stubbornly refuse to accept a valid argument as valid, this would be as foolish as stubbornly refusing to accept that 2+2=4 or that triangles have three sides. As an example, consider the following valid argument:

 

Premise 1: If an assault weapon ban would reduce gun violence, then congress should pass an assault weapon ban.

Premise 2: An assault weapon ban would reduce gun violence.

Conclusion: Congress should pass an assault weapon ban.

 

This argument is valid; in fact, it is an example of the classic deductive argument known as modus ponens or affirming the antecedent. As such, questioning the logic of the argument would just reveal one’s ignorance of logic. Before anyone gets outraged, it is important to note that an argument being valid does not entail that any of its content is true. While this endlessly confuses students, a valid argument need not have true premises or a true conclusion; what validity guarantees is only that if the premises are all true, then the conclusion must be true. Because of this, while the validity of the above argument is beyond question, one could take issue with the premises. They could, along with the conclusion, be false although the argument is unquestionably a valid deductive argument. For those who might be interested, an argument that is valid and has all true premises is a sound argument. An argument that does not meet these conditions is unsound.
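The truth-table test for validity can be sketched in a few lines: enumerate every assignment of truth values and look for a row in which all the premises are true while the conclusion is false. For modus ponens there is no such row, which is exactly what its validity amounts to. The helper `implies` is just the standard truth-functional conditional.

```python
from itertools import product

def implies(p, q):
    # Truth-functional conditional: false only when p is true and q false.
    return (not p) or q

# Modus ponens: premises are "If P then Q" and "P"; conclusion is "Q".
# A counterexample row would make both premises true and the conclusion false.
counterexamples = [
    (P, Q)
    for P, Q in product((True, False), repeat=2)
    if implies(P, Q) and P and not Q
]
print(counterexamples)  # [] — no counterexample row exists, so the form is valid
```

Note that the test says nothing about whether the premises about assault weapons are actually true; it only certifies the form.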

Unfortunately, there is usually no perfect, objective test for the truth of a premise. In general, premises are assessed in terms of how well they match observations, background information and credible claims from credible sources (which leads to concerns about determining credibility). As should be expected, people tend to prefer premises that match their feelings. This is true for everyone, be that person the head of the NRA or a latte-sipping liberal academic who trembles at the thought of even seeing a gun. Because of this, a person who wants to fairly and justly assess the premises of any argument must be willing to understand their own feelings and work out how they influence their judgment. Since people, as John Locke noted in his classic essay on enthusiasm, tend to evaluate claims based on the strength of their feelings, doing this is difficult. People think they are right because they feel strongly about something and are least likely to engage in critical assessment when they feel strongly.

While deductive logic allows for perfectly objective assessment, it is not the logic that is commonly used in debates over political issues or in general. The most commonly used logic is inductive logic.

Inductive arguments are arguments, so an inductive argument will have one or more premises that are supposed to support a conclusion. Unlike deductive arguments, inductive arguments do not offer certainty and instead deal in likelihood. A logically good inductive argument is called a strong argument: one whose premises, if true, would probably make the conclusion true. A bad inductive argument is a weak one. Unlike the case of validity, the strength of an inductive argument is judged by applying the standards specific to that sort of inductive argument to the argument in question. Consider, as an example, the following argument:

 

Premise 1: Tens of thousands of people die each year as a result of automobiles.

Premise 2: Tens of thousands of people die each year as a result of guns.

Premise 3: The tens of thousands of deaths by automobiles are morally acceptable.

Conclusion: The tens of thousands of deaths by gun are also morally acceptable.

 

This is a simple argument by analogy in which it is argued that since cars and guns are alike, if we accept automobile fatalities then we should also accept gun fatalities. Being an inductive argument, there is no perfect, objective test to determine whether the argument is strong or not. Rather, the argument is assessed in terms of how well it meets the standards of an argument by analogy. The gist of these standards is that the more alike the two things (guns and cars) are, the stronger the argument. Likewise, the less alike they are, the weaker the argument.

While the standards are reasonably objective, their application admits considerable subjectivity. In the case of guns and cars, people will differ in terms of how they see them in regard to similarities and differences. As would be suspected, the lenses through which people see this matter will be deeply colored by their emotions and psychological backstory. As such, rationally assessing inductive arguments is especially challenging: a person must sort through the influence of emotions and psychology on her evaluation of both the premises and the reasoning. Since arguments about guns are generally inductive, it is no wonder the debate is messy, even on the rare occasions when people are sincerely trying to be rational and objective.

The lesson here is that a person needs to think about how she feels before she can think about what she thinks. Since this also applies to me, my next essay will be about exploring my psychological backstory in regard to guns.

Back in 2016 my husky, Isis, and I had slowed down since we teamed up in 2004, because pulling so many years will slow down both man and dog. When Isis faced a crisis, most likely due to the wear of time on her spine, the steroids she was prescribed helped address the pain and inflammation, and for a while she was tail up and bright-eyed once more.

In my previous essay I looked at using causal reasoning on a small scale by applying the methods of difference and agreement. In this essay I will look at thinking critically about experiments and studies.

The gold standard in science is the controlled cause to effect experiment. The objective of an experiment is to determine the effect of a cause. As such, the question is “I wonder what this does?” While conducting such an experiment can be complicated and difficult, the basic idea is simple.

The first step is to have a question about a causal agent. For example, it might be wondered what effect steroids have on arthritis in elderly dogs. The second step is to determine the target population, which might already be settled by the first step; in the example, elderly dogs would be the target population. The third step is to pull a random sample from the target population. This sample needs to be representative, which means it needs to be like the target population. For example, a sample from the population of elderly dogs would ideally include all breeds of dogs, male dogs, female dogs, and so on for all relevant qualities of dogs. If a sample is not properly taken it can be biased. The problem with a biased sample is that the inference will be weak because the sample might not be adequately like the general population. The sample also needs to be large enough. A sample that is too small will also fail to adequately support the inference drawn from the experiment.

The fourth step involves splitting the sample into the control group and the experimental group. These groups need to be as similar as possible (and can be made of the same individuals). The reason they need to be alike is because in the fifth step the experimenters introduce the cause (such as steroids) to the experimental group and the experiment is run to see what difference this makes between the two groups. The final step is getting the results and determining if the difference is statistically significant. This occurs when the difference between the two groups can be confidently attributed to the presence of the cause (as opposed to chance or other factors). While calculating this can be complicated, when assessing an experiment (such as a clinical trial) it is easy enough to compare the number of individuals in the sample to the difference between the experimental and control groups. This handy table from Critical Thinking makes this easy and also shows the importance of having a large enough sample.

 

Number in Experimental Group            Approximate Figure That the Difference Must Exceed
(with similarly sized control group)    to Be Statistically Significant (in percentage points)

   10                                   40
   25                                   27
   50                                   19
  100                                   13
  250                                    8
  500                                    6
1,000                                    4
1,500                                    3

 

Many “clinical trials” mentioned in articles and blog posts have very small sample sizes and this can make their results all but meaningless. This table also shows why anecdotal evidence is fallacious: a sample size of one is useless when it comes to an experiment.
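As a rough sanity check on figures like these, the threshold can be sketched using the normal approximation for the difference between two proportions. This is my own back-of-the-envelope version, not necessarily the method behind the textbook's table, and it assumes the worst case of a 50% rate in each group; the results land close to (slightly above) the table's figures.

```python
import math

def rough_threshold(n):
    """Approximate 95% significance threshold (in percentage points) for the
    difference between two groups of size n, assuming the worst case p = 0.5."""
    # Standard error of the difference of two proportions:
    # sqrt(p(1-p)/n + p(1-p)/n) with p = 0.5 gives sqrt(0.5/n).
    se = math.sqrt(0.5 / n)
    return 1.96 * se * 100  # two-sided 95% z-value, scaled to percentage points

for n in (10, 25, 50, 100, 250, 500, 1000, 1500):
    print(n, round(rough_threshold(n), 1))
```

The key point survives the approximation: the threshold shrinks only with the square root of the sample size, so tiny samples need enormous differences before the result means anything.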

The above table also assumes that the experiment is run correctly: the sample was representative, the control group was adequately matched to the experimental group, the experimenters were not biased, and so on for all the relevant factors. As such, when considering the results of an experiment it is important to consider those factors as well. If, for example, you are reading an article about an herbal supplement for arthritic dogs and it mentions a clinical trial, you would want to check on the sample size, the difference between the two groups and determine whether the experiment was also properly conducted. Without this information, you would need to rely entirely on the credibility of the source. If the source is credible and claims that the experiment was conducted properly, then it would be reasonable to trust the results. If the source’s credibility is in question, then trust should be withheld. Assessing credibility is a matter of determining expertise and the goal is to avoid being a victim of a fallacious appeal to authority. Here is a short checklist for determining whether a person (or source) is an expert or not:

 

  • The person has sufficient expertise in the subject matter in question.
  • The claim being made by the person is within her area(s) of expertise.
  • There is an adequate degree of agreement among the other experts in the subject in question.
  • The person in question is not significantly biased.
  • The area of expertise is a legitimate area or discipline.
  • The authority in question must be identified.

 

While the experiment is the gold standard, there are times when it cannot be used. In some cases, this is a matter of ethics: exposing people or animals to something potentially dangerous might be deemed morally unacceptable. In other cases, it is a matter of practicality or necessity. In such cases, studies are used.

One type of study is the non-experimental cause to effect study. This is identical to the cause to effect experiment with one critical difference: the experimental group is not exposed to the cause by those running the study. For example, a study might be conducted of dogs who recovered from Lyme disease to see what long-term effects it has on them. It would be cruel to give dogs Lyme disease to study its effects, although researchers often try to justify such cruelty in the name of progress.

The study, as would be expected, runs in the same basic way as the experiment and if there is a statistically significant difference between the two groups (and it has been adequately conducted) then it is reasonable to make the relevant inference about the effect of the cause in question.

While useful, the study is weaker than the experiment. This is because those conducting the study must take what they get as the experimental group is already exposed to the cause and this can create problems in properly sorting out the effect of the cause in question. As such, while a properly run experiment can still get erroneous results, a properly run study is even more likely to have issues.

A second type of study is the effect to cause study. It differs from the cause to effect experiment and study in that the effect is known but the cause is not. Hence, the goal is to infer an unknown cause from the known effect. It also differs from the experiment in that those conducting the study obviously do not introduce the cause.

This study is conducted by comparing the experimental group and the control group (which are, ideally, as similar as possible) to sort out a likely cause by considering the differences between them. As would be expected, this method is less reliable than the others since those doing the study are trying to backtrack from an effect to a cause. If considerable time has passed since the suspected cause, this can make the matter even more difficult to sort out. Those conducting the study also must work with the experimental group they happen to get, and this can introduce complications into the study, making a strong inference problematic.

An example of this would be a study of elderly dogs who suffer from paw knuckling (the paw flips over so the dog is walking on the top of the paw) to determine the cause of this effect. As one might suspect, finding the cause would be challenging as there would be a multitude of potential causes in the history of the dogs ranging from injury to disease. It is also likely that there are many causes in play here, and this would require sorting out the different causes for this same effect. Because of such factors, the effect to cause study is the weakest of the three and supports the lowest level of confidence in its results even when conducted properly. This explains why it can be so difficult for researchers to determine the causes of many problems that, for example, elderly dogs suffer from.

In the case of Isis, the steroids that she was taking were well-studied, so it was quite reasonable for me to believe that they were a causal factor in her remarkable but all too brief recovery. I do not, however, know for sure what caused her knuckling as there are so many potential causes for that effect. However, the important thing is that she was able to walk normally about 90% of the time and her tail was back in the air, showing that she was a happy husky.

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a sprinting blur of fur, she was reduced to sauntering. Still, lesser beasts feared her (and to a husky, all creatures are lesser beasts) and the sun was warm in the backyard, so her life was good even at the end.  

Faced with the challenge of keeping her healthy and happy, I relied a great deal on what I learned as a philosopher. As noted in the preceding essay, my philosophical skills kept me from falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better, although more cases of something bad (like arthritis pain) would be undesirable from other standpoints. The two cases can involve the same individual at different times; they need not involve different individuals (though the method works with different individuals as well). For example, when sorting out Isis’ knuckling problem the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also investigated other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy and conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be a coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect, especially since she had eaten peanut butter ever since we became a pack. It is also important to consider that an alleged cause might be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage.

You must also keep in mind the possibility of reversed causation, which is when the alleged cause is the effect. For example, a person might think that limping is causing knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be easy. For example, if a dog slips and falls, then has trouble walking, the most likely cause is the fall. But it could still be something else. In other cases, sorting out the cause can be difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert, such as a vet. Medical tests, for example, are useful for sorting out the differences and finding a likely cause.

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of soreness, I started giving her senior dog food, glucosamine and extra protein. What followed was an improvement in her mobility and the absence of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I did consider that I could be wrong. Fortunately, I did have good evidence that the steroids Isis was prescribed worked as she made a remarkable improvement after starting them and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids were the cause of her improvement, though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all cases. In this method, the cases exhibiting the effect (such as knuckling) are considered to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
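The two methods can be sketched as set operations over the factors present in each case. The cases and factors below are invented purely for illustration; real causal work would still need to rule out coincidence, common causes, and reversed causation among the surviving candidates.

```python
# Toy cases for the knuckling example; each case is the set of factors present.
effect_cases = [   # cases in which knuckling occurred
    {"long run", "peanut butter", "nerve issue"},
    {"short walk", "nerve issue", "hot day"},
]
no_effect_cases = [  # similar cases in which it did not
    {"short walk", "peanut butter", "hot day"},
]

# Method of agreement: keep only the factors common to every effect case.
common = set.intersection(*effect_cases)

# Method of difference: of those, drop any factor that also appears in
# cases where the effect was absent.
candidates = common - set.union(*no_effect_cases)

print(candidates)  # {'nerve issue'}
```

Agreement narrows the field to a hypothesis; difference then tests whether that common factor is missing when the effect is missing, which is the two-step approach described above.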

One of the main weaknesses of these methods is that they tend to have very small sample sizes, sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies, which is the subject of the next essay in this series.

My Siberian husky, Isis, joined the pack in 2004 at the age of one. It took her a little while to realize that my house was now her house. She set out to chew all that could be chewed, presumably as part of some sort of imperative of destruction. Eventually, she came to realize that she was chewing her stuff. More likely, joining me on 16-mile runs wore the chew out of her.

As the years went by, we both slowed down. Eventually, she could no longer run with me (despite my slower pace) and we went on slower adventures. One does not walk a husky; one adventures with a husky. Despite her advanced age, she remained active. After one adventure, she seemed slow and sore. She cried once in pain but then seemed to recover. Then she got worse, requiring a trip to the emergency veterinarian.

The x-rays showed no serious damage, just an indication of the wear and tear of age. She also had some unusual test results, perhaps indicating cancer. Because of her age, the main concern was with her mobility and pain. If she could get about and be happy, then that was what mattered. She was prescribed medications, and a follow-up appointment was scheduled with the regular vet. By then, she had gotten worse in some ways, and her right foot was “knuckling” over, making walking difficult. This is often a sign of nerve issues. She was prescribed steroids and had to go through a washout period before starting the new medicine. As might be imagined, neither of us got much sleep during this time.

For a while the steroids worked and she could go on slow adventures and enjoy basking in the sun while watching the birds and squirrels, willing the squirrels to fall from the tree and into her mouth.

While philosophy is often derided as useless, it was very helpful to me during this time and I decided to write about this usefulness as both a defense of philosophy and, perhaps, as something useful for others who face similar circumstances with an aging canine.

Isis’ emergency visit was focused on pain management, and one drug she was prescribed was Carprofen (more infamously known by the name Rimadyl). Carprofen is an NSAID that is supposed to be safer for canines than those designed for humans (like aspirin) and is commonly used to manage arthritis in elderly dogs. Being curious and cautious, I researched all the medications. I ran across forums that included people’s sad and often angry stories about how Carprofen killed their pets. The typical story involved what one would expect: a dog was prescribed Carprofen and then died or was found to have cancer shortly thereafter. I found such stories worrisome, as I did not want my dog to be killed by her medicine. But I also knew that without medication, she would be in terrible pain and unable to move. I wanted to make the right choice for her and knew this would require making a rational decision.

My regular vet decided to go with the steroid option, one that also has the potential for side effects and has its own horror stories on the web. Once again, it was a matter of choosing between the risks of medication and the consequences of doing without. In addition to my research into medication, I also investigated various other options for treating arthritis and pain in older dogs. She was already on glucosamine (which might not be beneficial, but seems to have no serious side effects), but the web poured forth an abundance of options ranging from acupuncture to herbal remedies. I even ran across the claim that copper bracelets could help pain in dogs. They cannot.

While some alternatives had been subject to scientific investigation, most discussions involved a mix of miracles and horror stories. One person might write glowingly about how an herbal product brought his dog back from death’s door while another might claim that the same product killed his dog. Sorting through all these claims, anecdotes and studies turned out to be a lot of work. Fortunately, I had numerous philosophical tools that helped, specifically for assessing claims of the form “I gave my dog X, then he got better (or died), so X was the cause.” Knowing about two common fallacies is very useful in these cases.

The first is what is known as Post Hoc Ergo Propter Hoc (“after this, therefore because of this”). This fallacy has the following form:

 

Premise: A occurs before B.

Conclusion: Therefore, A is the cause of B.

 

This fallacy is committed when it is concluded that one event causes another just because the alleged cause occurred before the alleged effect. More formally, the fallacy involves concluding that A causes or caused B because A occurs before B and there is not sufficient evidence to warrant such a claim.

While cause does precede effect (at least in the normal flow of time), proper causal reasoning involves sorting out whether A occurring before B is just a matter of coincidence or not. In the case of medication involving an old dog, it could be a coincidence that the dog died or was diagnosed with cancer after the medicine was administered. That is, the dog might have died anyway or might have already had cancer. Without a proper investigation, simply assuming that the medication was the cause would be an error. The same holds true for beneficial effects. For example, a dog might go lame after a walk and then recover after being given an herbal supplement. While it would be tempting to attribute the recovery to the herbs, they might have had no effect at all. After all, lameness often goes away on its own or some other factor might have been the cause.

This is not to say that such stories should be rejected out of hand, but they should be approached with due consideration that the reasoning involved is post hoc. In concrete terms, if you are afraid to give your dog medicine she was prescribed because you heard of cases in which a dog had the medicine and then died, you should investigate more (such as talking to your vet) about whether there is a risk of death. As another example, if someone praises an herbal supplement because her dog perked up after taking it, then you should see if there is evidence for this claim beyond the post hoc situation.

Fortunately, there has been considerable research into medications and treatments that provide a basis for making a rational choice. When considering such data, it is important not to be lured into rejecting data by the seductive power of the Fallacy of Anecdotal Evidence.

This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation on hasty generalization. It has the following forms:

 

Form One

Premise: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.

 

For example, a person might hear anecdotes about dogs that died after taking a prescribed medication and infer that the medicine is likely to kill dogs.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2:  Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

For example, the statistical evidence that glucosamine-chondroitin can treat arthritis is, at best, weak. But a person might tell a story about how their aging husky “was like a new dog” after she started taking the supplement. To accept this as proof that the data is wrong would be to fall for this fallacy. That said, I did give my husky glucosamine-chondroitin because it is affordable, has no serious side effects, and might have some benefit. I am fully aware of the data and do not reject it; I simply gambled that it might do her some good.

The way to avoid becoming a victim of anecdotal evidence is to seek reliable, objective statistical data about the matter in question (a credible vet would be a good source). This can be a challenge when it comes to treatments for pets. In many cases, there are no adequate studies or trials providing statistical data, and only anecdotal evidence is available. One option is, of course, to investigate the anecdotes and try to do your own rough statistics. So, if most anecdotes indicate something is harmful (or beneficial), this would be weak evidence for the claim. In any case, it is wise to approach anecdotes with due care, as a story is not proof.

The United States has settled into a post-shooting ritual. When a horrific shooting makes the news, many people offer some version of this prayer: “Oh God, let the shooter be one of them and not one of us.” Then people speculate about the identity of the shooter. In most cases the next step is that the Republicans offer thoughts and prayers while the Democrats talk about wanting to pass new gun control laws, if only they could win more elections. The final step is forgetting about that shooting when the next one occurs. My focus in this essay is on the speculation phase.

One of the most recent shootings is the attack on a Mormon church in Michigan, which resulted in four people dying in the church and the attacker being killed by the police. As soon as the attack made the news, speculation began on the identity and motives of the shooter. Laura Loomer seemed to claim that the shooter was a Muslim acting as part of a broader plan, while Donald J. Trump asserted that it appeared to be a targeted attack on Christians. And, of course, social media was awash with speculation. As this is being written, the suspect has been identified as 40-year-old Thomas Jacob Sanford. He is believed to be a military veteran, and there is some evidence he held anti-Mormon views. There is currently no evidence that he was Muslim. The investigation is ongoing, but the speculation continues.

In terms of why people speculate so quickly and without much (if any) evidence, there are various psychological reasons. I will leave those to the psychologists. There are also some practical reasons that connect to critical thinking, so I will briefly discuss those.

One practical reason to speculate immediately and even claim to know the identity and motives of the shooter is to generate clicks and hence income. One recent example of this is when 77-year-old Michael Mallinson, a retired banker living in Toronto, was falsely claimed to be Charlie Kirk’s killer by an account pretending to be a Fox News outlet. Whoever was behind it also claimed he was a registered Democrat, which suggests they had some understanding of their targets. This example, and others like it, shows the importance of confirming that a source is credible before accepting a claim. While one outlet might scoop a story, a credible story will soon be run by other outlets as well, so it is also wise to see if a claim is confirmed by multiple credible sources. There is, of course, the obvious problem that there has been a longstanding war against credible media outlets and that we are awash in misinformation and disinformation.

While people can speculate in good faith (believing what they claim), there can be bad faith speculation intended to get an ideological narrative out there as soon as possible. This is because what is claimed first can often establish itself as plausible and then resist efforts to debunk it if it turns out to be false.

Such false claims also provide others with “evidence” that they can use later when making their own false claims. For example, I regularly see people posting the false claim that many mass shooters are trans people, despite this being obviously untrue. As “evidence” people often post images of other posts making a false claim about a shooter’s identity. In some cases, people are acting in a form of good faith: they are being duped and wrongly think they are making true claims. For people who want to believe true things, a wise approach is to confirm whether a claim is true by seeking out multiple credible sources. But there is the obvious problem that people are often locked into ideological bubbles and what they see as credible sources are heavily biased or even dedicated spreaders of disinformation. There are also those who act in bad faith, posting claims about the identity and motives of shooters they know are false and using other untruths as “evidence” in order to advance their agenda, even if that is just to troll and trigger.

It is, of course, tempting to speculate about the identity and motives of shooters. While it might seem reasonable to draw inferences from such things as the target of the shooting, such speculation is still just speculation. For example, Trump speculated that the shooting might have been a targeted attack on Christians because the shooter attacked a church. As noted above, there is now some evidence that Trump was somewhat right: the attack might have been motivated by the shooter’s alleged dislike of Mormons. As this is being written, the religious beliefs of the shooter are unknown, but the United States does have a history of Christian anti-Mormonism. When Mitt Romney was running for President, I (an Episcopalian) had to argue that Mormons are Christians. As such, any inferences about the shooter’s religious beliefs would be drawn from very thin evidence. The shooter could be a Christian who detested Mormons; but this is just speculation.

From a critical thinking and moral standpoint, the rational and ethical thing to do is to not speculate about a shooter’s identity and motives in public (such as posting on social media). Leave the investigation to the professionals and wait for adequate evidence to become available. This applies whether one is a pundit, a president or just a random person like me. People do, of course, have the right to speculate but rights should always be exercised with prudence and moral restraint.