The problem of the external world is a classic topic in epistemology (the theory of knowledge). The challenge, first posed by ancient skeptics, is to prove that what I seem to be experiencing is real. For example, it would require proving that the computer I seem to be typing this on exists outside of my mind.

Some early skeptics created the problem by noting that what seems real could be a dream. Descartes added a new element by considering that an evil demon might be causing him to have experiences of a world that does not exist. While the evil demon was said to be devoted to deception, little is said about its motive. After Descartes there was a move from supernatural to technological deceivers: the classic brain-in-a-vat scenarios that are precursors to the more recent notion of virtual reality. In these philosophical scenarios little is said about the motivation or purpose of the deceit, beyond the desire to epistemically mess with someone. Movies and TV shows sometimes explore motives for deceit. The Matrix trilogy, for example, presents a backstory for the Matrix. While considering the motivation behind the alleged deceit might not bear on the epistemic problem, it is an interesting subject.

One way to discern a possible motivation for the deceit is to consider the nature of the experienced world. As various philosophers, such as David Hume, have laid out in their formulations of the problem of evil (the challenge of reconciling God’s perfection with evil), the world is an awful place. As Hume has noted, it is infested with disease, suffused with suffering, and awash in annoying things. While there are some positive things, there is an overabundance of bad, thus indicating that whatever lies behind appearances is either not benign or not very competent. This, of course, assumes some purpose behind the deceit. But perhaps there is deceit without a deceiver and there is no malice. This would make the unreal like what atheists claim about the allegedly real: it is purposeless. However, deceit (like design) seems to suggest an intentional agent, and this implies a purpose. This purpose, if there is one, must be consistent with the apparent awfulness of the world.

One approach is to follow Descartes and go with a malicious supernatural deceiver. This being might be acting from malice to inflict both deceit and suffering. Or it might be acting as an agent of punishment for my past transgressions. The supernatural hypothesis does have some problems, one being that it involves postulating a supernatural entity. Following Occam’s Razor, if I do not need to postulate a supernatural being, then I should not do so.

Another possibility is that I am in a technological unreal world. In terms of motives consistent with the nature of the world, there are numerous alternatives. One is punishment for some crime or transgression. A problem with this hypothesis is that I have no recollection of a crime or indication that I am serving a sentence. But it is easy to imagine a system of justice that does not inform prisoners of their crimes during the punishment and that someday I will awaken in the real world, having served my virtual time. It is also easy to imagine that this is merely a system of torment, not a system of punishment. There could be endless speculation about the motives behind such torment. For example, it could be an act of revenge or simple madness. Or even a complete accident. There could be other people here with me; but I have no way of solving the problem of other minds, no way of knowing if those I encounter are fellow prisoners or mere empty constructs. This ignorance does seem to ground a moral approach: since they could be fellow prisoners, I should treat them as such.

A second possibility is that the world is an experiment or simulation of an awful world, and I am a construct within this world. Perhaps those conducting it have no idea the inhabitants are suffering, perhaps they do not care. Or perhaps the suffering is the experiment. I might even be a researcher, trapped in my own experiment. Given how scientists in the allegedly real world have treated subjects, the idea that this is a simulation of suffering has considerable appeal.

A third possibility is that the world is a game or educational system of some sort. Perhaps I am playing a very lame game of Assessment & Income Tax; perhaps I am in a simulation learning to develop character in the face of an awful world; or perhaps I am just part of the game someone else is playing. All of these are consistent with how the world seems to be.

It is also worth considering the possibility of solipsism: that I am the only being that exists. It could be countered that if I were creating the world, it would be much better for me and far more awesome. After all, I write adventures for games and can imagine a far more enjoyable world. The easy and obvious counter is to point out that when I dream (or, more accurately, have nightmares), I experience unpleasant things on a regular basis and have little control. Since my dreams presumably come from me and are often awful, it makes perfect sense that if the world came from me, it would be comparable in its awfulness. The waking world would be more vivid and consistent because I am awake, the dream world less so because of mental fatigue. In this case, I am my own demon.

The classic problem of the external world presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real for real? Early skeptics claimed that what seems real might be a dream. Descartes upgraded the problem with his evil demon, which used its powers to befuddle its victim. As technology progressed, philosophers presented the brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem was made famous by Elon Musk: we are characters in a video game. This is a variation of the idea that this apparent reality is just a simulation. There is a strong inductive argument for the claim that this is a virtual world.

One stock argument for the simulation hypothesis uses the form of a statistical syllogism. It is statistical because it reasons from a proportion; it is a syllogism because it has two premises and one conclusion. Generically, a statistical syllogism looks like this:

 

Premise 1: X% of As are Bs.

Premise 2: This is an A.

Conclusion: This is a B.

 

The strength of this argument depends on the percentage of As that are Bs. The higher the percentage, the stronger the argument. This makes sense: the more As that are Bs, the more reasonable it is that a specific A is a B. Now, to the simulation argument.

 

Premise 1: Most worlds are simulated worlds.

Premise 2: This is a world.

Conclusion: This is a simulated world.

 

While “most” is vague, the argument is such that if its premises are true, then the conclusion is more likely to be true than not. Before embracing your virtuality, it is worth considering a similar argument:

 

Premise 1: Most organisms are bacteria.

Premise 2: You are an organism.

Conclusion: You are a bacterium.

 

Like the previous argument, the truth of the premises makes the conclusion more likely to be true than false. However, you are not a bacterium. This does not show that the argument itself is flawed: the reasoning is good, and any randomly selected organism would most likely be a bacterium. Rather, it indicates that when considering the truth of a conclusion, one must consider the total evidence. That is, information about the specific A must be considered when deciding whether it is a B. In the bacteria example, there are facts about you that count against the claim that you are a bacterium, such as the fact that you are a multicellular organism.
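The total-evidence point can be sketched numerically with Bayes’ theorem. This is my own illustration, not part of the original argument, and every probability below is a made-up assumption chosen only to show how a high base rate can be swamped by specific evidence:

```python
# A hedged sketch of the total-evidence point using Bayes' theorem.
# All numbers are illustrative assumptions, not real biology statistics.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return (p_evidence_given_h * prior) / p_e

# Base rate: a randomly chosen organism is very likely a bacterium.
prior_bacterium = 0.99

# Specific evidence: "you are multicellular." Bacteria are unicellular,
# so this evidence is extremely unlikely if you are a bacterium.
p_multicellular_given_bacterium = 0.0001
p_multicellular_given_not_bacterium = 0.9

p = posterior(prior_bacterium, p_multicellular_given_bacterium,
              p_multicellular_given_not_bacterium)
print(round(p, 4))  # tiny: the base rate is swamped by the specific evidence
```

The same structure applies to the simulation argument: even if most worlds were simulations, specific evidence about this world would still have to be weighed.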

Turning back to the simulation argument, the same consideration applies. If it is true that most worlds are simulations, then any random world is more likely to be a simulation than not. However, the claim that this specific world is a simulation would require consideration of the total evidence: what evidence is there that this world is a simulation? This reverses the usual challenge of proving that the world is real by requiring evidence it is not real. At this point, there is little evidence that this is a simulation. Using the usual fiction examples, we do not seem to find glitches that would be best explained as programming bugs, we do not seem to encounter outsiders from reality, and we do not run into some sort of exit system (like the Star Trek holodeck). That said, all this is still consistent with the world being a simulation: it might be well programmed, the outsider might never be spotted (or never go into the system) and there might be no way out. At this point, the most reasonable position is that the simulation claim is at best on par with the claim that the world is real since all evidence is consistent with both views. There is, however, still the matter of the truth of the premises in the simulation argument.

The second premise seems true: whatever this is, it seems to be a world. As such, the first premise is the key. While the logic of the argument is good, if the first premise is not plausible then the argument is not good overall.

The first premise is usually supported by a now standard argument. The reasoning includes the claims that the real universe contains large numbers of civilizations, that many of these civilizations are advanced and that enough of these advanced civilizations create incredibly complex simulations of worlds. Alternatively, it could be claimed that there are only a few (or just one) advanced civilizations but that they create vast numbers of complex simulated worlds.

The easy and obvious problem with this sort of reasoning is that it involves making claims about an external real world to try to prove that this world is not real. If this world is claimed to not be real, there is no reason to think that what seems true of this world (that we are developing simulations) would be true of the real world (that they developed super simulations, one of which is our world). Drawing inferences from what we think is a simulation to a greater reality would be like the intelligent inhabitants of a Pac-Man world trying to draw inferences from their game to our world.

There is also the fact that it is simpler to accept that this world is real rather than making claims about a real world beyond this one. After all, the simulation hypothesis requires accepting a real world on top of our simulated world. Why not just have this be the real world?

While I was required to take Epistemology in graduate school, I was not interested in the study of knowledge until I started teaching it. While remaining professionally neutral in the classroom, I now include a section on the ethics of belief in my epistemology class and discuss, in general terms, such things as tribal epistemology. Outside of the classroom I am free to discuss my own views on epistemology in the context of politics, and it is a fascinating subject. My younger self from graduate school would be surprised at the words “epistemology” and “fascinating” used together.

While COVID-19 was a nightmare for the world, the professed beliefs of Trump supporters about the pandemic provide an excellent case study in belief. As anyone familiar with these beliefs knows, they form a strange set of inconsistent and even contradictory claims. I am not claiming that every Trump supporter believes all these claims and I am not claiming that only Trump supporters believe them; but these are all claims professed by those who support Trump.

At the start of the pandemic Trump placed the blame on China and referred to it as “the China virus.” His supporters generally accepted this view. The role of China varies depending on which explanation is offered. Some make the true claim that it originated in China. Others make the unsupported claim that it escaped (or was released intentionally) from a lab. On this view, the virus is generally presented as something bad. After all, it makes no sense to blame China unless the virus is a real problem.

There are also other conspiracy theories about the pandemic. One infamous theory is that the pandemic was real but caused by 5G. This would be inconsistent with the China virus theory; but one could preserve the China link by claiming that 5G technology is made in China.

Trump also advanced the idea that the pandemic did not exist, that it was a hoax. This was echoed by his supporters—although some also advanced the theory that the Democrats infected Trump with the virus. The hoax idea was presented in various ways. For example, on some accounts the virus does exist but is no worse than the flu. This view led to an active anti-mask movement and death threats against public health experts. The anti-mask views make sense if one thinks the virus was a hoax but make less sense if one thinks that the virus was bad enough to warrant making China pay. If it was a hoax perpetrated by the Democrats, then it makes no sense to hold China accountable. And if the virus did real damage and China should pay, then it makes no sense to claim it is a hoax. To be fair, these could be combined into the claim that China and the Democrats ran a worldwide hoax with the cooperation of all governments to harm Trump. Reconciling the 5G theory with the hoax theory would be challenging: if 5G was the cause of the pandemic, then it was not a hoax. And if it was a hoax, there was no pandemic for 5G to cause.

While Trump supporters profess to believe the pandemic was a hoax, over 80% of Republicans claimed to believe that Trump has done a great job with the pandemic. His supporters claimed that he took rapid action (he did not) and that his response was very effective (it was not). Trump has also attempted to take credit for the forthcoming vaccines and has claimed, without evidence, that the FDA and Democrats stalled the vaccines. If the pandemic was a hoax, then it makes no sense to claim that Trump acted rapidly and effectively to counter it, for there would be no pandemic to counter. It could be claimed that Trump acted to counter the hoax, but this would be hard to reconcile with Trump’s claims about the vaccine. If the pandemic was a hoax, then there was no need for a vaccine, and taking credit for a useless vaccine would be silly. A Trump supporter could take the view that the pandemic was no worse than the flu and then credit Trump with addressing something no worse than the flu and developing the equivalent of a flu vaccine. But to the degree that Trump downplayed (lied about) the pandemic, this would undercut claims about how significant his alleged success was.

As I noted earlier, I am not claiming that every Trump supporter believes all these claims. For example, the 5G pandemic theory was not universally embraced by Trump supporters (and is surely held by some who do not support him). However, Trump supporters generally seem to profess belief in many of these claims even though they are not consistent, and some would seem to lead to contradictions.

In logic, two claims are inconsistent when both could be false, but both cannot be true. To use my usual example, the claim that my water bottle contains only vodka and the claim that it contains only water are inconsistent with each other. If the bottle contains only vodka, then it does not contain only water and vice versa. But both could be false: the bottle could be empty. Or it could contain tequila. Many of the claims Trump supporters profess to believe about the pandemic seem inconsistent. For example, the claim that the pandemic was caused by 5G is not consistent with the claim that it is a hoax.

In logic, two claims contradict one another when one of them must be false and the other must be true. A contradiction is a claim that must be false and is false because of its logical structure. The stock example in logic is the conjunction P & -P. Since a conjunction is true when both of its conjuncts are true and false otherwise, this claim is always false, at least on the assumption that any claim is true or false (but not both). So, if P is true, then -P must be false (and vice versa). Some of the claims Trump supporters profess to believe would seem to entail contradictory claims. For example, if it is claimed that the pandemic was caused by 5G, then this would entail that the pandemic is not a hoax, which would contradict the claim that it is a hoax. Naturally, one could argue that the pandemic was caused by 5G and is also a hoax, provided that the nature of the hoax is defined in a way that allows it to be caused by 5G. As another example, the conspiracy theory that the pandemic was caused by a bioweapon released (intentionally or not) by China (or someone else) would entail that it was not a hoax. This would contradict the claim that it is a hoax. Again, one could try to craft the hoax claims so that the pandemic is both a hoax and caused by a bioweapon. Claiming that it is a hoax about a bioweapon would not do this, since a hoax about a bioweapon is not a bioweapon; it is just a hoax.
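Whether a set of claims can all be true is something that can be checked mechanically. The sketch below is my own illustration (the encodings of the pandemic claims are hypothetical simplifications): it brute-forces every truth assignment, showing that P & -P is false under all of them and that the 5G claim, the hoax claim, and the entailment between them cannot be jointly true:

```python
# Brute-force satisfiability check over truth assignments.
from itertools import product

def satisfiable(claims, n_vars):
    """Return True if some truth assignment makes every claim true."""
    return any(all(c(*vals) for c in claims)
               for vals in product([True, False], repeat=n_vars))

# P & -P: a contradiction is false under every assignment.
contradiction = [lambda p: p and not p]
print(satisfiable(contradiction, 1))  # False

# Variables: f = "the pandemic was caused by 5G", h = "the pandemic is a hoax".
claims = [
    lambda f, h: f,                    # caused by 5G
    lambda f, h: h,                    # it is a hoax
    lambda f, h: (not f) or (not h),   # caused by 5G entails not a hoax
]
print(satisfiable(claims, 2))  # False: the set cannot all be true
```

Dropping any one of the three claims restores satisfiability, which mirrors the point in the text: the claims can be rescued individually, just not together.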

From the standpoint of truth-functional logic (a logic in which the truth of a claim depends on the truth of the parts), the claims made by Trump supporters about the pandemic cannot all be true. In science-fiction, a robot or computer that attempted to accept all these claims would suffer some sci-fi logic failure, perhaps exploding. In reality, mapping out the logical relations between these claims would show that they cannot all be true and there would be no explosions (one hopes). But there is the interesting question of how people can hold to beliefs that cannot all be true and some of which lead to contradictions.

In philosophy, epistemologists (and others) often speak of beliefs as having intentionality. That is, beliefs have aboutness. When a person believes something about their world, they take their belief to correspond to reality. But while a belief has aboutness, it need not be about reality. As an example, if Ted believes in unicorns, his belief is about unicorns (although philosophers disagree about beliefs about things that are not real) but not about real unicorns, because there are no unicorns. People can also believe that all the claims in a set are true, even though it is not possible for them all to be true. That is, the set contains beliefs that are inconsistent with each other (or even contradictory). A person can even believe that a contradiction is true. Unlike in truth-functional logic, the truth of the claim “Person A believes claim C” does not depend on the truth of the parts, only on the truth of the claim about A believing C. A crude way to look at the matter is to see belief as like a Word file in which one can type any sentence, rather than like a computer program or circuit design that would fail if it contained logical inconsistencies or contradictions. So, saying that a person believes something is like saying it is in their Word file. Humans are clearly able to believe sets of inconsistent claims and even act on those beliefs, which raises many interesting questions about belief formation and how belief impacts actions. As a closing point, people can certainly reconcile apparently inconsistent beliefs by not really believing some or all of them: professing that a claim is true while believing it is not. That is, by lying.

“I believe in God, and there are things that I believe that I know are crazy. I know they’re not true.”

Stephen Colbert

 

While Stephen Colbert ended up as a successful comedian, he originally planned to major in philosophy. His past occasionally returns to haunt him with digressions from comedy to philosophy. Detractors might claim that philosophy is comedy without humor; but that is law. Colbert has an odd epistemology: he regularly claims he believes in things he knows are not true, such as guardian angels. While it would be easy enough to dismiss this claim as purely comedic, it does raise interesting philosophical issues. The main and most obvious issue is whether a person can believe in something they know is not true.

While a thorough examination of this issue would require a deep examination of the concepts of belief, truth and knowledge, I will take a shortcut and go with intuitively plausible stock accounts of these concepts. To believe something is to hold that it is true. A belief is true, in the commonsense view, when it gets reality right. This is the often maligned correspondence theory of truth. A simple account of knowledge in philosophy is that a person knows that P when the person believes P, P is true, and the belief in P is properly justified. The justified true belief account of knowledge has been savagely bloodied by countless attacks but shall suffice for this discussion.

Given this basic analysis, it would seem impossible for a person to believe in something they know is not true. This would require that the person believes something is true when they also believe it is false. To use the example of God, a person would need to believe that it is true that God exists and false that God exists. This would seem to commit the person to believing that a contradiction is true, which is problematic because a contradiction is always false.

One possible response is to point out that the human mind is not beholden to the rules of logic. While a contradiction cannot be true, there are many ways a person can hold contradictory beliefs. One possibility is that the person does not realize that the beliefs contradict one another and hence they can hold both.  This might be due to an ability to compartmentalize the beliefs, so they are never in the consciousness at the same time or due to a failure to recognize the contradiction. Another possibility is that the person does not grasp the notion of contradiction and hence does not realize that they cannot logically accept the truth of two beliefs that are contradictory.

While these responses do have considerable appeal, they do not appear to work in cases in which the person claims, as Colbert does, that they believe something they know is not true. After all, making this claim requires considering both beliefs in the same context and, if the claim of knowledge is taken seriously, that the person is aware that the rejection of the belief is justified sufficiently to qualify as knowledge. As such, when a person claims that they believe something they know is not true, that person would seem either not to be telling the truth or to be ignorant of what the words mean. Or perhaps there are other alternatives.

One possibility is to consider the power of cognitive dissonance management. A person could know that a cherished belief is not true, yet refuse to reject the belief while being fully aware that this is a problem.

Another possibility is to consider that the term “knowledge” is not being used in the strict philosophical sense of a justified true belief. Rather, it could be taken to refer to strongly believing that something is true, even when it is not. For example, a person might say “I know I turned off the stove” when, in fact, they did not. As another example, a person might say “I knew she loved me, but I was wrong.” What they mean is that they really believed she loved them, but that belief was false.

Using this weaker account of knowledge, a person can believe in something that they know is not true. This just involves believing in something that one also strongly believes is not true. In some cases, this is quite rational. For example, when I roll a twenty-sided die when playing D&D, I strongly believe that I will not roll a 20. However, I do also believe that I will roll a 20, and my belief has a 5% chance of being true. As such, I can believe what I know is not true, assuming that this means that I can believe in something that I believe is less likely than another belief.
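The d20 arithmetic here is easy to check directly; the short sketch below (my own illustration, using standard dice probabilities) also simulates rolls to show that “not a 20” is the strongly supported belief:

```python
# Checking the d20 probability and simulating rolls.
import random

p_twenty = 1 / 20
print(p_twenty)  # 0.05, i.e. a 5% chance of rolling a 20

# Simulated rolls: the observed frequency of 20s hovers near 5%,
# while "not a 20" comes up the other ~95% of the time.
random.seed(0)
rolls = [random.randint(1, 20) for _ in range(100_000)]
freq = rolls.count(20) / len(rolls)
print(round(freq, 2))  # close to 0.05
```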

People are also strongly influenced by emotional and other factors that are not based on a rational assessment. For example, a gambler might know that their odds of winning are extremely low and thus know they will lose (that is, have a strongly supported belief that they will lose) yet also strongly believe they will win (that is, feel strongly about a weakly supported belief). Likewise, a person could accept that the weight of the evidence is against the existence of God and thus know that God does not exist (that is, have a strongly supported belief that God does not exist) while also believing strongly that God does exist (that is, having considerable faith that is not based in evidence).

As this is being written, the story of the stalled escalator is making international news. The gist of the tale is that an escalator at the United Nations building came to a sudden stop just as Trump and the First Lady began their journey upwards. The UN claims that a White House videographer accidentally tripped a safety system, stopping the mechanism. Aside from Trump and Melania getting in some unexpected cardio, nothing happened. While this event might seem utterly insignificant, it provides an excellent and absurd example of the state of American politics.

Some on the right rushed to present a narrative of a sinister plot against Trump, suggesting that it was a deliberate attempt to harm Trump or perhaps even set him up for an assassination attempt. While Trump initially seemed to laugh off the escalator incident, he is now calling for arrests in the wake of what some in the media are calling “escalatorgate.” Fox News personality Jesse Watters jokingly (one hopes) suggested blowing up or gassing the U.N. in retaliation. While all this might strike rational people as nonsense, it is philosophically interesting in terms of critical thinking, epistemology and ethics. In this essay I’ll briefly look at some of these aspects.

In causal explanations it is usually wisest to follow the popular conception of Occam’s Razor and go with the simplest explanation. In the case of the escalator, the simplest explanation is the stated one: someone tripped a safety mechanism. If someone intended to harm the President, rigging an escalator would be both needlessly complicated and extremely unlikely to cause any meaningful harm. Times being what they are, I am obligated to state unequivocally that I condemn any efforts to harm the President or anyone else with escalator sabotage. But there are reasons why someone might claim something sinister occurred and other reasons why someone might believe it. I make this distinction because people can obviously make claims they do not believe.

While there are various psychological reasons why the claim might be made, there are some “good” practical reasons to claim a sinister plot. One is to create a distraction that will take attention from other topics, such as economic woes and the Epstein files. Trump and his allies have turned this into an international story, and I have been drawn in to do my part. However, my point is that this should not be an important story. The second is to energize the base with an “example” of how “they” are out to get Trump. The third is that it provides a pretense for Trump to go after the U.N. But why would anyone believe that there is something sinister going on?

We humans tend to attribute human motivations or intentions to objects or natural phenomena, and this gives rise to what we philosophers call the anthropomorphic fallacy. While Trump and his supporters are not making this mistake about the escalator, they could be committing a similar error: they are inferring without adequate evidence that an accidental event was caused by sinister intentions. This “reasoning” involves rejecting the accident explanation in favor of the sinister intention explanation based on psychological factors rather than evidence. That is, Trump and his supporters probably feel that there is a sinister conspiracy against him, so accidents and coincidences are explained in terms of this conspiracy because the explanation feels right. And if the conspiracy theory is questioned, the questioner is accused of being in on the conspiracy. Other accidents and coincidences are also offered as “evidence” that this specific accident or coincidence is part of the conspiracy. It might be objected that people really have tried to hurt Trump, such as occurred with the two failed assassination attempts (which I also condemn). While those do serve as evidence that those two people wanted to harm Trump, they have no relevance to the escalator incident, and evidence in support of the escalator conspiracy in particular would be needed.

Another reason why some people might believe this is based in the claim about the right that “every accusation is a confession.” While there are various ways to explain this, a plausible one in some cases is the false consensus effect cognitive bias. This occurs when people assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population. People who might themselves think of sabotaging an escalator to harm someone they dislike would be inclined to believe other people think like them, just as a liar would tend to think other people are also dishonest. Times being what they are, I must clarify that I condemn using escalators to harm people and I am not accusing anyone on the right of planning to do this. This is but a hypothesis about why some people might believe the escalator was sabotaged. Lastly, I’ll take a brief look at an ethical issue of free expression.

As noted above, Jesse Watters joked about bombing the U.N. in retaliation for the escalator. As I am a consistent advocate of free expression, I believe he has the moral right to say this although it would be morally acceptable for him to face any relevant proportional moral consequences. Times being what they are, I must be clear that I do not condone any attempts to harm Watters or even firing him over this. But his remarks are another example of the apparent moral inconsistency of the right, with Brian Kilmeade’s assertion that we should consider executing mentally ill homeless people being the most extreme example to date. Kilmeade had to apologize but faced no meaningful consequences.

After the brutal murder of Charlie Kirk, many on the right rushed to punish those who spoke ill of Kirk, with Watters himself calling for Matthew Dowd to be fired. There was also the suspension of Jimmy Kimmel after alleged intimidation by Trump’s FCC. Less famous people have also been fired, with Vice President Vance urging people to report criticism of Kirk to get these critics fired. This is but one of many examples showing that folks on the right either do not believe in free expression or define the right of free expression as only allowing what they want to express and hear. While this is moral inconsistency, it can be an effective strategy since it allows them the pretense of ethics without the inconvenience of being ethical.

 

During the last pandemic, Americans who chose to forgo vaccination were hard hit by COVID. In response, some self-medicated with ivermectin. While this drug is best known as a horse de-wormer, it is also used to treat humans for a variety of conditions, and many medications are used to treat conditions they were not originally intended to treat. Viagra is a famous example of this. As such, the idea of re-purposing a medication is not itself foolish. But there are obvious problems with taking ivermectin to treat COVID. The most obvious one is that there is not a good reason to believe that the drug is effective; people would be better off seeking established treatment. Another problem is the matter of dosing, as the drug can have serious side-effects even at the correct dosage. Since I am not a medical doctor, my main concern is not with the medical aspects of the drug, but with epistemology. That is, I am interested in why people believed they should take the drug when there is no credible evidence it would work. Though the analysis will focus on ivermectin, the same mechanisms work broadly in belief formation.

Those who were most likely to use the drug were people in areas hit hard by COVID and subject to anti-vaccine and anti-mask messages from politicians and pundits. These two factors are related: when people do not get vaccinated and do not take precautions against infection, then they are more likely to get infected. This is why there was such a clear correlation between COVID infection rates and the level of Trump support in an area. Republican political thought embraces authoritarianism and rejects expertise. Conservatives also want to “own the libs” by rejecting their beliefs and making liberals mad. Many liberals wanted people to get vaccinated and wear masks, so “owning the libs” put a person at greater risk for COVID. Once a person got infected, they needed treatment. But why did they choose ivermectin over proven methods? This seems to be the result of how the right’s base forms their beliefs.

The right’s base seems especially vulnerable to grifters and thus inclined to believe what grifters tell them. This is not because they are less intelligent or less capable than liberals; rather it seems to result from two main factors. The first is that the American right tends to be more authoritarian and thus more inclined to believe when an authority figure tells them to believe. The second is that the American right has long waged war on critical thinking and expertise. Hence people on the right are less inclined to use critical thinking tools effectively in certain contexts and are likely to dismiss experts who they do not regard as trusted authority figures.

While ivermectin was studied scientifically, there is currently no evidence that it can effectively treat COVID. But a small and growing industry arose for providing people with unproven or discredited treatments for COVID. While some might be well-intentioned, much of it is grifting at the expense of those who have been systematically misled. As such, people believe ivermectin can help them because authority figures have told them they should believe it. But, of course, there is the question of why ivermectin was chosen.

One likely reason is that ivermectin has been shown to impede the replication of the virus. Someone who is misled by wishful thinking would probably not consider the matter further; but it is important to note that this test was conducted in the laboratory using high concentrations of the drug that probably exceeded what a human could safely use. To use an analogy, this is like saying that fire is effective in killing the virus. While this is true, it does not make it an effective treatment in humans. As such, there is a bit of truth to the claim that ivermectin can affect the virus. For some reason, certain people seem to consistently reason poorly in such contexts; I am inclined to chalk this up to wishful thinking.

There is also the fact that a single, unpublished paper influenced some countries to include the drug in their treatment guidelines. However, this paper was never published because the method used to gather the data is both irregular and unreliable. The company that gathered the data, Surgisphere, was already notorious for its role in scandals involving hydroxychloroquine studies. People tend to believe the first thing they hear about something, especially if they want it to be true, hence this discredited paper held considerable influence. This is like the case of those who think vaccines are linked to autism and still believe a long discredited study by a discredited doctor.

One might attempt to respond to this by arguing that there are other papers showing the effectiveness of ivermectin. While this would be a reasonable response if these papers were based on good data, they are not. As has been shown, they suffer from serious errors. But, once again, this does not seem to matter. People such as Preston Smiles, Sidney Powell and Joe Rogan promoted the drug and, of course, Fox News personalities praised it. It was hydroxychloroquine 2.0. This takes us back to the appeal to authoritarianism fallacy: people believed because authority figures told them to believe. There is also a fallacious appeal to authority in effect. For example, Joe Rogan is a talk show host and not a doctor; yet people believe him because he is a celebrity.

People might also be motivated to accept the “evidence” of bad data and poor methods because doing so can feel rebellious. By rejecting the methodology of the experts, they can see themselves as making up their own minds…by accepting what politicians and celebrities tell them. There might also have been a conspiracy theory element at work: the idea that “they” do not want people to know about ivermectin (or whatever), and hence people want to believe it works.

Ivermectin became another front in the culture war. It must be said that the left contributed to the fight by mocking those who used the drug. But when it became a political battle, the base doubled down and defended it, despite a lack of evidence. That is, they professed to believe because doing so is the stance of their tribe.

There were efforts to conduct clinical trials of the drug, but these were bizarrely met with hostility and threats from ivermectin proponents. On the positive side, there will be some data available from the people self-medicating. Unfortunately, it will not be very good data because it will mostly be a collection of self-reported anecdotes. Once again, the culture war of the right hurt people. Although, as always, some profited.

From the standpoint of reliably forming true beliefs, this approach is the opposite of the one a person should take. Believing medical claims based on political authorities, grifters and celebrities is not a reliable way to have true beliefs. Accepting flawed studies as evidence is, by definition, a bad idea from the standpoint of believing true things. But these belief-forming mechanisms do have advantages.

Politicians, celebrities, and grifters obviously benefit from their base forming beliefs this way. Those who form the beliefs also get something out of it; they can feel the pleasure of expressing their loyalty, the reassurance of wishful thinking, the warm glow of unity with their tribe, and the hot fire of angering the other tribe. And in the end, isn’t that all that really matters to some people?

As a runner, I have often imagined what it would be like to have super speed like the Flash or Quicksilver. Unfortunately for my super speed dreams, Kyle Hill has presented the fatal flaws of super speed. But while Hill did consider the problem of perception, he seems to have missed one practical problem with being a super speedster and that is how mentally exhausting (and boring) running at super speed could be. Kant can help explain this problem.

Our good dead friend Kant argued that time is not a thing that exists in the world; rather, it is a form in which objects appear to us. It is, for him, the “form of inner sense” because our mental events must occur in temporal sequence. Or, rather, must occur to us in that way. He does bring up a very interesting point, namely that other beings could experience time differently than humans. For example, God might experience all time simultaneously. If God does this, it can account for both omniscience and free will: God knows what you will do because from his perspective you done did it, are doing it, and will do it. Other beings might have a similar inner sense, but with a different perceived speed. This takes us to speedsters.

While humans can operate fast moving vehicles like jets and rockets using our merely human perceptions, a super speedster would need to perceive the world and make decisions at super speed. Consider a simple comparison. With adequate training, I could pilot a plane going 500 mph. But imagine that I could run 500 mph, but my brain operated normally. If I tried to run a winding trail in the woods, for example, I would slam into trees because my running speed would vastly exceed my ability to perceive the trail and decide when to turn. But if my mental processes were also fast, then I would be able to run “normally” on the trail: from my perspective, I would have plenty of time to make decisions and avoid collisions. My “form of inner sense” would match up with my movement speed, so I would be fine. Mostly. But there would be a problem if I wanted to use my super speed to save on travel expenses.

Suppose I wanted to visit my family in Maine. My sister’s house is about 1500 miles from my house in Florida. If I could run 500 mph, I could be there in three hours. Being an experienced marathoner, I know that running for three hours is no big deal for me and it would be well worth it to save the cost and annoyance of flying. But travelling in this way would be more complicated than just running for three hours. For people watching me and by my watch, it would be three hours of running. But remember, my mind would be significantly sped up to enable it to handle my physical speed.

To keep the math simple, suppose my normal human running speed is 10 mph. So, my super speed would be fifty times that (500 mph). Suppose that my perception and decision-making speed was equally increased. While this might seem amazing, it would entail that from my perspective the three-hour run would take 150 hours (6.25 days). Even ignoring concerns about sleep and endurance, that would be an extremely unpleasant run. After all, I would experience it as if I were running there at normal human speed (although other people and things would seem to be moving very slowly). For me, it would not be worth it to spend 150 (mental) hours running even if it saved me the price of a plane ticket. After all, I could do that now—and I do not.
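The arithmetic above can be sketched in a short Python function. The numbers are the ones assumed in the example (1500 miles, 500 mph, a 50x mental speedup); the function itself is just an illustration, not anything from Kant or comics physics:

```python
def subjective_travel_time(distance_miles, run_speed_mph, mental_speedup):
    """Return (clock hours, subjectively felt hours) for a speedster run."""
    # Objective time as measured by outside observers (and my watch).
    clock_hours = distance_miles / run_speed_mph
    # A mind sped up by `mental_speedup` experiences each clock hour
    # as that many subjective hours.
    felt_hours = clock_hours * mental_speedup
    return clock_hours, felt_hours

clock, felt = subjective_travel_time(1500, 500, 50)
print(clock)      # 3.0 clock hours
print(felt)       # 150.0 subjective hours
print(felt / 24)  # 6.25 subjective days
```

Tweaking the mental speedup (as the next paragraph suggests) just scales the felt time linearly: at 20x, the run would still feel like 60 hours.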

One could, of course, tweak the numbers a bit. Perhaps I could safely run at 500 mph while my mind operated at slower than 50 times normal speed. But it would still need to operate much faster than normal, otherwise I would keep running into things and doing a lot of damage. So, super speed would generally not be great for long distance travel.

One could, of course, do some comic book stuff and come up with workarounds to avoid the boredom problem. Perhaps a speedster would have multiple levels of awareness—a fast navigating subconscious awareness that guides them safely and a slower conscious mind to avoid the boredom. Going back to Kant, this would involve having two different forms of inner sense operating in the same mind, which is obviously not even very weird in philosophical terms. In that case, super speed would be a great way to travel.

In epistemology, the problem of other minds is the challenge of proving that other beings have thoughts and feelings analogous to my own. A practical version of the problem is how to tell when someone is honest: how do I know their words match what they believe? But the version I am concerned with here is the problem of racist minds. That is, how do I know when someone is a racist? Racism, like dishonesty, comes in degrees. Just as everyone is a bit dishonest, everyone is a bit racist. But a person should not be labeled a liar unless they are significantly dishonest. The same applies to being a racist and a person should not be labeled as a racist unless their racism is significant. There is, of course, no exact boundary line defining when a person should be considered a liar or a racist. Fortunately, we can get by with imprecise standards and accept the existence of grey areas. To demand a precise line would, of course, fall for the line drawing fallacy.

It is important to distinguish racists from people who seem racist. One reason is that an accusation of racism can have serious consequences, and such claims should not be made lightly. Another reason is that racists should be exposed for what they are. What is needed are reliable tests for sorting out racists from non-racists.

The need for a test also arises in the classic problem of other minds. Descartes proposed a language-based test to solve the problem in the context of animals. Roughly put, if something uses true language, then it has a mind and thinks. Turing created his own variation on this test, one that is more famous than Descartes’ test. In the case of testing for racism, it is assumed that people have minds and that problem is bypassed (or ignored) for practical reasons.

It might be wondered why tests are needed. After all, many assume the only true racists are the blatant racists: they burn crosses, have Swastika tattoos, and openly use racist language. While these racists are easy to spot, there are more subtle racists who work at avoiding detection. In fact, coded racism has been a strategy in the United States for decades, most famously explained by Lee Atwater:

 

You start out in 1954 by saying, “Nigger, nigger, nigger.” By 1968 you can’t say “nigger”—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.… “We want to cut this,” is much more abstract than even the busing thing, uh, and a hell of a lot more abstract than “Nigger, nigger.”

 

This illustrates the challenge of determining whether a person is racist: there are coded words and phrases used by racists that are not openly racist in their normal meaning, and they have many uses. First, they allow a racist plausible deniability: they can claim to be using the word or phrase in a non-racist manner. Second, it allows racists to recruit non-racists. People who are, for example, concerned about welfare fraud can be drawn into racism through that gateway. Third, it allows racists to signal each other while making the “normies” think critics are crazy. As an illustration, when I have tried to explain various code phrases used by racists to “normies” they often think I am either making it up or I accept a wacky woke conspiracy theory. So how does one pierce the veil and solve the problem of racist minds? Here are two useful guides.

As noted above, there are code words and phrases used by racists that have non-racist surface meanings. One example is the use of “China virus” by Trump and his fellows during the last pandemic. On the face of it, this seems non-racist: they are referencing where the virus comes from. As I have argued in earlier essays, this use of “China virus” is racist. It makes use of the well-worn racist trope of foreigners bringing disease and Trump’s followers got the message: anti-Asian violence increased dramatically. But one might say, surely there are many people who use such words and phrases without racist intent. That is true and is what gives the racists cover and an opportunity for plausible denial. If only racists used a phrase or word, it would be a dead giveaway.

So how does one know when a person is using such words and phrases in a racist manner and when they are not? One easy test is to see how they react to being informed of the racist connotation of the word or phrase. For example, if someone uses “China virus”, then one can inform them it has racist implications and is used by racists. If the person persists in using it despite being aware of its implications, then it is reasonable to conclude they are being racist. It might be objected that a non-racist might want to persist in using the term to “own the libs” or because they refuse to be “politically correct.” While this has some appeal, it can also be a strategy for concealing racism. It is, after all, reasonable to infer that a person who is dedicated to “owning the libs” in this manner is a racist.

To use an analogy, imagine someone who likes setting off fireworks in their backyard. They learn their neighbor has PTSD because she lost an arm, an eye, and friends to IEDs in Iraq, and the fireworks really bother her. If they persist in setting off the fireworks despite this knowledge, it would be reasonable to believe they are an ass. After all, a decent person would not do that, even if they believed they had the right to do so. Likewise, a person who persists in using words and phrases that are racist code in contexts where the code is racist would provide evidence they are a racist. Or an ass.

 As the Atwater quote also notes, racism is often coded into policies and their justifications.  Migration provides a good example of this sort of coding. Only the most blatant racists would openly say that they want to keep non-whites out of the United States because of white supremacy. As such, racists have adopted the approach of arguing for restrictions that focus on non-whites using justifications that are not openly racist. The stock reasons given are that migrants are coming here to commit crimes, steal jobs, steal social services and that migrants are bringing diseases.

On the face of it, these are not racist reasons: the arguments for restricting immigration use economic and safety concerns. It just happens that these restrictions target non-white migrants. So how does one distinguish between racists and non-racists who advance such arguments? After all, racists have worked hard to recruit non-racists into using their arguments and they can have considerable appeal. A sensible person would, after all, be concerned if migrants were committing crimes, stealing jobs, and spreading disease.

In most cases where the racists advance coded arguments, they are also making untrue or misleading claims.  This allows for an effective test. Using the migration example, the claims that migrants are stealing jobs, committing crimes and so on are either false or presented in a misleading manner.

If a person is a non-racist and supports, for example, restrictions on migration because they believe these claims, then proving that these claims are false would change their mind. So, if Sally supports restrictions on migration because of her concerns that migrants are doing all those terrible things she is told they do, but she learns that these claims are not true or greatly exaggerated, then her position should change. If Sally is a racist, then these are not her real reasons—so she will not change her mind and will persist in lying and exaggerating. As such, a good general test is to find cases where a person claims to believe something that is coded racism and not supported by evidence. If the person is not a racist, they should be amenable to changing their views when the reasons they profess for accepting their views are disproven.

It can be countered that people can become very invested in beliefs and double-down in the face of disproof. Might there not be cases in which a non-racist simply refuses to accept disproof about, for example, claims about migrants? This is certainly possible, but one must wonder why they would be so committed to holding to a disproven view. It makes sense for a racist to do this since their belief is based on racism. But a non-racist would be irrational to do this; although it must be admitted that people are often irrational. As such, the test would not be able to reliably distinguish between racists and people with an irrational commitment to such views.

But, going back to the fireworks analogy, this would seem to be like a person who insists they are not an ass, they just refuse to believe that their neighbor is bothered by the fireworks despite all the overwhelming evidence. This is logically possible, but the better explanation would be that they are, in fact, an ass.

In epistemology the problem of the external world is the challenge of proving that I know that entities exist other than me. Even if it is assumed that there is an external world, there remains the problem of other minds: the challenge of proving that I can know that there is at least one other being that has a mind. A common version of this problem tends to assume other beings exist, and the challenge is to prove that I can know that these other beings have (or lack) minds. Our good dead friend Descartes offered the best-known effort to solve the problem of the external world and in trying to solve this problem he also, perhaps unintentionally, attempted to solve the problem of other minds.

In his Meditations Descartes set out to create an infallible foundation of knowledge starting with his method of doubting his beliefs until he found a belief he could not doubt.  As part of this project, he hoped to solve the problem of the external world. After his doubting spree in the first Meditation, he took his belief that he thinks and the belief that he exists to both be certain and indubitable.  In trying to prove that something exists other than him, Descartes attempts to prove that God exists. And so, he attempted to solve the problem of the external world by solving a version of the problem of other minds. Proving that God exists would prove that another mind exists and that something exists other than him, thus solving a limited version of each problem.

While Descartes grinds through a plethora of proofs, his key reasoning for the purposes of this essay is his notion that the cause of a belief must contain as much reality as the belief itself. Roughly put, you can think of this reasoning as analogous to inferring that whatever charged a mobile phone battery must have at least as much power as is in the battery (assuming the battery charged from zero). Descartes based this reasoning on the principle that something cannot arise from nothing.

Roughly put, Descartes claimed that his idea of God is such that he could not be the cause of this idea—it had to be caused by something external to him. For example, Descartes notes that God is perfect and argues that he could not get the idea of a perfect being from his imperfect self. As another example, Descartes claims that God is infinite and that he (Descartes) could not create the idea of infinity from himself. From this he infers that God exists. He then goes on to argue that since God is perfect it follows that God is not a deceiver. Descartes then concludes that since God created us, we can generally trust our senses and thus we can infer that there is an external world. While this does not address the common version of the problem of other minds, it does offer a solution to the narrowest version: it does attempt to show that Descartes is not the only mind. In philosophical terms, success would refute solipsism, the philosophical view that I am the only being in existence or, alternatively, the view that I am the only mind (thinking being) that exists. While I think that Descartes’ efforts failed, his attempted solution to the problem of other minds provides a model that I will steal. My goal is modest: I am not trying to prove that other people have minds, I am just endeavoring to show that there is at least one other mind. I will do this with an aesthetic argument that was inspired by the combination of watching WandaVision and teaching Epistemology via Zoom during the last pandemic.

While teaching my Epistemology class to squares on Zoom, I mentioned WandaVision as an example and had the realization that the quality of the show could be used to argue for the existence of other minds. While Descartes argued that the cause of an idea must equal or exceed the reality of the idea, I will replace this with the principle that the cause of an idea must equal or exceed the quality of the idea in terms of creativity. As such, to show that something exists other than me, I just need to find an idea (or ideas) whose content exceeds my own creativity. That is, I need to find ideas that I could not create. This is extremely easy.

WandaVision, the show that inspired this argument, exceeds my creative abilities as I lack the skill and talent to write such a series. I can obviously say the same about many other movies, books, and stories since they are beyond my skill to create. As a writer, I am aware of the limits of my abilities and can safely draw these conclusions. I can also add other art, such as music and drawing. I know my skills at music (none) and drawing (very limited) and know that I lack the ability to create most works I have heard or seen. Since I could not create such works, there must be at least one other mind that is creating them. There might be only one other mind and it could be electing to create works of varying quality. Or there might be many other minds creating these works. This does not, of course, show that there is an external world of the sort I think exists. It could be (as Descartes considered) just me and one other being who is causing all these ideas in my mind. While I would condemn the deception, I must thank them for the high-quality work they create for me. As such, I can infer that at least one other mind exists and that I am not alone. But there are, as always, counter arguments.

One obvious counter argument is that I have an unknown talent or skill that allows me to create without being aware I am doing so. That is, I cannot consciously create things of such quality but can somehow do so without being aware I am doing it. One could point to dreams as an obvious example of how this might work: the best explanation for dreams is that their content comes from me although I (usually) lack conscious control. My reply is to point out that my dreams do not match the quality of the works I encounter in the waking world in terms of the stories. Any art I see or music I hear in my dreams is always a mere copy from when I am awake (or think I am awake) or of low quality. As such, I would seem to lack such a hidden and unknown faculty of creativity. I do agree it is not a logical impossibility that I have such a faculty, but there is no evidence for its existence beyond its ability to explain my aesthetic experiences without any other mind existing, which would be ad hoc.

A second obvious counter is to allow that something exists other than me, but this is not another mind. That is, the aesthetic experiences are created in a “mechanical” way without the sort of thinking that would be done by a mind. To use an analogy, this would be like having an AI creating content without having a mind. There are two responses to this. The first is that this would still entail that I was not the only being in existence as there would also be this creator entity. The second is that such a high degree of creativity would seem to require a mind. It would pass tests analogous to the Turing test and thus it would be reasonable to infer there is at least one other mind behind these creative works.

In closing, there are two main possibilities. The first is that I alone exist, and I have an unknown faculty of creativity that vastly exceeds my known skills and talents (and I can never consciously use these hidden abilities). The second is that at least one other being exists and is creating these works that are beyond my skill.

Trump is infamous for spewing lies and his supporters are known for believing his claims. As noted in previous essays, one of the many things that is striking about supporters professing belief in Trump’s claims is that they accept claims that are logically inconsistent or even contradictory. Two claims are inconsistent when they cannot both be true but could both be false. This is different from two claims being contradictory: if one claim contradicts another, one must be true and the other false.
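The distinction between inconsistent and contradictory claims can be made precise by enumerating truth assignments. Here is a minimal Python sketch; the example claims (built from generic propositions p and q) are my own illustrations, not claims from the essay:

```python
from itertools import product

def classify(claim_a, claim_b):
    """Classify two claims, each a function of propositions p and q,
    by checking every possible truth assignment."""
    rows = [(claim_a(p, q), claim_b(p, q))
            for p, q in product([True, False], repeat=2)]
    can_both_be_true = any(a and b for a, b in rows)
    can_both_be_false = any(not a and not b for a, b in rows)
    if not can_both_be_true and not can_both_be_false:
        return "contradictory"  # exactly one of the pair must be true
    if not can_both_be_true:
        return "inconsistent"   # never both true, but both can be false
    return "consistent"

# "p" vs "not p": one must be true and the other false.
print(classify(lambda p, q: p, lambda p, q: not p))              # contradictory
# "p and q" vs "p and not q": never both true, both false when p is false.
print(classify(lambda p, q: p and q, lambda p, q: p and not q))  # inconsistent
```

So “COVID is a hoax” and “COVID is a Chinese bioweapon” pattern like the second case: they cannot both be true, but both could be false.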

The last pandemic provides a horrific example of the ability of Trump supporters to profess belief in inconsistent claims. Many Trump supporters claimed to believe that COVID-19 was a hoax, that it was no worse than the flu, that it was a Chinese bioweapon, that Trump did a great job with the pandemic and that Trump should get credit for the vaccine. When Bob Woodward released tapes proving that Trump acknowledged the danger of the virus in February, many Trump supporters accepted Trump’s claim that he wanted to play down the virus to avoid a panic. His supporters defended him, claiming that great leaders have lied, and should lie, to prevent panic in the face of terrible danger. If Trump was right to lie to play down the deadly danger of the virus, then this is inconsistent with the claim that it is like the flu and inconsistent with the claim that it is a hoax. If he was right to lie because of the danger, then it is not like the flu nor is it a hoax. But if it is like the flu or a hoax, then he would not need to lie about the danger. One way to explain Trump supporters professing inconsistent beliefs is that some of them are accomplices. Another is that they are victims. I will begin with the accomplice explanation.

It is possible, even likely, that some of Trump’s supporters are aware when he is lying and perhaps even recognize when they make inconsistent claims. In this case, the inconsistency can easily be explained: they are accomplices to his lies and are repeating them. There is no inconsistency in their beliefs because they do not believe what they are claiming. There are various reasons for people to serve as his accomplices. They might want to express their allegiance to him, they might find his lies advantageous in their own grifts, they might be trolls, or they might gain some other advantage by professing belief in his lies. Not believing inconsistent claims does not make the claims consistent; it is just that the accomplices do not have inconsistent beliefs in this context.

As would be suspected, it can be difficult to prove that a supporter is an accomplice of Trump rather than a victim. While Trump pulls the curtain back and reveals things (like how Republicans want to make it harder to vote), it is unlikely that one of his accomplices would end a social media post professing belief in Trump’s claims by revealing that they do not believe the lies they just professed to believe. Sorting out the accomplices from the victims would require access to such things as private emails and recordings, things that would be difficult and perhaps illegal to acquire. In general, the accomplices are not very interesting from an epistemic standpoint since they are lying. About the only thing interesting is the epistemic problem of discerning the accomplices from the victims. Now, on to the victims.

In this context, the victims of Trump are supporters who believe his lies. These victims can be further divided into those who would change their view of Trump if they realized he was lying and those who would still support him (that is, would become accomplices). Given that Trump lies badly and blatantly even when his lies are easily exposed, my main explanation as to why these victims believe him is that they are often basing their beliefs on an appeal to authoritarianism. This fallacious reasoning has the following form:

 

Premise 1: Authoritarian leader L makes claim C.

Conclusion: Claim C is true.

 

The fact that an authoritarian leader makes a claim does not provide evidence or a logical reason that supports the claim. It also does not disprove the claim: accepting or rejecting a claim merely because it comes from an authoritarian would both be errors. The authoritarian could be right about the claim but, as with any fallacy, the error lies in the reasoning.

A silly math example illustrates why this is bad logic:

 

Premise 1: The dear leader claims that 2+2 =7.

Conclusion: The dear leader is right.

 

Since this is bad logic, it gets its power from psychological rather than logical factors. In this case, these factors are the psychological features of authoritarian personalities. An authoritarian leader is characterized by the belief that they have a special status as a leader. At the extreme, the authoritarian leader believes that they are the voice of their followers and that they alone can lead. Or, as Trump put it, “I alone can fix it.” Underlying this is the (false) belief that they possess exceptional skills, knowledge and ability. This causes them to make false claims and mistakes.

Since the authoritarian leader is reluctant to admit errors and limits, they must be dishonest to the degree they are not delusional and delusional to the degree they are not dishonest. Trump exemplifies this with his constant barrage of untruths and incessant bragging. These claims are embraced as true by his supporters who are victims.

An authoritarian leader like Trump desires followers and, fortunately for him, there are those of the authoritarian follower type. While Trump’s accomplices make use of him and assist him, they know he is lying. The authoritarian follower believes that their leader is special, that the leader alone can fix things. Thus, the followers must buy into the leader’s delusions and lies, convincing themselves despite the evidence to the contrary. Trump’s devoted supporters incorrectly believe him to be honest and competent.

Since Trump has failed often and catastrophically, his victims must accept the deceitful explanations put forth to account for these failures. This requires rejecting facts and logic. These victims embrace lies and conspiracy theories—whatever supports the narrative of Trump’s greatness and success. Those who do not agree with Trump are not merely wrong but are enemies. The claims of those who disagree are rejected out of hand, and often with hostility and insults. Thus, the followers tend to isolate themselves epistemically—which is a fancy way of saying that nothing that goes against their view of the leader ever gets in. While this explains, in part, their belief in Trump’s lies, it also helps explain how they can believe inconsistent (even contradictory) claims.

Someone who forms beliefs based on the appeal to authoritarian will accept what the authoritarian tells them as true. What justifies these beliefs in the minds of the victims is that the authoritarian made them. As such, they have no reason to consider other evidence and are effectively immune to arguments against these beliefs. After all, if the justification of a belief is a matter of it being a claim made by the authoritarian, then any other evidence or argument against that claim cannot impact its justification. The only things that could undermine the belief would be if the authoritarian told their followers to accept a new belief in place of the old (for example, the authoritarian saying that a once trusted minion is now an enemy) or if the victim stopped accepting the authoritarian for some reason. So how does this enable inconsistent beliefs?

The answer is that it does so very easily. If the victim believes a claim because the authoritarian makes the claim and other factors are irrelevant, then consistency will not matter to that victim. These beliefs are not accepted because they are backed by evidence, and they are not subject to critical assessment. As such, it would not even occur to the victim to check the claims made by the authoritarian against each other to see if they are consistent: these claims are simply believed, and they are believed because the authoritarian makes them.

In the case of Trump supporters who are victims, this seems to be what they are doing: they believe what Trump says because Trump says it, and that is good enough. It must be; if they engaged in an honest assessment and searched for the truth, they would not believe Trump’s lies. While they might bring up “evidence” and “argue” when responding to critics of Trump, these are not good faith efforts, since they do not believe based on evidence (because there is none) and they will refuse all evidence and arguments that go against these beliefs. Trump’s victims believing his lies about the election and insisting there is evidence of widespread fraud is an excellent example of this. The lack of evidence has no impact on their beliefs, nor does the inconsistency of some of their beliefs, because all that matters is what Trump says. This, of course, is a terrible epistemic system, although it is the foundation of authoritarianism (which is what Trumpism is, at least in part).