The assassination of Iranian scientist Mohsen Fakhrizadeh might have been conducted by a remote-controlled weapon. While this was still a conventional assassination, it does raise the specter of autonomous assassination automatons, or assassin bots. In this context, an assassin bot is a machine that can conduct its mission autonomously once deployed. Simple machines of this kind already exist. Even a simple land mine can be considered an autonomous assassination device because once deployed it activates according to its triggering mechanism. But when one thinks of a proper assassin bot, one thinks of a far more complicated machine that can seek and kill its target in a sophisticated manner. It could also be argued that a mine is not an assassination machine: while a mine can be placed in the hopes of killing a specific person, it does not seek a specific human target. As such, a proper assassin bot would need to be able to identify its target and attempt to kill it. To the degree that the bot can handle this process without human intervention, it would be autonomous.

The idea of assassin bots roaming about killing people raises obvious moral concerns. While the technology would be new, there would be no new moral problems here, with one possible exception. The ethics of assassination involve questions about whether assassination is morally acceptable and debates over specific targets, motivations, and consequences. But unless the means of assassination is especially horrific or indiscriminate, the means are not of special moral concern. What matters morally is that some means is used to kill, be it a punch, a poniard, a pistol, or poison. To illustrate, it would be odd to say that killing Mohsen Fakhrizadeh with a pistol would be acceptable but killing him as quickly and as painlessly with a knife would be wrong. Again, methods can matter in terms of being worse or better ways to kill, but the ethics of whether it is acceptable to assassinate a person are distinct from the ethics of what means are acceptable. Because of this, the use of assassin bots would be covered by established ethics: if assassination is wrong, then using robots would not change this. If assassination can be morally acceptable, then the use of robots would also not change this, unless the robots killed in horrific or indiscriminate ways.

There seem to be two general ways to look at using assassin bots to replace human assassins. The first is that their use would remove the human assassin from the equation. To illustrate, a robot might be sent to poison a dissident rather than sending a human. As such, the moral accountability of the assassin would be absent, although the moral blame or praise would remain for the rest of the chain of assassination. Whether, for example, Vlad sent a human or a robot to poison a dissident, Vlad would be acting the same from a moral standpoint.

The second is that the assassin bot does not remove the assassin from the moral equation, but it does change how the assassin does the killing. To use an analogy, if an assassin kills targets with their hands, then they are directly engaged in the assassination without the intermediary of a weapon. If an assassin uses a sniper rifle and kills the target from hundreds of yards away, they are still the assassin, as they directed the bullet to the target. If the assassin sends an assassin bot to do the killing, then they have directed the weapon to the target and are the assassin, unless the assassin bot is a moral agent and can be accountable in ways that a human can be and a sniper rifle cannot. Either way, the basic ethics do not change. But what if humans are removed from the loop?

Imagine, if you will, algorithms of assassination encoded into an autonomous AI. This AI uses machine learning or whatever is currently in vogue to develop its own algorithms to select targets, plan their assassinations, and deploy autonomous assassin bots. That is, once humans set up the system and give it basic goals, the system operates on its own.

The easy and obvious moral assessment is that the people who set up the system would be accountable for what it does. Going back to the land mines, this system would be analogous to a very complicated land mine. While it would not be directly activated by a human, the humans involved in planning how to use it and in placing it would be accountable for the death and suffering it causes. Saying that the mine went off when it was triggered would not get them off the moral hook, as the mine has no agency. Likewise for the assassination AI: it would trigger based on its operating parameters, but humans would be accountable for what it does to the degree they were involved. Saying they are not responsible would be like an officer who ordered land mines placed on a road claiming that they are not accountable for the deaths of the civilians killed by those mines. While it could be argued that the accountability is different from that which would arise from killing the civilians in person with a gun or knife, it would be difficult to absolve the officer of moral responsibility. Likewise for those involved in creating the assassin AI.

If the assassin AI developed moral agency, then this would have an impact on the matter because it would be an active agent and not merely a tool. That is, it would change from being like a mine to being like the humans in charge of deciding when and where to use mines. Current ethics can, of course, handle this situation: the AI would be good or bad in the same way a human would be in the same situation. Likewise, if the assassin bots had moral agency, they would be analogous to human assassins.

Trump and his supporters claimed Biden “won” in 2020 because of widespread election fraud. While Sidney Powell wove an international conspiracy too crazy even for Rudy Giuliani, some of Trump’s supporters embraced it. Another conspiracy theory claimed, falsely, that the US seized election servers in Germany in an armed raid. The pardoned (by Trump) criminal Michael Flynn called on Trump to suspend the Constitution and impose martial law in order to re-do the election. Officials in Georgia received death threats for accepting the election results, and when a fellow Republican pleaded with Trump to address this, Trump doubled down on his conspiracy theory.

The various conspiracy theories seem to have claimed that all election officials in areas won by Biden were involved in the alleged fraud. It must be noted that these included Republican election officials who supervised elections in which down-ballot Republicans often won. As always, the entire mainstream media (except perhaps Fox News) was said to be in on the conspiracy against Trump. Social media companies, voting machine companies and fellow travelers were accused of being in on the conspiracy. Even the Secretary of State and the Governor of Georgia were cast into the conspiracy by Trump and his followers, who thought they had betrayed Trump for Biden. Attorney General Bill Barr disputed Trump’s claims of fraud; Lou Dobbs suggested Barr was “compromised.” As other Republicans publicly accepted the results of the election, they were also seen as “compromised” and in on the alleged conspiracy against Trump. The large number of people alleged to be involved in election fraud to help Biden leads to a conspiracy paradox. But first, a bit more setup.

About 34% of registered voters identify as independents, 33% identify as Democrats and 29% identify as Republicans. Independents tend to lean towards a party: 49% of all registered voters are either Democrats or lean that way, while 44% identify as Republicans or lean that way. The party members and leaners do not always vote based on their affiliation or lean; 2016 provides a relevant example here. 5% of the Democrats and Democratic leaners jumped party to vote for Trump, while 4% of Republicans and their leaners jumped party to vote for Hillary. Hillary did, after all, get millions more votes than Trump in 2016; she just got them in the wrong places.

In his first term as President, Trump had low approval ratings and his handling of the pandemic was horrible. Polls showed that 52% of Americans were satisfied (18%) or happy (34%) that Trump lost the 2020 election. Early on, Biden had a 55% approval rating. While not conclusive, this information provides evidence in support of the legitimacy of the election. That is, there are good reasons to believe that millions more people voted for Biden than voted for Trump and enough of the votes were in the right places to win the electoral college. But for the sake of the conspiracy theories, let us suppose that this view is mistaken. Given the 2016 results, the best that can be done for Trump’s side is to consider that Biden had millions more popular votes but not enough to beat Trump in the electoral college. As such, the conspiracy theory claim would be that widespread election fraud enabled Biden to win.

As noted above, Trump and his supporters claimed many people were involved in the conspiracy. While they obviously thought Democrats were involved, they also added in Republicans, and this number kept growing over time. As noted above, when Barr said that the election was legitimate, he became a suspect in the conspiracy. The same held for other Republicans when they accepted the results. As such, Trump and his supporters needed to claim that all these people were involved to maintain the conspiracy theories about widespread voter fraud. After all, if these Republicans were not in on the conspiracy, then that would suggest the election was legitimate. The alleged conspiracy became so large that Biden would have won if the alleged conspirators had simply voted for him in a legitimate election. This, then, is the paradox: Trump and his supporters had to expand the membership in the alleged conspiracies, but doing so undermined the theory of fraud. At a certain point, the conspiracy became so large that if everyone in on the alleged conspiracy had voted for Biden, then Biden would have easily won legitimately.

Trump is infamous for spewing lies and his supporters are known for believing his claims. As noted in previous essays, one of the many things that is striking about supporters professing belief in Trump’s claims is that they accept claims that are logically inconsistent or even contradictory. Two claims are inconsistent when they both cannot be true, but they both could be false. This is different from two claims being contradictory: if one claim contradicts another, one must be true and the other false.
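For those who like the distinction made mechanical, it can be checked by brute force over truth values. The following is a minimal sketch of my own (the function names and example claims are illustrative, not standard logical terminology):

```python
from itertools import product

def classify(possible):
    """Classify the logical relation between two claims.

    `possible(a, b)` says whether the combination (claim 1 has truth
    value a, claim 2 has truth value b) can occur."""
    rows = [(a, b) for a, b in product([True, False], repeat=2) if possible(a, b)]
    both_true = (True, True) in rows
    both_false = (False, False) in rows
    if not both_true and both_false:
        return "inconsistent"    # cannot both be true, but could both be false
    if not both_true and not both_false:
        return "contradictory"   # exactly one must be true
    return "consistent"

# "The virus is a hoax" vs. "the virus is a deadly bioweapon": they cannot
# both be true, but both could be false (it could be real and not a weapon).
print(classify(lambda a, b: not (a and b)))  # inconsistent

# "The virus is real" vs. "the virus is not real": one must be true.
print(classify(lambda a, b: a != b))         # contradictory
```

The point of the sketch is simply that inconsistency rules out only the both-true row of the truth table, while contradiction also rules out the both-false row.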

The last pandemic provides a horrific example of the ability of Trump supporters to profess belief in inconsistent claims. Many Trump supporters claimed to believe that COVID-19 was a hoax, that it was no worse than the flu, that it was a Chinese bioweapon, that Trump did a great job with the pandemic, and that Trump should get credit for the vaccine. When Bob Woodward released tapes proving that Trump acknowledged the danger of the virus in February, many Trump supporters accepted Trump’s claim that he wanted to play down the virus to avoid a panic. His supporters defended him, claiming that great leaders have lied and should lie to prevent panic in the face of terrible danger. If Trump was right to lie to play down the deadly danger of the virus, then this is inconsistent with the claim that it is like the flu and inconsistent with the claim that it is a hoax. If he was right to lie because of the danger, then it is not like the flu nor is it a hoax. But if it is like the flu or a hoax, then he would not need to lie about the danger. One way to explain Trump supporters professing inconsistent beliefs is that some of them are accomplices. Another is that they are victims. I will begin with the accomplice explanation.

It is possible, even likely, that some of Trump’s supporters are aware when he is lying and perhaps even recognize when they make inconsistent claims. In this case, the inconsistency can easily be explained: they are accomplices to his lies and are repeating them. There is no inconsistency in their beliefs because they do not believe what they are claiming. There are various reasons for people to serve as his accomplices. They might want to express their allegiance to him, they might find his lies advantageous in their own grifts, they might be trolls, or they might gain some other advantage by professing belief in his lies. Not believing inconsistent claims does not make the claims consistent; it is just that the accomplices do not have inconsistent beliefs in this context.

As would be suspected, it can be difficult to prove that a supporter is an accomplice of Trump rather than a victim. While Trump pulls the curtain back and reveals things (like how Republicans want to make it harder to vote), it is unlikely that one of his accomplices would end a social media post professing belief in Trump’s claims by revealing that they do not believe the lies they just professed to believe. Sorting out the accomplices from the victims would require access to such things as private emails and recordings, things that would be difficult and perhaps illegal to acquire. In general, the accomplices are not very interesting from an epistemic standpoint since they are lying. About the only thing interesting is the epistemic problem of discerning the accomplices from the victims. Now, on to the victims.

In this context, the victims of Trump are supporters who believe his lies. These victims can be further divided into those who would change their view of Trump if they realized he was lying and those who would still support him (that is, would become accomplices). Given that Trump lies badly and blatantly even when his lies are easily exposed, my main explanation as to why these victims believe him is that they are often basing their beliefs on an appeal to authoritarian. This fallacious reasoning has the following form:

 

Premise 1: Authoritarian leader L makes claim C.

Conclusion: Claim C is true.

 

The fact that an authoritarian leader makes a claim does not provide evidence or a logical reason that supports the claim. But it also does not disprove the claim: accepting or rejecting a claim simply because it comes from an authoritarian would both be errors. The authoritarian could be right about the claim, but, as with any fallacy, the error lies in the reasoning.

A silly math example illustrates why this is bad logic:

 

Premise 1: The dear leader claims that 2 + 2 = 7.

Conclusion: The dear leader is right.

 

Since this is bad logic, it gets its power from psychological rather than logical factors. In this case, these factors are the psychological features of authoritarian personalities. An authoritarian leader is characterized by the belief that they have a special status as a leader. At the extreme, the authoritarian leader believes that they are the voice of their followers and that they alone can lead. Or, as Trump put it, “I alone can fix it.” Underlying this is the (false) belief that they possess exceptional skills, knowledge and ability. This causes them to make false claims and mistakes.

Since the authoritarian leader is reluctant to admit errors and limits, they must be dishonest to the degree they are not delusional and delusional to the degree they are not dishonest. Trump exemplifies this with his constant barrage of untruths and incessant bragging. These claims are embraced as true by his supporters who are victims.

An authoritarian leader like Trump desires followers and, fortunately for him, there are those of the authoritarian follower type. While Trump’s accomplices make use of him and assist him, they know he is lying. The authoritarian follower believes that their leader is special, that the leader alone can fix things. Thus, the followers must buy into the leader’s delusions and lies, convincing themselves despite the evidence to the contrary. Trump’s devoted supporters incorrectly believe him to be honest and competent.

Since Trump has failed often and catastrophically, his victims must accept the deceitful explanations put forth to account for these failures. This requires rejecting facts and logic. These victims embrace lies and conspiracy theories—whatever supports the narrative of Trump’s greatness and success. Those who do not agree with Trump are not merely wrong but are enemies. The claims of those who disagree are rejected out of hand, often with hostility and insults. Thus, the followers tend to isolate themselves epistemically—which is a fancy way of saying that nothing that goes against their view of the leader ever gets in. While this explains, in part, their belief in Trump’s lies, it also helps explain how they can believe inconsistent (even contradictory) claims.

Someone who forms beliefs based on the appeal to authoritarian will accept what the authoritarian tells them as true. What justifies these beliefs in the minds of the victims is that the authoritarian made them. As such, they have no reason to consider other evidence and are effectively immune to arguments against these beliefs. After all, if the justification of a belief is a matter of it being a claim made by the authoritarian, then any other evidence or argument against that claim cannot impact its justification. The only things that could undermine the belief would be if the authoritarian told their followers to accept a new belief in place of the old (for example, the authoritarian saying that a once trusted minion is now an enemy) or if the victim stopped accepting the authoritarian for some reason.  So how does this enable inconsistent beliefs?

The answer is that it does so very easily. If the victim believes a claim because the authoritarian makes the claim and other factors are irrelevant, then consistency will not matter to that victim. These beliefs are not accepted because they are backed by evidence, and they are not subject to critical assessment. As such, it would not even occur to the victim to check the claims made by the authoritarian against each other to see whether they are consistent: these claims are simply believed, and they are believed because the authoritarian makes them. In the case of Trump supporters who are victims, this seems to be what they are doing: they believe what Trump says because Trump says it, and that is good enough. It must be; if they engaged in an honest assessment and searched for the truth, they would not believe Trump’s lies. While they might bring up “evidence” and “argue” when responding to critics of Trump, these are not good faith efforts, since they do not believe based on evidence (because there is none) and they will refuse all evidence and arguments that go against these beliefs. Trump’s victims believing his lies about the election and insisting there is evidence of widespread fraud is an excellent example of this. The lack of evidence has no impact on their beliefs, nor does the inconsistency of some of their beliefs, because all that matters is what Trump says. This, of course, is a terrible epistemic system, although it is the foundation of authoritarianism (which is what Trumpism is, at least in part).

 

On the face of it, the notion of skill transference in education sounds reasonable: if a student learns one skill, such as Latin or geometry, that requires logical thinking, then this skill should transfer to other areas involving logical thinking, such as categorical logic. Surprisingly, it seems these skills do not transfer. There have also been ill-fated attempts to find skills that would boost general intelligence, such as the idea that learning to play an instrument or chess would make you smarter. So far, this has not worked out. While learning to play chess makes a person better at chess, it does not seem to boost general intelligence.

Because of its perceived value, there have been efforts to teach students critical thinking. At my university this is one of the competencies we assess as part of our assessment of the General Education curriculum. There is, as would be imagined, an assumption that various and diverse general education classes can teach the general skill of critical thinking. My Philosophy and Religion program also has critical thinking as a competency we assess, and there is, once again, an assumption that there is a general skill being taught. Interestingly, the national data and the data from my university show that students generally do not transfer critical thinking skills. What is extremely interesting is that these skills do not seem to transfer well even within a specific discipline. For example, one might think that taking Critical Inquiry (a critical thinking class) or Logic would confer general critical thinking skills that would be retained and applied in other philosophy classes. But this is generally not the case.

While it is not surprising that very specific skills would not transfer well (for example, learning about metaphysics might not help a student much in ethics), it does seem odd that general critical thinking skills do not transfer very well. Daniel Willingham provides an excellent analysis of this problem.

Willingham presents two excellent examples. One involves the difficulty people have with transferring an understanding of the law of large numbers from the context of randomness (such as dice) to cases such as judging academic performance. That is, a person who gets that rolling a set of dice twice will not tell you whether they are loaded might uncritically accept that a person who gets two bad math exam grades must be bad at math. Both scenarios involve the same sort of reasoning (inductive generalization), but the skill does not seem to transfer between the different applications. If it did, a person who understood the dice situation would also get that a sample of two math tests is too small to support an inference about math skill.
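The sample-size point can be made vivid with a quick simulation. This is an illustrative sketch of my own (the seed and the numbers are arbitrary), not part of Willingham’s analysis:

```python
import random

random.seed(1)  # arbitrary seed, just for repeatability

def sample_mean(n_rolls):
    """Average of n_rolls rolls of a fair six-sided die (true mean is 3.5)."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Two rolls tell you almost nothing: repeated two-roll averages swing wildly
# even though the die is fair.
print([sample_mean(2) for _ in range(5)])

# A thousand rolls pins the average down near 3.5: the law of large numbers.
print([round(sample_mean(1000), 2) for _ in range(5)])
```

The same logic applies to the exam case: two test grades are a two-roll sample, far too small to support a confident inference about a student’s math ability.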

His second example, a classic experiment, involved analogical reasoning. In this experiment, subjects were asked how a tumor could be treated with a ray that would cause extensive collateral damage. Before being given this problem, the subjects read a story about rebels attacking a fortress that presented an analogy to the tumor situation. Despite having the solution right in front of them, the subjects generally could not solve the medical problem. The researchers found that telling the subjects that the story might help solve the problem resulted in almost all the subjects being able to apply the analogy. The researchers concluded that the problem was getting the subjects to use the analogy, since the analogy itself was easy to use.

Willingham draws the conclusion that, “The problem is that previous critical thinking successes seem encapsulated in memory. We know that a student has understood an idea like the law of large numbers. But understanding it offers no guarantee that the student will recognize new situations in which that idea will be useful.” So how could this connect to the ability of people to hold to inconsistent beliefs?

As noted in my previous essays on inconsistent beliefs, people are good at believing inconsistent claims. Two claims are inconsistent when they both cannot be true, but both could be false. This is different from two claims being contradictory: if one claim contradicts another, one must be true and the other false. As also noted in previous essays, my inspiration for these essays was seeing social media posts by Trump supporters presenting and professing belief in inconsistent (and sometimes contradictory) claims. To illustrate, Trump supporters tended to believe Trump’s claims that COVID-19 was no worse than the flu and that it was also a hoax. When Bob Woodward released tapes proving that Trump acknowledged the danger of the virus, many Trump supporters accepted Trump’s claim that he wanted to play down the virus to avoid a panic. His supporters defended him, claiming great leaders lie to keep morale up in the face of terrible danger (something Plato might accept, given his noble lie). They also claimed he was right to do this in order to prevent panic in the face of a deadly virus. Laying aside all the moral issues here, there is an obvious logical problem: if Trump was right to lie to play down the virus because it is a terrible danger, then this is inconsistent with the claim that it is like the flu (or a hoax). So, if he had to lie because of the danger, then it is not like the flu (or a hoax). But if it is like the flu (or a hoax), then he did not need to lie about the danger. There was a bit of unpleasant fun had in getting a Trump supporter to profess belief in these inconsistent claims in the space of a short Facebook interaction; but almost anyone can easily be caught holding inconsistent beliefs. The transference problem can help explain some of this.

As Willingham has shown, people are generally bad at transferring critical thinking skills between different situations. Differences in content, as he noted, can prevent people from seeing what can become obvious with the right context. Because of this, a person might be very good at discerning inconsistency in specific cases but fail in other cases. As an example, consider a Trump supporter who is very good at finding inconsistencies in claims made by liberals they disagree with. They are motivated to find such problems and continued practice can make them good at finding inconsistencies in this context. But if the context is switched to their own beliefs, the change can prevent skill transference. That is, they can readily see the inconsistencies of a liberal in one context but are unable to see their own inconsistencies. This is analogous to the subjects in the analogy experiment: they had the answer right in front of them but were blind to it until it was pointed out to them.

Put in general terms, people with strong political views can practice attacking and criticizing views they disagree with and develop critical thinking skills they can apply in very specific contexts. But people rarely subject their own beliefs to intense logical scrutiny. People almost never carefully compare their core beliefs to check for logical inconsistencies and so have little practice doing so. Hence, they will tend to be bad at noticing obvious inconsistencies. This, of course, assumes that people are being honest: that they hold the beliefs they profess and are not lying as a strategy. It is to this that I will turn in my next essay.

Unlike the thinking machines of science fiction, human beings can easily believe inconsistent (even contradictory) claims. Based on experience, I am confident I still have inconsistent beliefs and false beliefs. I do not know which ones are false or inconsistent. If I knew, I would (I hope) stop believing the false ones and sort out the inconsistencies. Writing out my ideas helps in this process because others can see my claims and assess them. If someone can show that two of my beliefs are inconsistent (or contradictory) they are helping me weed the garden of my mind. But not everyone is grateful for this sort of help. Although, to be fair, criticism can arise from cruelty rather than honest concern.

While most people do not write extensively about their beliefs, many people present beliefs on social media, such as Facebook, Bluesky and X. Being a philosopher, I have the annoying trait of checking these claims for logical inconsistency and contradiction. Two claims are inconsistent if they both cannot be true at the same time, but they could both be false. If two claims are contradictory, one must be false and the other true.

As would be suspected, the political beliefs people profess are often inconsistent or even contradictory. I have, and perhaps so have you, seen posts making inconsistent or even contradictory claims. As a classic example, it was jarring to see a post mock people who took the COVID pandemic “hoax” seriously, assert that the “China Virus” is a dangerous bioweapon, and then conclude by praising Trump’s great handling of the pandemic and accusing the Democrats of trying to steal credit for the great vaccine that Trump created. It got even stranger when 5G and QAnon were thrown into the posts. Pointing out such inconsistencies usually causes people to angrily double down or make threats. I invite readers to provide examples of how “the libs” also hold inconsistent sets of beliefs. But keep in mind that inconsistency is a matter of logic, and a set of false claims can be consistent with each other. So how do people come to believe such clearly inconsistent sets of claims? Perhaps the concept of choice blindness can shed some light on the matter.

Back in 2005, Swedish researchers developed the concept of choice blindness after conducting an experiment involving choosing between two photos of faces. Each participant was asked which photo they found more attractive, and then the researcher used sleight of hand to make the participant think they had been handed back the photo they picked, when in fact they had been given the photo they had not picked. While one would expect the subjects to notice the switch, they generally did not and accepted the switched photo as the one they had picked. They even offered reasons as to why they had picked that photo in the first place (though they had actually rejected it). Follow-up experiments yielded the same results for the taste of jam, financial choices, and eyewitness reports.

These results could be explained away in terms of weak preferences and other factors. For example, if a person is asked to pick between two photos and, at that moment, they slightly prefer one, then it would not be surprising that they would easily change their mind. But one might think that political beliefs would be different, especially in these highly polarized times. Yet people seem to suffer from choice blindness here as well.

In 2018 an experiment was conducted in which participants were given a survey about political questions. The researchers gave the subjects false feedback and found that their beliefs tended to shift accordingly. This effect lasted up to a week and, interestingly, lasted even longer when the researchers asked the participants to defend “their choices.” For example, a person who originally favored raising taxes would be asked by the researchers about “their” view that taxes should not be raised. This person would then tend to believe that taxes should not be raised. The researchers’ explanation is a reasonable one: if a person thinks a belief is their own, they will be free of many factors that would have caused them to defend their original belief. This makes sense: if someone believes they believe something, then they will tend to believe and defend it. Roughly put, people believe what they believe they believe—even when they previously did not believe it. So how can this help explain the ability to believe inconsistent or even contradictory claims?

Based on the above, a person can initially believe one claim and then be easily switched to believing (and defending) a claim inconsistent with their original belief.  For example, a person who initially believes that a carbon tax would reduce emissions could have their belief switched by this method to believing (and defending) that carbon taxes would not do that.  These two claims are inconsistent, but a person can easily be switched from one to the other without apparently even noticing.

Now consider a person who believes inconsistent claims. When they make one claim, this would be analogous to their professing their original belief in the choice blindness experiment. When they profess an inconsistent claim, this would be analogous to them professing belief in the claim they were switched to believing by the researchers. In the case of holding inconsistent beliefs, a person would be switching themselves when they switched from professing one belief to professing belief in a claim inconsistent with the first belief. As such, a person would believe the first belief and then seamlessly switch to the inconsistent belief without noticing the inconsistency. Given that the experiment shows that people can be switched to opposite beliefs without noticing, it would be easy for people to hold to inconsistent beliefs without noticing the inconsistency. They believe one belief because they believe it; they believe an inconsistent belief because they believe that as well. That is, people believe what they think they believe and simply ignore or forget any inconsistencies.  While this is certainly not the whole story, choice blindness does shed some light on the ability people have to profess inconsistent beliefs.

Being a philosopher and single once again, I have been overthinking the whole dating thing. The original version of this essay was written in 2016 after the amicable end of a long-term, long-distance relationship. This rewrite is being done in 2025 after the end of another long-term, long-distance relationship in 2024.

Back in 2016, when I was fifty, a random interaction provided me with something new, or rather old, to think about: age and dating. In this scenario I was talking with a woman and had no intention of making any overtures or moves (smooth or otherwise). With some storytelling license in play, we join the story in progress:


Her: Flirt. Flirt. Flirt.

Her: “So, what do you do for work?” Flirt.

Me: “I’m a philosophy professor.”

Her: “At FSU?” Flirt.

Me: “No, literally across the tracks at FAMU.”

Her: “When did you start?” Flirt.

Me: “1993.”

Her: “1993…how old are you?”

Me: “Fifty.”


At this point, she dropped out of flirt mode so hard that it damaged the space-time continuum. Windows cracked. Tiny fires erupted in her hair. Car alarms went off. Pokémon died. Squirrels were driven mad and fled in terror, crying out to their dark rodent gods for salvation. Here is how the conversation ended:


Her: “Um, I bet my mother would like you. Oh, look at the time…I have to go now.”

Me: “Bye.”


While some might have found this experience ego-damaging, my friends know I have an adamantine ego.  What I took away was that I looked much younger than fifty, probably due to all that running. But what struck me most about this episode is that the radical change in her behavior was due entirely to her learning my age.  As my friend Julie commented, I had “instantly gone from sexable to invisible.” She must have incorrectly estimated that I was younger than fifty. Perhaps she had forgotten to put in her contacts. So, on to the matter of age and dating.

While some might claim that age is just a number, that is not true. Age is more than that. At the very least, it is a major factor in how people select or reject potential dates. On the face of it, the use of age as a judging factor should be seen as reasonable. The reason is, of course, that dating is largely a matter of attraction and this is strongly influenced by preferences. One person might desire the feeble hug of a needy nerd, while another might crave the crushing embrace of a jock dumb as a rock. Some might swoon for eyes so blue, while others might have nothing to do with a man unless he rows crew. Likewise, people have clear preferences about age. In general, people prefer those close to them in age, unless there are other factors in play. Men, so the stereotype goes, have a marked preference for younger women. Women, so the stereotype goes, will tolerate a wrinkly old coot if he has stacks of the finest loot.

Preferences in dating are, I would say, analogous to preferences about food. One cannot be wrong about these and there are (usually) no grounds for condemning or praising such preferences. If Sally likes steak and tall guys, she just does. If Sam likes veggie burgers and winsome blondes, he just does. As with food preferences, there is little point in trying to argue, as people like what they like and dislike what they dislike. That said, there are some things that might seem to go beyond mere preferences. To illustrate, I will offer some examples.

There are white people who would never date a black person. There are black people who would never date anyone but another black person. There are people who would never date a Jew. There are others for whom only a Jew will do. Depending on the cause of these preferences, they might be better categorized as biases or even prejudices. But it is worth considering that these might be benign preferences. A white person might, for example, have no racial bias; they might simply prefer light skin to dark skin for the same sort of reason they might prefer brunettes to blondes. Then again, these preferences might not be so benign.

People are chock full of biases and prejudices, so it should come as no surprise that these influence dating behavior. On the one hand, it is tempting to accept these prejudices on the grounds that dating is entirely a matter of personal choice. On the other hand, it could be argued that prejudices are problematic even in the context of dating. This is not to claim that people should be subject to compelled diversity dating, just that perhaps such prejudices should be criticized.

When it comes to alleged prejudices, it is worth considering that they might be a matter of innocent ignorance: the person simply lacks correct information. Assuming the person is not willfully and actively ignorant, this is not to be condemned as a moral flaw since it can be easily fixed by the truth. To go back to the food analogy, imagine that Jane prefers Big Macs because she thinks they are healthy and refuses to eat avocadoes because she thinks they are unhealthy. Given what she thinks, it is reasonable for her to eat Big Macs and avoid avocadoes. If she knew the truth, she would change her eating habits since she wants to eat healthily. She is merely ignorant. Likewise, if Jane believed that black men are all uneducated thugs, then it would seem reasonable for her not to want to date a black man given what she believes. If she knew the truth, her view would change. As such, she is not prejudiced, just ignorant.

It is also worth considering that an apparent prejudice might be a real prejudice: the person would either refuse to accept facts or would maintain the same behavior in the face of the facts. As an example, suppose that Sam thinks that white people are complete racists and thus refuses to even consider dating one. While it is often claimed that everyone is racist, not all white people are complete racists. As such, if Sam persisted in his belief or behavior in the face of the facts, then it would be reasonable to condemn him for his prejudice.

Finally, it might even be the case that the alleged prejudice is rational and well founded. To use a food analogy, a person who will not eat raw steak because she knows the health risks is not prejudiced but quite reasonable. Likewise, a person who will not date a person who is a known cheater is not prejudiced but rational.

But what about age? The easy and obvious answer is that it can fall into all three of the categories discussed above. If a person’s dating decisions are based on incorrect information about age, then they have made an error of ignorance. If a person’s decisions are based on mere prejudice, then they have made a moral error. But, if the decision regarding age and dating is rational and well founded, then the person would have made a good decision. As should be suspected, the specifics of the situation are what matter. That said, there are some general categories relating to age that are worth considering.

While I was fifty when I wrote the first version of this essay, I am now fifty-nine. So, I am considering these matters from the perspective of an even older person.  Honesty compels me to admit that I am influenced by my own biases here and, as my friend Julie pointed out in 2016, older men are full of delusions about age. I presumably have an extra nine years of delusions. However, I will endeavor to be objective and will lay out my reasoning.

The first is the matter of health. In general, as people get older, their health declines. For example, older people are more likely to have colon cancer. Hence people who are not at risk usually do not get colonoscopies until fifty (although the recommendation now seems to be 45). Because of this, it is reasonable for a younger person to be concerned about dating someone older, as that person is more likely to get ill and die. That said, an older person can be much healthier than a younger person. As such, it might come down to whether a person looks at dating options broadly in terms of categories of people (such as age or ethnicity) or is more willing to consider individuals who might differ from the stereotypes and statistics of these categories. Using categories does help speed up decisions, although doing so might result in missed opportunities. But there are billions of humans, and so categories can be useful if one wants to narrow their focus.

While an older person might not be sick, age does weaken the body. For example, I remember being bitterly disappointed by a shameful 16:28 5K in my youth. Now I must struggle to maintain that pace for a quarter mile. Back then I could easily do 90-100 miles a week; now I run a mere 20-50 and must row to get in the rest of my miles. Time is cruel. For those who are concerned about a person’s activity levels, age is a relevant factor and provides a reasonable basis for not dating an older (or younger) person that is neither an error nor a prejudice. However, an older person can be far fitter and more active than a younger person, so that is worth considering before rejecting an entire category of people.

Life expectancy is also part of health concerns. A younger person interested in a long-term relationship would need to consider how long that long term might be, and this is rational. To use an obvious analogy, when buying a car, one should consider the miles on it. Women also live longer than men, so that is a consideration as well. Since I am a 59-year-old American living in Florida, the statistics say I have about 14.1 years left. Death sets a clear limit to how long term a relationship can be. But life expectancy and quality of life are influenced by many factors, and they might be worth considering. Or not. Because, you know, death.

The second broad category is that of interests and culture. Each person is born into a specific temporal culture and that shapes their interests. For example, musical taste is influenced by this, and older folks famously differ in their music from younger folks. What was once rebellious rock became a golden oldie suitable to be played in Publix. Fashion is also very much a matter of time, although styles have a way of cycling back into vogue, like those bell bottoms. Thus, people who differ in age are people from different cultures and that presents a real challenge. An old person who tries to act younger typically only succeeds in appearing absurd. One who does not try will presumably not fit in with a younger person. So, either way is a path to failure.

There is also the fact that interests change as a person gets older. To use some stereotypes, older folks are supposed to love shuffleboard and bingo while younger folks are into things that would presumably kill or baffle old people, like video games and Snapchat. Party behavior also differs. It could be countered that there can be shared interests between people of different ages and that a lack of shared interests is obviously not limited to those who differ in age. The response is that perhaps the age difference would generally result in too much of a difference in interests, thus making avoiding dating people who differ enough in age rational and reasonable.

The third broad category consists of concerns about disparities in power. An older adult will typically have a power advantage over a younger adult, and this raises moral concerns about exploitation. But there is also the reverse concern: that a younger person will exploit an older person. Because of this, a younger adult should be rightly concerned about being at a disadvantage relative to an older person. Of course, this concern is not just limited to age. If the concern about power disparity is important, then it also applies to disparities in education, income, and abilities between people in the same age group. That said, the disparities would tend to be increased with an age difference. As such, it is reasonable to be concerned about this factor.

The fourth broad category is the “ick factor.” While there is some social tolerance for rich old men having hot young partners, people dating or attempting to date outside of their socially defined age categories can be condemned because it is seen as “icky” or “gross.” Back when I was in graduate school, I remember people commenting on how gross it was for old faculty to hook up with graduate students. Laying aside exploitation and unprofessionalism, it did seem gross. As such, the ick argument has appeal. But there is the question of whether the perceived grossness is founded or not. On the one hand, it can be argued that grossness is in the eye of the beholder or that grossness is set by social norms and these serve as proper foundations. On the other hand, it could be contended that the perception of grossness can be unfounded prejudice. On the third hand, the grossness could be cashed out in terms of the above categories. For example, it is icky for an unhealthy and weak rich man to date a hot, healthy young woman with whom he has no real common interests (beyond money, of course). He should be dating an unhealthy, weak, old woman with whom he has common interests.

My long-term, long-distance relationship came to an amicable end in May of 2024, thus briefly tossing me back into the world of dating before I gave up. This is the sequel to a similar ending with a different person back in 2016, allowing me to revisit what I wrote back then.

Since starting and maintaining a relationship is a lot of work (if not, you are either lucky or doing it wrong), I think it is important to consider whether relationships are worth it. One obvious consideration is the fact that most romantic relationships end well before death.  Even marriage, which is supposed to be the most solid of relationships, tends to end in divorce. I am divorced; my smart and ambitious wife took an excellent academic job in California and then divorced me in 2004 when she could no longer do the long-distance thing. I definitely have a type.

While there are many ways to look at the ending of a relationship, there are two main approaches. One is to consider the relationship a failure. This can be seen as trying to write a book and not finish: all that work poured into it, yet it remains incomplete. Another obvious analogy is with running a marathon and not finishing. While great effort was expended, it ended in failure.

Another approach is to consider the ending more positively: the relationship ended but was completed. Going back to the analogies, it is like completing that book you are writing or finishing that marathon. True, it has ended, but it is supposed to end.

When my previous relationship ended in 2016, I initially looked at it as a failure: all that effort invested, and it ended because, despite two years of trying, we could not get academic jobs in the same geographical area. However, I tried to look at it in a more positive light: although I would have preferred that it did not end, it was a very positive relationship, rich with wonderful experiences, and it helped me become a better human being. There still, of course, remains the question of whether it is worth being in another relationship. As a spoiler, I did meet another wonderful person, a smart, ambitious woman who moved away and decided that the long-distance relationship was too much. I guess that is a double spoiler.

One way to address this is in the context of biology and evolution. Humans are animals that need food, water, and air to survive. As such, there is no real question about whether food, water, and air are worth it; one is simply driven to possess them. Likewise, humans are driven by their biology to reproduce, and natural selection seems to have selected genes that mold brains to engage in relationships. As such, there is no real question of whether they are worth it; humans just have relationships. This answer is, of course, rather unsatisfying since a person can, it would seem, choose whether to be in a relationship or not. There is also the question of whether relationships are worth it. This is a question of value, and science is not the realm where such answers lie. Value questions belong to such areas as moral philosophy and aesthetics. So, on to value.

The question of whether relationships are worth it or not is like asking whether technology is worth it: the question is too broad. While some might endeavor to give sweeping answers to these broad questions, such an approach would be problematic and unsatisfying. Just as it makes sense to be more specific about technology (such as asking if ChatGPT is worth the cost), it makes more sense to consider whether a specific relationship is worth it. That is, there seems to be no general answer to the question of whether relationships are worth it or not, it is a question of whether a specific relationship would be worth it.

It could be countered that there is, in fact, a legitimate general question. A person might see any likely relationship as not worth it. For example, I know many professionals who have devoted their lives to their careers and have no interest in relationships. They say they do not consider romantic involvement to have much, if any, value. A person might also regard a relationship as a necessary part of their well-being. While this might be due to social conditioning or biology, there are certainly people who consider almost any relationship worth it.

These counters are reasonable, but it can be argued that the general question is best answered by considering specific relationships. If no specific possible (or likely) relationship for a person would be worth it, then relationships in general would not be worth it. So, if a person honestly considered all the relationships she might have and rejected all of them because their value is not sufficient, then relationships would not be worth it to her. As noted above, some people take this view.

If at least some possible (or likely) relationships would be worth it to a person, then relationships would thus be worth it. This leads to an obvious point: the worth of a relationship depends on that specific relationship, so it comes down to weighing the negative and positive aspects. If there is a sufficient surplus of positive over the negative, then the relationship would be worth it.

As should be expected, there are many serious epistemic problems here. How does a person know what would be positive or negative? How does a person know that a relationship with a specific person would be more positive or more negative? How does a person know what they should do to make the relationship more positive than negative? How does a person know how much the positive needs to outweigh the negative to make the relationship worth it? And, of course, many more concerns. Given the challenge of answering these questions, it is no wonder that so many relationships fail. There is also the fact that each person has a different answer to many of these questions, so getting answers from others will tend to be of little real value and could lead to problems. Back in 2016, I had given up on relationships until I was inspired to try again. As I write this, I am once again in a state of doubt.

An odd thing about the American far right is that they often seem to be buffoons doing absurd things. One example is the fascist organization the Proud Boys. While this is a domestic terrorist group known for violence, it is also known for its wacky rules and rituals. They have a strict rule about masturbation and a ritual in which they punch a member while shouting out the names of breakfast cereals. They also seem to LARP by dressing up with a Call of Duty look, and they have an order of “Alt Knights.” As such, they can appear as a bunch of loonies.

As a second example, Trump puts on a masterful show of buffoonery. He maintains an odd orange skin tone which has led to speculation that it is a spray-on tan. His COVID press conferences were master performances in absurdity, with bizarre claims made in front of the cameras. His bumbling of basic language and his expressions of ignorance about basic facts relevant to his job are also an impressive performance of buffoonery.

As a third example, Rudy Giuliani rivaled his master in his buffoonery. He crazily advanced unsupported conspiracy theories, filed unfounded lawsuits with typos, held a press conference at a landscaping business, and in a brilliant stroke of foolery held a press conference with what might be mascara (rather than hair dye) running down the sides of his face. Comedians are hard pressed to parody the right. While it is tempting to dismiss this buffoonery as arising from stupidity, it is worth considering it is a strategy. So, what are possible advantages of buffoonery as a political tool?

One advantage is that ridiculous behavior can make someone seem less dangerous or even harmless. Take, for example, the Proud Boys. Their breakfast cereal beating and “no wank” rules make them seem silly. How could such silly people be domestic terrorists? In the case of Rudy and his ilk, their incompetent buffoonery makes them seem silly. How could some crazy guy with mascara running down his face have harmed American democracy? The defense against this is to realize that even buffoons can be dangerous, especially when their buffoonery is directed by non-buffoons and used as a cover.

A second advantage of buffoonery is that it distracts people from serious matters. Trump’s constant buffoonery draws attention away from the harmful and corrupt things going on under his reign. As many have said, Trump sucks up all the oxygen and dominates the news cycle, and thus important stories get little or no attention. In the case of the Proud Boys, their buffoonery distracts from their violence. In the case of Rudy and his ilk, their buffoonery distracts from the deeper stories of the undermining of American democracy in favor of authoritarianism. This tactic is analogous to that used by pickpockets and magicians: they often use an assistant to distract the target so that they can accomplish their goal. The defense is to resist the lure of the buffoonery, but this is hard for most of the media as they need to capture an audience.

A third advantage is that buffoonery makes it harder for the opponents of the far right to convince others that these people are a threat. This strategy is presented in the X-Files episode José Chung’s From Outer Space. In this episode, Mulder and Scully run into the Men in Black. Alex Trebek plays one of them, but he is supposed to just look like Alex Trebek rather than be Alex Trebek. That is, he is playing someone who is playing him. This is done because the Men in Black are supposed to appear so ridiculous that any story told about them will seem absurd and unbelievable. To use the Proud Boys as an example, if someone tries to explain that this “no wank” group of breakfast cereal shouters is a real threat, they might seem crazy.

Folks on the right also use coded language, dog-whistles, and euphemisms to produce a similar effect. Because of this strategy, trying to explain the right to “normies” can make a person seem crazy. Phrases and terms like “bad hombres”, “law and order”, “inner cities”, “suburban housewives”, “America First”, “international bankers”, and such appear innocent to those ignorant of the code and the context. For example, when Trump talks about “law and order” in the “inner cities” he is usually talking about using the police to oppress black Americans. When a right-wing group talks about “international bankers” and “Soros” they are usually engaging in antisemitism. As I can attest from my own experience, trying to explain dog whistles and coded language to “normies” results in incredulous stares, which is exactly what the right intends. Overcoming this is challenging, especially since the right adapts when their dog whistles and coded language are exposed to the mainstream. But this is something that needs to be done, and one hopes that more people become aware of what the right is trying to do and can decode their language even when the right adapts.

QAnon is essentially a conspiracy street sausage: scraps and leftovers of past conspiracies wrapped in the intestines of an apocalyptic cult and served up to people who are not careful about what gets into their minds. But it is also a fascinating bit of story design that mirrors classic techniques used to write horror adventures in role-playing games and tales of terror.

Put a bit simply, QAnon is a conspiracy theory that there is a worldwide cabal of Satan-worshiping (possibly cannibal) pedophiles operating a sex-trafficking ring. Since these are criminal activities universally condemned as morally horrific, the story of QAnon should be in the police procedural genre. If the evidence QAnon claims to have actually existed, then there would be worldwide arrests with public support. While there have been arrests and investigations featuring the likes of Jeffrey Epstein and Ghislaine Maxwell, there is no evidence of activity against this alleged cabal. This is not surprising, as the authors of the conspiracy seem to be using a classic technique of horror adventures in games and stories: negating the authorities to make room for the heroes.

In horror role-playing games such as Call of Cthulhu, one takes on the role of a hero attempting to thwart or at least delay the machinations of evil. One practical concern is providing a rational explanation as to why the heroes are the ones who must save the day. The heroes are usually just a random collection of people thrown into horror. They are almost never in positions of meaningful power or authority. As such, they do not take on the job because they have an army or police force to get it done for them. They do the job because they are the heroes. There must also be a rational explanation of why the authorities are not the ones solving the problem; otherwise there would be no need for the heroes. There are a variety of ways to handle this negation of authority.

One classic is isolation. The heroes are someplace where there are no authorities who can save the day. They might, for example, be in a cabin deep in the Maine woods with no phone service or working transportation. They might, as another example, be on a damaged ship with no power and no working radio. QAnon does not use this approach, since it would not work with their horror story.

A second approach is that the authorities are unwilling or unable to help. They might be too afraid to act, they might be too weak to act, they might not know what is occurring, or the heroes might not have the evidence needed to convince them to act. In some cases, the heroes intentionally avoid involving the authorities when they believe that the authorities simply cannot handle the situation, and they do not want to get people needlessly killed.  In the case of QAnon, they think that some people in power are not part of the cabal but are also not among the heroes supposedly fighting it.

A third classic approach is that the authorities are part of the conspiracy: they are not solving the problem because they are the problem. One practical concern here is that the authorities cannot be too powerful, otherwise the heroes would not stand a chance against them. QAnon does claim that at least some of those in power are part of the conspiracy. They address the power concern by making Donald Trump their main hero: he can fight against the pillars of the community because he is the President and has the military and executive branch at his disposal. This does create another sort of problem: since Trump has such overwhelming force, the adventure of QAnon should have ended almost immediately, with Trump saving the day. As a game, it would have gone down like this:


Keeper: “Okay, you are the President of the United States and have overwhelming evidence of a cabal of pedophile sex-traffickers.”

Player: “I give the FBI director a call and send him all the evidence.”

Keeper: “Great! The cabal members are soon arrested and your approval rating skyrockets!”

Player: “How much XP is that?”

Keeper: “This is Call of Cthulhu; you just get skill improvement rolls.”


As such, QAnon must explain why their hero has not used his overwhelming power to solve the problem. This requires another technique, delayed resolution. In a story or horror adventure, an immediate and easy solution is not satisfying and so the resolution must be delayed. To steal from Aristotle, the resolution cannot come too quickly—this makes the story too short, and it will fail to satisfy. It also cannot take too long since dragging the problem out will become tedious and strain plausibility. As such, the ideal is to be neither too short nor too long but to be just right.

QAnon has attempted to delay the resolution by explaining that Trump needed time to plan and organize what they call the “Storm.” On this day Trump will finally spring into action and the cabal members will be arrested. In a horror adventure, the game master delays resolution in various ways, such as having minor villains that must be vanquished, investigations that must be conducted, and red herrings that distract the heroes. In the case of QAnon, the delayed resolution seems rather too delayed: Trump is into his second term and there has not been a drop of rain, let alone a Storm. The Storm is tomorrow and always will be.

QAnon has, of course, a long list of failed predictions and has persisted (as such cults do) through these failures; but the Storm is critical to Trump remaining the hero. A good analogy is to consider what happens to apocalyptic groups when the date of their apocalypse comes and goes without an apocalypse: they tend to collapse. QAnon has, amazingly, been able to persist despite these failed predictions and remains active today. This, I admit, has surprised me. But perhaps playing QAnon is so addictive that they cannot stop.

Epistemology is a branch of philosophy concerned with theories of knowledge. The name is derived from the Greek terms episteme (knowledge) and logos (explanation). Epidemiology is the study and analysis of the distribution, patterns, and determinants of health and disease conditions in defined populations. While the names of the two fields sound alike, they are obviously different. But I propose a subbranch of epistemology that could be called “epistemic epidemiology” or perhaps given a silly name like “epistidemology.” This subbranch would not be focused on the epistemic features of epidemiology (which would also be interesting). It would not be about knowledge of diseases but about diseases of knowledge.

These diseases of knowledge can include corruption or infection of normally healthy epistemic systems as well as epistemic systems that are fundamentally pathological in nature. One goal of this subbranch would be to work out descriptive accounts of various epistemic diseases as well as theories of how such diseases arise, spread, and do damage. There would also be descriptive accounts of epistemic systems that are inherently pathological. Of special interest would be the nature and causes of epistemepidemics which are widespread epistemic pathologies in populations.

This subbranch, I propose, should be more than descriptive. Like ethics (and medicine) it should also be prescriptive: epistemic pathologies should be analyzed with the aim of curing (or replacing) them, so that people can have healthy belief forming systems. As would be expected, doing prescriptive epistemology will involve disputes and controversies like those in ethics and arguments will be needed to defend claims about which epistemic systems are pathological and how they might be treated. Fortunately, there are already two established areas of thought that will be useful here.

One area is what epistemologists call the ethics of belief (thanks to William Clifford). This area deals with such matters as the moral obligations we might have when forming beliefs. In fact, it could be argued that there is no need for epistemic epidemiology since the ethics of belief already covers the normative aspects of epistemology. While this view is reasonable, epistemic epidemiology includes normative components but also covers non-normative areas that are not addressed by the ethics of belief. An obvious example is that the ethics of belief does not address questions of why pathological epistemologies can be so widespread. So, just as medical ethics and medical epidemiology are distinct, the same holds for the ethics of belief and epistemic epidemiology.

A second area is the realm of logic, with special attention on critical thinking methods. While people can engage in endless debates about epistemic theories, what counts as defective (even pathological) reasoning is well established. Someone who insists on forming beliefs based solely on rhetoric would be in error; someone who insists on forming beliefs based on fallacies would seem to be pathological (pun intended). As such, logic provides an excellent toolkit, much like medical techniques provide an excellent toolkit for medical epidemiologists.

There would certainly seem to be important roles in this field for findings from neuroscience, psychiatry, and psychology. For example, delusional disorder is a serious mental illness that has a profound impact on a person’s epistemic systems: they claim to have knowledge of something that is not true and will persist even in the face of evidence that should logically undermine their false belief. This is not to claim that all or even most false beliefs or epistemic flaws arise from mental illness, but that the science of how such epistemically connected illnesses (might) work would be especially useful for addressing epistemic issues in general. Naturally, this matter must be addressed with due sensitivity, and there is the obvious worry that the unscrupulous might weaponize claims about mental illness. Examples of this sort of thing include when critics of President Trump are accused of having Trump Derangement Syndrome or when Trump supporters are accused of being mentally ill because of their support for Trump. This is, of course, analogous to how people use claims of disease to demonize migrants.

While it is essential to guard against weaponizing epistemic epidemiology, it is also important to be willing to apply it to outbreaks of epistemic pathologies. To use a terrifying analogy, can you imagine what would happen if the response to a medical pandemic were hijacked by political ideology and the scientific response was derailed? As with disease outbreaks, the appropriate approach is not to demonize those impacted but to take an objective approach aimed at analyzing and (if possible) recommending treatments. While there have long been widespread epistemic pathologies, the rise of mass media and social media has enabled these pathologies to become pandemics, and some are global in nature. National and global conspiracy theories provide excellent examples of the likely presence of pathological epistemic systems, though it is worth considering that even healthy epistemic systems can generate many false beliefs.

As with addressing medical pandemics, addressing epistemic pandemics is essential for the health, safety, and well-being of humanity. While philosophers have long struggled to help inoculate people with good logic, we must accept that a global effort is needed to address what is now a global problem. The first step, creating this subbranch of epistemology, is the easiest.