Doubling down occurs when a person is confronted with evidence against a belief and the belief, rather than being weakened, is strengthened.

A plausible explanation of doubling down rests on Leon Festinger’s classic theory of cognitive dissonance. When a person has a belief that is threatened by evidence, she has two main choices. The first is to adjust her belief in accord with the evidence. If the evidence is plausible and strongly supports the inference that the belief is false, then it is rational to reject the old belief. If the evidence is not plausible or does not strongly support the inference that the belief is false, then it is rational to stick with the threatened belief on the grounds that the threat is not much of a threat.

Assessment of what is plausible evidence can be problematic. In general terms, assessing evidence involves considering how it matches one’s own observations, one’s background information, and credible sources. This assessment can push the matter back: the evidence for the evidence also needs to be assessed, which fuels classic skeptical arguments about the impossibility of knowledge. Every piece of evidence must itself be assessed, which leads to an infinite regress, thus making it impossible to know whether a belief is true. Naturally, retreating into skepticism will not help when a person is responding to evidence against a beloved belief (unless the beloved belief is a skeptical one)—the person wants her beloved belief to be true. As such, someone defending a beloved belief needs to accept that there is some support for the belief—even if the basis is faith or revelation.

In terms of assessing the reasoning, the assessment is objective when the reasoning is deductive. A deductive argument is doing what it is supposed to do when it is valid: if the premises are true, then the conclusion must be true. Deductive arguments can be assessed by such things as truth tables, Venn diagrams, and proofs; thus, the reasoning is objectively good or bad. Inductive reasoning is different. While the premises of an inductive argument are supposed to support the conclusion, inductive arguments are such that true premises only make (at best) the conclusion likely to be true. Inductive arguments vary in strength and, while there are standards for assessing them, reasonable people can disagree about the strength of an inductive argument. People can also embrace skepticism here, specifically the problem of induction: even when an inductive argument has all true premises and the reasoning is as good as inductive reasoning gets, the conclusion could still be false. The obvious problem with trying to defend a beloved belief with the problem of induction is that it also cuts against the beloved belief—while any inductive argument against the belief could have a false conclusion, so could any inductive argument for it. As such, a person who wants to hold to a beloved belief in a way that is justified would seem to need to accept argumentation. Naturally, a person can embrace other ways of justifying beliefs—the challenge is showing that these ways should be accepted. This would seem, ironically, to require argumentation.
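The truth-table test for validity mentioned above can be sketched in code. The helper below is a hypothetical illustration (not a standard library function): it brute-forces every truth assignment and counts an argument form as valid only if no row makes all the premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Truth-table validity test: an argument form is valid iff no
    assignment makes every premise true and the conclusion false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# Modus ponens: P -> Q, P, therefore Q (valid)
mp = valid([lambda p, q: (not p) or q, lambda p, q: p],
           lambda p, q: q, 2)

# Affirming the consequent: P -> Q, Q, therefore P (invalid)
ac = valid([lambda p, q: (not p) or q, lambda p, q: q],
           lambda p, q: p, 2)

print(mp, ac)  # True False
```

This is exactly why deductive assessment is objective: the table either contains a counterexample row or it does not, and reasonable people cannot disagree about which.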

A second option is to reject the evidence without honestly assessing it and rationally considering the logic of the arguments. If a belief is very important to a person, perhaps even central to her identity, then the cost of giving up the belief would be very high. If the person thinks (or feels) that the evidence and reasoning cannot be engaged fairly without risking the belief, then the person can reject the evidence and reasoning using various techniques of self-deception and bad logic (fallacies serve well here).

This rejection has less psychological cost than engaging the evidence and reasoning but is not always consequence free. Since the person probably has some awareness of their self-deception, it needs to be psychologically “justified”, and this results in a strengthening of the commitment to the belief. There are many cognitive biases that help here, such as confirmation bias (seeking, interpreting, and remembering information to confirm existing beliefs) and other forms of motivated reasoning. These can be hard to defend against, since they derange the very mechanisms that are needed to avoid them.

One interesting way people “defend” beliefs is by categorizing the evidence against the beliefs and opposing arguments as unjust attacks, which strengthens their resolve in the face of perceived hostility. After all, people fight harder when they believe they are under attack. Some people even infer they must be right because they are being criticized. As they see it, if they were not right, people would not be trying to show that they are in error. One variation of this is when a person claims they must be right because everyone disagrees with them.

People also, as John Locke argued in his work on enthusiasm, take the strength of their feelings about a belief as evidence for its truth. When people are challenged, they often feel angry, and this makes their feelings even stronger. Hence, when they “check” on the truth of the belief using the measure of feeling, they feel even more strongly that it is true. However, how they feel about it (as Locke argued) is no indication of its truth. Or falsity.

As a closing point, one intriguing rhetorical tactic is to accuse a person who disagrees with you of being the one who is doubling down. This accusation, after all, comes with the insinuation that the person is irrationally holding to a false belief. A reasonable defense is to show that evidence and arguments are being used to support a belief. The unreasonable counter is to employ the very tactics of doubling down and refuse to accept such a response. That said, it is worth considering that one person’s double down is often another person’s considered belief. Or, as it might be put, I support my beliefs with facts and logic while my opponents double down.

While asteroid mining is still science fiction, companies are already preparing to mine the sky. While space mining sounds awesome, lawyers are murdering the awesomeness with legalese. Long ago, President Obama signed the U.S. Commercial Space Launch Competitiveness Act, which seemed to make asteroid mining legal. The key part of the law is that “Any asteroid resources obtained in outer space are the property of the entity that obtained them, which shall be entitled to all property rights to them, consistent with applicable federal law and existing international obligations.” More concisely, the law makes it so that asteroid mining by U.S. citizens would not violate U.S. law.

While this would seem to open the legal doors to asteroid mining, there are still legal barriers, although the law is obviously make-believe: it requires either that people are willing to follow it or that the people with guns are willing to shoot people for not following it. Various space treaties, such as the Outer Space Treaty of 1967, do not give states sovereign rights in space. As such, there is no legal foundation for a state to confer space property rights to its citizens based on its sovereignty. However, the treaties do not seem to forbid private ownership in space—as such, any other nation could pass a similar law that allows its citizens to own property in space without violating the laws of that nation. Obviously enough, satellites are owned by private companies and this could set a precedent for owning asteroids, depending on how clever the lawyers are.

One concern is that if several nations pass such laws and people start mining asteroids, then conflict over valuable space resources will be all but inevitable. In some ways this will be a repeat of the past: the more technologically advanced nations engaged in a struggle to acquire resources in an area where they lack sovereignty. These past conflicts tended to escalate into wars, which is something that must be considered in the final frontier.

One way to try to avoid war over asteroids is new treaties governing space mining. This is, obviously enough, a matter that will be handled by space lawyers, governments, and corporations. Unless, of course, AI kills us all first. Then they can sort out asteroid mining.

While the legal aspects of space ownership are interesting, the moral aspects of ownership are also of concern. While it might be believed that property rights in space are entirely new, this is not the case. While the setting is different, the matter of space property matches the state of nature scenarios envisioned by thinkers like Hobbes and Locke. To be specific, there is an abundance of resources and an absence of authority. As it now stands, while no one can hear you scream in space, there is also no one who can arrest you for space piracy as long as you stay in space.

Using the state of nature model, it can be claimed that there are currently no rightful owners of the asteroids, or it could be claimed that we are all the rightful owners (the asteroids are the common property of all of humanity). 

If there are currently no rightful owners, then the asteroids are there for the taking: an asteroid belongs to whoever can take and hold it. This is on par with Hobbes’ state of nature—practical ownership is a matter of possession. As Hobbes saw it, everyone has the right to all things, but this is effectively a right to nothing—other than what a person can defend from others. As Hobbes noted, in such a scenario profit is the measure of right and who is right is to be settled by the sword.

While this is practical, brutal and realistic, it is a bit problematic in that it would, as Hobbes also noted, lead to war. His solution, which would presumably work as well in space as on earth, would be to have sovereignty in space. This would shift the war of all against all in space (of the sort that is common in science fiction about asteroid mining) to a war of nations in space (which is also common in science fiction). The war could, of course, be a cold one fought economically and technologically rather than a hot one fought with mass drivers and lasers.

If asteroids are regarded as the common property of humanity, then Locke’s approach could be taken. As Locke saw it, God gave everything to humans in common, but people must acquire things from the common property to make use of it. Locke gives a terrestrial example of how a person needs to make an apple her own before she can benefit from it. In the case of space, a person would need to make an asteroid her own to benefit from the materials it contains.

Locke sketched out a basic labor theory of ownership—whatever a person mixes her labor with becomes her property. As such, if asteroid miners located an asteroid and started mining it, then the asteroid would belong to them.  This does have some appeal: before the miners start extracting the minerals from the asteroid, it is just a rock drifting in space. Now it is a productive mine, improved from its natural state by the labor of the miners. If mining is profitable, then the miners would have a clear incentive to grab as many asteroids as they can, which leads to the moral problem of the limits of ownership.

Locke does set limits on what people can take in his proviso: those who take from the common resources must leave as much and as good for others. When describing this to my students, I always use an analogy to a party: since the food is for everyone, everyone has a right to the food. However, taking it all or taking the very best would be wrong (and rude). While this proviso is ignored on earth, the asteroids could provide us with a fresh start in terms of dividing up the common property of humanity. After all, no one has any special right to claim the asteroids—so we all have equal good claims to the resources they contain.

As with earth resources, some will contend that there is no obligation to leave as much and as good for others in space. Instead, those who get there first will contend that ownership should rest on the principle that whoever grabs a thing first and can keep it is the “rightful” owner. Of course, if someone then grabs it from them, they would presumably see that as a cruel injustice.

Those who take this view would probably argue that those who get their equipment into space would have done the work (or put up the money) and (as argued above) would be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to mine the asteroids. Naturally, the folks who lack the resources to compete will remain poor—their poverty will, in fact, disqualify them from owning any of the space resources much in the way poverty effectively disqualifies people on earth from owning earth resources.

While the selfish approach will be appealing to those who can grab the asteroids, arguments can be made for sharing them. One reason is that those who will mine the asteroids did not create the means to do so from nothing. Reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to human civilization and paying this off would involve also contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that the successful miners will benefit humanity when their profits “trickle down” from space. Sadly, as on earth, gravity does not seem to affect money in terms of trickling it down. It always seems to go upwards.

Another way to argue for sharing the resources is to use an analogy to a buffet line. Suppose I am first in line at a buffet. This does not give me the right to devour everything I can with no regard for the people behind me. It also does not give me the right to grab whatever I cannot eat myself and sell it to those who had the misfortune to be behind me in line. As such, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line.

Naturally, these arguments for sharing can be countered by the usual arguments in favor of selfishness. While it is tempting to think that the vastness of space will overcome selfishness (that is, there will be so much that people will realize that not sharing would be absurd and petty), this seems unlikely—the more there is, the greater the disparity is between those who have and those who have not. On this pessimistic view we already have all the moral and legal tools we need for space—it is just a matter of changing the wording a bit to include “space.”

In the previous essay on threat assessment, I looked at the influence of availability heuristics and fallacies related to errors in reasoning about statistics and probability. This essay continues the discussion by exploring the influence of fear and anger on threat assessment.

A rational assessment of a threat involves properly considering how likely it is that a threat will occur and, if it occurs, how severe the consequences might be. As might be suspected, the influence of fear and anger can cause people to engage in poor threat assessment that overestimates the likelihood or severity of a threat.

One starting point for anger and fear is the stereotype. Roughly put, a stereotype is an uncritical generalization about a group. While stereotypes are generally thought of as being negative (that is, attributing undesirable traits such as laziness or greed), there are also positive stereotypes. They are not positive in that the stereotyping itself is good. Rather, the positive stereotype attributes desirable qualities, such as being good at math or skilled at making money. While it makes sense to think that stereotypes that provide a foundation for fear would be negative, they often include a mix of negative and positive qualities. For example, a feared group might be cast as stupid and weak, yet somehow also incredibly cunning and dangerous.

Stereotyping leads to mistakes similar to those that arise from hasty generalization: reasoning about a threat based on stereotypes will often result in errors. The defense against a stereotype is to seriously inquire whether the stereotype is true or not.

Stereotyping is useful for demonizing. Demonizing, in this context, involves unfairly portraying a group as evil and dangerous. This can be seen as a specialized form of hyperbole in that it exaggerates the evil of the group and the danger it represents. Demonizing is often combined with scapegoating—blaming a person or group for problems they are not responsible for. A person can demonize on their own or be subject to the demonizing rhetoric of others.

Demonizing presents a clear threat to rational threat assessment. If a group is demonized successfully, it will be (by definition) seen as more evil and more dangerous than it really is. As such, both the assessment of the probability and severity of the threat will be distorted. For example, the demonization of Muslims by various politicians and pundits distorts threat assessments.

The defense against demonizing is like the defense against stereotypes—a serious inquiry into whether the claims are true. It is worth noting that what might seem to be demonizing might be an accurate description. This is because demonizing is, like hyperbole, exaggerating the evil of and danger presented by a group. If the description is true, then it would not be demonizing. Put informally, describing a group as evil and dangerous need not be demonizing. For example, descriptions of ISIS as evil and dangerous were generally accurate. As are descriptions of evil and dangerous billionaires.

While stereotyping and demonizing are rhetorical devices, there are also fallacies that distort threat assessment. Not surprisingly, one is scare tactics (also known as appeal to fear). This fallacy involves substituting something intended to create fear in the target in place of evidence for a claim. While scare tactics has other uses, it can be used to distort threat assessment. One aspect of its distortion is the use of fear—when people are afraid, they tend to overestimate the probability and severity of threats. Scare tactics is also used to feed fear—one fear can be used to get people to accept a claim that makes them even more afraid.

One thing that is especially worrisome about scare tactics in the context of terrorism is that in addition to making people afraid, it is also routinely used to “justify” encroachments on rights, massive spending, and the abandonment of moral values. While courage is an excellent defense against this fallacy, asking two important questions also helps. The first is to ask, “should I be afraid?” and the second is to ask, “even if I am afraid, is the claim actually true?” For example, scare tactics has been used to “support” the claim that refugees should not be allowed into the United States. In the face of this tactic, one should inquire whether or not there are grounds to be afraid of refugees and also inquire into whether or not an appeal to fear justifies banning refugees.

It is worth noting that just because something is scary or makes people afraid it does not follow that it cannot serve as legitimate evidence in a good argument. For example, the possibility of a fatal head injury from a motorcycle accident is scary but is also a good reason to wear a helmet. The challenge is sorting out “judgments” based merely on fear and judgments that involve good reasoning about scary things.

While fear makes people behave irrationally, so does anger. While anger is an emotion and not a fallacy, it does provide the fuel for the appeal to anger fallacy. This fallacy occurs when something that is intended to create anger is substituted in place of evidence for a claim. For example, a demagogue might work up a crowd’s anger at illegal migrants to get them to accept absurd claims about building a wall along a massive border.

Like scare tactics, the use of an appeal to anger distorts threat assessment. One aspect is that when people are angry, they tend to reason poorly about the likelihood and severity of a threat. For example, a crowd that is enraged against illegal migrants might greatly overestimate the likelihood that the migrants are “taking their jobs” and the extent to which they are “destroying America.” Another aspect is that the appeal to anger, in the context of public policy, is often used to “justify” policies that encroach on rights and do other harms. For example, when people are angry about a mass shooting, proposals follow to limit gun rights that had no relevance to the incident in question. As another example, the anger at illegal migrants is often used to “justify” policies that will harm the United States. As a third example, appeals to anger are often used to justify policies that would be ineffective at addressing terrorism and would do far more harm than good.

It is important to keep in mind that if a claim makes a person angry, it does not follow that the claim cannot be evidence for a conclusion. For example, a person who learns that her husband is having an affair with an underage girl would probably be very angry. But this would also serve as good evidence for the conclusion that she should report him to the police and divorce him. As another example, the fact that illegal migrants are here illegally and knowingly employed by businesses because they can be more easily exploited than American workers can make someone mad, but this can also serve as a premise in a good argument in favor of enforcing (or changing) the laws.

One defense against appeal to anger is good anger management skills. Another is to seriously inquire into whether there are grounds to be angry and whether any evidence is offered for the claim. If all that is offered is an appeal to anger, then there is no reason to accept the claim based on the appeal.

The rational assessment of threats is important for practical and moral reasons. Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. There is also the concern about the harm of creating fear and anger that are unfounded. In addition to the psychological harm to individuals that arise from living in fear and anger, there is also the damage stereotyping, demonizing, scare tactics and appeal to anger do to society. While anger and fear can unify people, they most often unify by dividing—pitting us against them. I urge people to think through threats rather than giving in to the seductive demons of fear and anger.

When engaged in rational threat assessment, there are two main factors that need to be considered. The first is the probability of the threat. The second is the severity of the threat. These two can be combined into one sweeping question: “how likely is it that this will happen and, if it does, how bad will it be?”

Making rational decisions about dangers involves considering both factors. For example, consider the risks of going to a crowded area such as a movie theater or school. There is a high probability of being exposed to the cold virus, but it is (for most people) not a severe threat. There is a low probability that there will be a mass shooting on my campus, but it is a high severity threat.
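The two factors can be combined into a rough expected-harm score (probability times severity). The sketch below is a minimal illustration of that combination; the numbers are made-up illustrative values, not data from the text.

```python
def expected_harm(probability, severity):
    # Combine the two threat-assessment factors: annual probability
    # of the event and severity of harm (arbitrary 0-100 scale).
    return probability * severity

# Illustrative, invented values for the two examples:
cold = expected_harm(0.9, 2)            # very likely, but mild
shooting = expected_harm(0.00001, 100)  # very unlikely, but catastrophic

print(cold, shooting)
```

On this crude measure the cold scores higher, but a single number conceals a value judgment: how much low-probability, catastrophic risk a person is willing to tolerate is not settled by multiplication alone.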

Our survival as a species seems to have been despite our poor skills at rational threat assessment. To be specific, the worry people feel about a threat generally does not match up with the probability of the threat occurring. People seem somewhat better at assessing severity, though we often get this wrong.

One excellent example of poor threat assessment is the fear Americans have about terrorism. Between 1975 and 2025, 3,577 Americans died as the result of terrorism, which accounted for 0.35% of all murders in the US in that time. If you are in the United States now, your odds of being killed in such an attack are about 1 in 4 million per year. This includes all forms of terrorism, although you would now be statistically most likely to be killed by right-wing terrorists.
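The 1-in-4-million figure can be roughly reconstructed from the numbers above. The population value below is an assumption (a round figure for the recent US population), not something from the text:

```python
deaths = 3_577            # US terrorism deaths, 1975-2025 (from the text)
years = 50
population = 330_000_000  # assumed round figure for the US population

deaths_per_year = deaths / years       # about 72 deaths per year
one_in = population / deaths_per_year  # about 4.6 million

print(f"roughly 1 in {one_in / 1_000_000:.1f} million per year")
```

The result lands near the text’s “about 1 in 4 million,” with the gap attributable to the assumed population figure and rounding.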

While being killed by terrorists in the United States is unlikely, some people are terrified by the possibility (which is, of course, the goal of terrorism). Given that an American is more likely to be killed while driving than by a terrorist, it might be wondered why people are so bad at threat assessment. The answer, in terms of feeling fear vastly out of proportion to probability, involves a cognitive bias and some classic fallacies.

People (probably) follow general rules when they estimate probabilities and the ones we use unconsciously are called heuristics. While the right way to estimate probability is to use statistical methods, people often fall victim to the availability heuristic. This is when a person unconsciously assigns a probability based on how often they think of something. While something that occurs often is likely to be thought of often, thinking of something more often does not make it more likely to occur.

After an act of terrorism, people think about terrorism more often and tend to unconsciously believe that the chance of terror attacks occurring is higher than it really is. To use a non-terrorist example, when people hear about a shark attack, they tend to think that the chances of it occurring are high—even though the probability is low (driving to the beach is much more likely to kill you). The defense against this bias is to find reliable statistical data and use that as the basis for inferences about threats—that is, think it through rather than trying to feel through it. This is, of course, difficult: people tend to regard their feelings, however unwarranted, as the best evidence—despite usually being the worst evidence.

People are also misled about probability by fallacies. One is the spotlight fallacy. The spotlight fallacy is committed when a person uncritically assumes that all (or many) members or cases of a certain type are like those that receive the most attention or coverage in the media. After an incident involving terrorists who are Muslim, media attention will focus on that fact, often leading people who are poor at reasoning to infer that most Muslims are terrorists. This is the exact sort of mistake that would occur if it were inferred that most veterans are terrorists because the media covered a terrorist who was a military veteran. If people believe that, for example, most Muslims are terrorists, then they will make incorrect inferences about the probability of a terrorist attack by Muslims in the United States. This is distinct from someone simply lying about, for example, Muslims and claiming they are terrorists because the person is a bigot or wants to exploit the fear they create.

Anecdotal evidence is another fallacy that contributes to poor inferences about the probability of a threat. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy also occurs when someone rejects reasonable statistical data supporting a claim in favor of one example or a small number of examples that go against the claim. This fallacy is like hasty generalization and a similar sort of error is committed, namely drawing an inference based on a sample that is inadequate in size relative to the conclusion. The main difference between hasty generalization and anecdotal evidence is that anecdotal evidence uses a story (anecdote) as the sample. Out in the wild, it can be difficult to tell whether a fallacy is a hasty generalization or anecdotal evidence; fortunately, what matters is recognizing that a fallacy is a fallacy, even if it is not clear which one it is.

People fall victim to this fallacy because stories and anecdotes usually have more emotional and psychological impact than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population, or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people most often fall for this fallacy because they want to believe that what is true in the anecdote is true for the whole population.

In the case of terrorism, people use both anecdotal evidence and hasty generalization: they point to a few examples of terrorism or tell a story about a specific incident, and then draw an unwarranted conclusion about the probability of a terrorist attack occurring in the United States. For example, people point out that terrorists have masqueraded as refugees and infer that refugees in general present a major threat to the United States. Or they might tell the story about one attacker in San Bernardino who arrived in the states on a K-1 (“fiancé”) visa and make unwarranted conclusions about the danger of the entire visa system.

One last fallacy is misleading vividness. This occurs when a very small number of particularly dramatic events are taken to outweigh statistical evidence. This sort of “reasoning” is fallacious because the mere fact that an event is exceptionally vivid or dramatic does not make the event more likely to occur, especially in the face of statistical evidence to the contrary.

People often accept this sort of “reasoning” because particularly vivid or dramatic cases usually make a very strong impression on the mind. For example, mass shootings are vivid and awful, so it is hardly surprising that people often feel they are very much in danger from such attacks. Another way to look at this fallacy in the context of threats is that a person conflates the severity of a threat with its probability. That is, the worse the harm, the more likely a person feels that it will occur. But the vividness of a harm has no connection to the probability it will occur.

That said, considering the possibility of something dramatic or vivid occurring is not always fallacious. For example, a person might decide to never go sky diving because hitting the ground because of a failed parachute would be very dramatic. If he knows that, statistically, the chances of the accident happening are very low but he considers even a small risk unacceptable, then he would not be committing this fallacy. This then becomes a matter of value judgment—how much risk a person is willing to tolerate relative to the severity of the potential harm.

The defense against these fallacies is to use a proper statistical analysis as the basis for inferences about probability. As noted above, there is still the psychological problem: people tend to act on the basis of how they feel rather than what the facts show.

Such rational assessment of threats is important for both practical and moral reasons. The matter of terrorism is no exception to this.  Since society has limited resources, rationally using them requires considering the probability of threats rationally—otherwise resources are being misspent. For example, spending billions to counter an unlikely threat while spending little on major causes of harm would be irrational (if the goal is to protect people from harm). There is also the concern about the harm of creating unfounded fear. In addition to the psychological harm to individuals, there is also the damage to the social fabric. While creating unwarranted fear is useful for grifters, pundits and politicians, it is bad for the rest of us and thinking things through is a way to protect yourself from needless fear and those who wish to exploit it.  

Homo sum, humani nihil a me alienum puto. (“I am human; nothing human is alien to me.”)

-Terence

 

Way back in the fall of 2015, a free yoga class at the University of Ottawa was suspended due to concern it might have been cultural appropriation. A Centre official, responding to the prompting complaint, noted that many cultures, including the culture from which yoga originated, “have experienced oppression, cultural genocide and diasporas due to colonialism and western supremacy … we need to be mindful of this and how we express ourselves while practicing yoga.”  To fix this, they attempted to “rebrand” the class as “mindful stretching.” Due to issues regarding a French translation, the rebranding failed and the class was suspended.

Back then, I initially assumed it was absurd satire lampooning what was then called political correctness. It was real, but still absurd. But, as absurdities sometimes do, it provided a context for discussing a serious subject—in this case cultural appropriation.

The concept of cultural appropriation is controversial, but the idea is simple. In general terms, cultural appropriation takes place when a dominant culture takes ("appropriates") from a marginalized culture for morally problematic reasons. For example, white college students have been accused of cultural appropriation (and worse) when they have used parts of American black culture for theme parties. Some on the left (or "the woke," as their detractors call them) see cultural appropriation as morally wrong. Some on the right think the idea of cultural appropriation is ridiculous and that people should just get over past oppressions and forget about them. For them, the important thing is to address the cruel oppression of white, straight men, such as the President, Elon Musk, various billionaires, most CEOs, and so on.

While I am still no fan of what can justly be considered performative political correctness, there are moral problems arising from cultural appropriation. One common type of cultural appropriation is intended to lampoon aspects of the appropriated culture. While comedy, as Aristotle noted, is a species of the ugly, it should not enter the realm of what is hurtful. Doing so ceases to be comedy and becomes insulting mockery. An excellent (or awful) example of this would be the use of blackface by people who are not black. Naturally, specific cases need due consideration; it can be aesthetically legitimate to use the shock of apparent cultural appropriation to make a point. The 2008 film Tropic Thunder does this well.

It can be objected that lampooning is exempt from moral concerns about insulting people. It could even be argued that there is nothing wrong with engaging in insults. The challenge is making a consistent case for this that would allow the same insults and mockery of one’s own culture.

Another type of cultural appropriation is misusing symbols. For example, an underwear model dancing around in a war bonnet is not intended as lampooning but is an insult to the culture that sees a war bonnet as an honor to be earned. It would be comparable to having underwear models prancing around displaying unearned honors such as the Purple Heart, a Silver Star, or the Medal of Honor. This misuse can be unintentional—people often use cultural marks of honor as “cool accessories” without any awareness of what they mean. While people should, perhaps, do some research before borrowing from other cultures, innocent ignorance is certainly forgivable.

It could be objected that such misuse is not morally problematic since there is no real harm being done when a culture is insulted by the misuse of its symbols. This, of course, would need to be held to consistently—a person making this argument to allow the misuse of the symbols of another culture would need to accept a comparable misuse of their own sacred symbols as morally tolerable. I am not addressing the legality of this matter—although cultures do often have laws protecting their own symbols, such as military medals or religious icons.

While it would be easy to run through a multitude of cases that would be considered cultural appropriation, I prefer to focus on presenting a general principle about what would be morally problematic cultural appropriation. Given the above examples and consideration of the others that can be readily found, what seems to make appropriation inappropriate is the misuse or abuse of the cultural elements. That is, there needs to be meaningful harm inflicted by the appropriation. This misuse or abuse could be intentional (which would make it morally worse) or unintentional (which might make it an innocent error).

It could be contended that any appropriation of culture is harmful by using an analogy to trademark, patent, and copyright law. A culture could be regarded as holding the moral “trademark”, “patent” or “copyright” (as appropriate) on its cultural items and thus people who are not part of that culture would be inflicting harm by appropriating these items. This would be analogous to another company appropriating, for example, Disney’s trademarks, violating the copyrights held by Random House or the patents held by Google. Culture could be thus regarded as a property owned by members of that culture and passed down as a matter of inheritance. This would seem to make any appropriation of culture by outsiders morally problematic—although a culture could give permission for such use by intentionally sharing the culture. Those who are fond of property rights should find this argument appealing.

One way to counter the ownership argument is to note that humans are born into culture by chance and any human could be raised in any culture. As such, it could be claimed that humans have an ownership stake in all human cultures and thus are entitled to adopt culture as they see fit. The culture should, of course, be shown proper respect. This would be a form of cultural communism—which those who like strict property rights might find unappealing.

A response to this is to note that humans are also born by chance to families and any human could be designated the heir of a family, yet there are strict rules governing the inheritance of property. As such, cultural inheritance could work the same way—only the true heirs can give permission to others to use the culture. This should appeal to those who favor strict protections for inherited property.

My own inclination is that humans are the inheritors of all human culture and thus we all have a right to the cultural wealth our species has produced.  Naturally, individual ownership of specific works should be properly respected. However, as with any such great gift, it must be treated with respect and used appropriately—rather than misused through appropriation. So, cancelling the yoga class was absurd. But condemning misuse through appropriation is correct.

In The Art of the Deal, Donald Trump calls one of his rhetorical tools "truthful hyperbole." He defends and praises it as "an innocent form of exaggeration — and a very effective form of promotion." As a promoter, Trump used this technique. He now uses it as president.

Hyperbole is an extravagant overstatement that can be positive or negative in character. When describing himself and his plans, Trump makes extensive use of positive hyperbole: he is the best and every plan of his is the best. He also makes extensive use of negative hyperbole—often to a degree that crosses the line from exaggeration to fabrication. In any case, his concept of “truthful hyperbole” is worth considering.

From a logical standpoint, "truthful hyperbole" is an impossibility. This is because hyperbole is, by definition, not true. Hyperbole is not merely a matter of using extreme language. After all, extreme language might accurately describe something. For example, describing pedophiles as horrible would be spot on. Hyperbole is a matter of exaggeration that goes beyond the facts. For example, describing Donald Trump as the evilest being in all of space and time would be hyperbole. As such, hyperbole is always untrue. Because of this, the phrase "truthful hyperbole" means the same as "accurate exaggeration," which reveals the problem.

Trump, a master of rhetoric, is right about the rhetorical value of hyperbole—it can have great psychological force. It, however, lacks logical force, as it provides no logical reason to accept a claim. Trump is right that there can be innocent exaggeration. I will now turn to the ethics of hyperbole.

Since hyperbole is (by definition) untrue, there are two main concerns. One is how far the hyperbole deviates from the truth. The other is whether exaggeration is harmless. While a hyperbolic claim is necessarily untrue, it can deviate from the truth in varying degrees. As with fish stories, there is some moral wiggle room in terms of the proximity to the truth. While there is no exact line (to require that would be to fall into the line drawing fallacy) that defines the exact boundary of morally acceptable exaggeration, some untruths surely go beyond that line. This line varies with the circumstances—the ethics of fish stories, for example, differs from the ethics of job interviews.

While hyperbole is untrue, it must have some anchor in the truth. If it does not, then it is not exaggeration but pure fabrication. This is the difference between containing some truth and being devoid of truth. Naturally, hyperbole can be mixed in with fabrication. For example, consider Trump’s claim about the 9/11 attack that “in Jersey City, N.J., where thousands and thousands of people were cheering as that building was coming down. Thousands of people were cheering.”

If Trump had claimed that some people in America celebrated the terrorist attacks on 9/11, that is almost certainly true; there was surely at least one person who did so. If he had claimed that dozens of people in America celebrated the 9/11 attacks and that this was broadcast on TV, this might be an exaggeration (we do not know how many people in America celebrated) combined with a fabrication (the TV part). If he had claimed that hundreds did so, the exaggeration would be considerable. But Trump, in his usual style, claimed that thousands and thousands celebrated. This exaggeration might be extreme. Or it might not; thousands might have celebrated in secret, although this is a wildly implausible claim. However, the claim that people were filmed celebrating in public and that video existed for Trump to see is a fabrication rather than an exaggeration.

One way to help determine the ethical boundaries of hyperbole is to consider the second concern, namely whether the hyperbole (untruth) is harmless or not. Trump is right to claim there can be innocent forms of exaggeration. This can be taken as exaggeration that is morally acceptable and can be used as a basis to distinguish such hyperbole from unethical lying.

One realm in which exaggeration is innocent is storytelling. Aristotle, in the Poetics, notes that "everyone tells a story with his own addition, knowing his hearers like it." While a lover of truth, Aristotle recognized the role of untruth in good storytelling, saying that "Homer has chiefly taught other poets the art of telling lies skillfully." The telling of tall tales featuring even extravagant exaggeration is morally acceptable because the tales are intended to entertain; that is, the intention is good. In the case of exaggerating in stories to entertain the audience, or adding a small bit of rhetorical "shine" to polish a point, the exaggeration is harmless. This makes sense of Trump's view if one thinks he sees himself as an entertainer.

In contrast, exaggerations that have a malign intent would be morally wrong. Exaggerations that are not intended to be harmful yet prove to be so would also be problematic, but discussing the complexities of intent and consequences would take this essay too far afield.

The extent of the exaggeration is also relevant here: the greater the exaggeration aimed at malign purposes or having harmful consequences, the worse it is morally. After all, if deviating from the truth is (generally) wrong, then deviating from it more is worse. In the case of Trump's claim about thousands of people celebrating on 9/11, this untruth fed into fear, racism and religious intolerance. As such, it was not an innocent exaggeration but a malign untruth.

While the American right favors tax cuts, the left sometimes proposes tax increases. One argument advanced by the right against increasing taxes is the demotivation argument. The gist of the argument is that if their taxes are increased, the rich will become demotivated and this will have negative consequences. Since these negative consequences should be avoided, the conclusion is that taxes should not be increased.

In assessing this reasoning, there are two major points of concern. One is whether a tax increase would destroy the motivation of the upper class. The other deals with the negative consequences, their nature, their likelihood of occurring and the extent and scope of the harm. I will begin with the alleged consequences.

The alleged consequences are many and varied. One is based on the claim that the top economic class includes the top innovators of society and that if they are demotivated, there will be less innovation. This could range from there being no new social media platforms to there being no new pharmaceuticals. While this is a point of concern, it assumes that innovation arises primarily from the top economic class, which can be tested. While some top earners are innovators, innovation also comes from the lower economic classes, such as the people doing research and engineering. The idea that the rich are the innovators matches the fiction of Ayn Rand but seems to miss the way research and development usually occurs.

Another alleged consequence rests on the claim that the upper class serves as the investors who provide the capital that enables the economy to function. Since they control the capital, this is a reasonable concern. If Americans with the most money decided to reduce or stop investing, then the investment economy would need to rely on foreign capital or what could be provided by the lower classes. Since the lower classes have far less money (by definition), they would not be able to provide the funds. There are, of course, foreign investors who would happily take the place of the wealthy Americans, so the investment economy would probably still roll along. Especially since American investors might find the idea of losing out to foreign investors sufficient motivation to overcome the demotivation of a tax increase.

There is also the claim that the upper class contains the people who do the important things, like brain surgery and creating the new bubble that will destroy the world economy next time around. While this has some appeal, much of the important stuff is done by people who are not in the upper class. Again, the idea that the economic elite do all the really important stuff while the rest of us are takers rather than makers is yet another Randian fantasy.

Fairness does, however, require that these concerns be properly investigated. If it can be shown that the upper class is as critical as its defenders claim, then my assertions can be refuted. Of course, it is worth considering that much of the alleged importance of the upper class arises from the fact that it has a disproportionate share of the wealth, and that it would be far less important if the distribution were not so grotesquely imbalanced. As such, a tax increase could decrease the importance of the economic elites. I will now turn to the matter of whether a tax increase would demotivate the rich.

An easy and obvious response to the claim that a relatively small tax increase would demotivate the rich is that the rest of us work jobs, innovate, invest and do important things for vastly less money than those at the top. Even if the rich paid slightly more taxes, their incomes would still vastly exceed ours. And if we can find the motivation to keep going despite our low incomes, then the rich can also do so. When I worked at a minimum wage job, I was motivated to go to work. When I was an adjunct making $16,000 a year, I was still motivated to go to work. Now that I am a professor, I am still motivated to go to work.

It could be replied that those of us in the lower classes are motivated because we need the income to survive. We need to work to buy food, medicine, shelter and so on. Those who are so well off that they do not need to work to survive, it could be claimed, also have the luxury of being demotivated by an increase in their taxes. Whereas someone who must earn her daily bread at a crushing minimum wage (or less) job must get up and go to work, the elite can allow themselves to be broken by a slight tax increase and decide to stop investing, stop innovating, and stop doing important stuff.

One reply is that it seems unlikely that the rich would be broken by a tax increase. Naturally, a crushing increase would be a different story—but the American left does not seriously suggest imposing truly crushing tax burdens on the rich. After all, crushing burdens are for the poor. Another reply is that if the current rich become demotivated and give up, there are many who would be happy to take their place—even if it means paying slightly higher taxes on a vastly increased income. So, we would just get some new rich folks to replace the demotivated slackers. The invisible hand of the market to the rescue again.

 

One popular narrative on the American right is that the West is engaged in a “clash of civilizations” with Islam. Some phrase it in terms of Islam being at war with the West. Not surprisingly, the terrorist groups that self-identify as Muslim would also like it to be a war between all of Islam and the West.

There are various psychological reasons to embrace this narrative. Seeing oneself on the side of good in an epic struggle is appealing. This provides meaning and a sense of significance often lacking in life. There is also the sickly-sweet lure of racism, bigotry and religious intolerance. These are strong motivating factors to see others as an implacable enemy—inferior in every way, yet also somehow demonically dangerous and devilishly clever.

There are also powerful motivations to get others to accept this narrative. Leaders can use it as political fuel to gain power and justify internal oppression and external violence. It also makes an excellent distraction from other problems. As such, it is no surprise that both American politicians and terrorist leaders are happy to push the West vs. Islam narrative. Doing so serves both their agendas.

While the psychology and politics of the narrative are both important, I will focus on the idea of the West being at war with Islam. One obvious starting point is to try to sort out what this might mean.

It might seem easy to define the West—this could be done by listing the usual suspects, such as the United States, France, Germany, Canada and so on. However, it can get a bit fuzzy in areas. For example, Turkey is predominantly Muslim but is part of NATO and considered by some to be part of the Western bloc. Russia is not part of the classic West but has been the target of terrorist groups. But perhaps it is possible to just go with Classic West and ignore the finer points of this war.

Establishing that there is no such war is easy. While many terrorist groups that claim to be fighting for Islam have declared open war on the West, most Muslims have not done so. As such, the West is only at war with some Muslims and not with Islam. Likewise, Islam is not at war with the West, but some Muslims are. Muslims are also at war with other Muslims; after all, Daesh killed more Muslims than it killed Westerners. The West could, of course, establish a full war on Islam on its own. For example, President Trump could get Congress to declare war on Islam. Or he could just launch a vast not-war on Islam by himself.

There are practical concerns about taking the notion of a war on Islam seriously. One concern is that while there are some predominantly Muslim nations hostile to the United States (such as what is left of Iran), there are others that are nominal allies (such as Jordan, Pakistan, Iraq, Egypt, and Saudi Arabia) and even one that is part of NATO (Turkey). As such, a war against Islam would entail a war against these allies. That seems morally and practically problematic.

A second concern is that many friendly and neutral countries have Muslim populations. These countries might take issue with a war against their citizens. There is also the fact that the United States has Muslim citizens, and waging a war on United States citizens could prove somewhat problematic both legally and practically. Although, as its numerous apparent crimes show, the Trump regime appears to have little regard for the law and practical concerns.

Donald Trump has shared his various thoughts on this matter. He once considered requiring Muslims to be registered in a special database and to identify their faith. Religious freedom, one suspects, is seen as applying to only the right religions.

A third practical concern is determining the victory conditions for such a war. “Classic” war typically involves trying to get the opposing country to surrender or agree to conditions that end the war. However, a war against a religion would be inherently different. One horrific victory condition might be the elimination of Islam, either through extermination or conversion. This sort of thing has been attempted against faiths and people in the past; we now usually call this genocide.

However, such exterminations are morally wicked—to say the least. Alternatively, Muslims might be rounded up, as happened to Japanese Americans in WWII and kept in concentrated areas. In addition to being impractical, this is also morally horrifying.

Victory might be defined in less extreme ways, such as getting Islam to surrender and creating agreements to behave in ways that the West approves. This is, after all, how traditional wars end. There are, of course, many practical problems here. These would include the logistics of Islam’s surrender (since there is no unified leadership of Islam) and working out the agreements across the world. It is unclear what it would be for an entire religion to surrender.

Or perhaps there is no intention to achieve victory: the war on Islam is used to justify internal suppression of rights and liberties, to manipulate voters, to ensure that money keeps flowing into the military-security complex, and to provide pretexts for military operations. As such, the war will continue until another opponent can be found to fill the role of adversary.  The USSR once served ably in this role, but Trump seems to like Putin too much to use Russia as an enemy. China has some potential, but our economies are bound together.

One reasonable counter to the above is to narrow the scope: rather than the implausible ideas of a war with Islam or a clash of civilizations, a more serious approach is a war with radical Islam rather than all of Islam. This narrower approach could avoid many of the above practical problems, assuming that our Muslim allies are not radicals and that our and allied Muslim citizens are (mostly) not radicals. This would enable the West to avoid having to wage war on allies and its own citizens, which would be awkward.

While this narrowed scope is an improvement, there are still some obvious concerns. One is working out who counts as the right (or wrong) sort of radical. After all, a person can hold to a radical theology yet have no interest in harming anyone else. But perhaps "radical Islam" could be defined in terms of groups that engage in terrorist and criminal acts and that also self-identify as Muslim. If this approach is taken, then there would seem to be no legitimate justification for labeling this a war on Islam or even radical Islam. It would, rather, be a conflict with terrorists.

There are some practical reasons for avoiding even the “war on radical Islam” phrasing. One is that using the phrase provides terrorist groups with propaganda: they can claim that the West is at war with Islam, rather than being engaged in conflict with terrorists who claim to operate under the banner of Islam. The second is that the use of the phrase alienates and antagonizes Muslims who are not terrorists, thus doing harm in the efforts to win allies (or at least  keep people neutral).

It might be objected that refusing to use “radical Islam” is a sign of political correctness, DEI, wokeness or cowardice. While this is a beloved talking point for some, it has no merit as serious criticism. As noted above, using the term merely serves to benefit the terrorists and antagonize potential allies. Insisting on using the term is a strategic error that is often driven by bravado, ignorance and intolerance. As such, the West should not engage in a war on Islam or even radical Islam. Fighting terrorists is, of course, another matter entirely.

While the classic anti-migrant playbook focuses on falsely accusing migrants of spreading disease, committing crimes, stealing jobs, and using up resources, there is also the more recent addition of accusing migrants, especially Muslim migrants, of being terrorists. This is then used to "justify" anti-migrant actions.

On the one hand, it is tempting to dismiss this as political posturing and pandering to fear, racism and religious intolerance. On the other hand, it is worth considering legitimate worries under the posturing and the pandering. One worry is that terrorists could masquerade as refugees. Another worry is that refugees might be radicalized and become terrorists.

In politics, it is unusual for people to operate based on consistently held principles. Instead, views usually reflect how a person feels or what they think about the political value of a position. However, a proper moral assessment requires considering migration in terms of general principles and consistency.

In the case of the refugees, the general principle justifying excluding them would be something like this: it is morally acceptable to exclude groups who might include people who might pose a threat. This principle seems, in general, reasonable. After all, excluding people who might present a threat serves to protect people from harm.

Of course, this principle is incredibly broad and would justify excluding almost anyone and everyone. After all, nearly every group of people (tourists, refugees, out-of-staters, men, Christians, atheists, cat fanciers, football players, and so on) includes people who might pose a threat. While excluding everyone would increase safety, that would be absurd. As such, this general principle should be refined to consider, for example, the odds that a dangerous person will be in the group, the harm such a person is likely to do, and the likely harms from excluding that group.

According to the Cato Institute, "A total of 237 foreign-born terrorists were responsible for 3,046 murders on US soil from 1975 through the end of 2024. The chance of a person perishing in a terrorist attack committed by a foreigner on US soil over those 50 years was about 1 in 4.6 million per year. The hazards posed by foreigners who enter in different ways vary considerably. For instance, the annual chance of being murdered in an attack committed by an illegal immigrant terrorist is zero." Thus, arguing against immigration based on an alleged threat of terrorism is absurd. This is not to say that we should not be vigilant, just that if the goal is to protect Americans, then the resources could be better used in other ways, such as funding health care.
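The quoted figure can be roughly sanity-checked. The deaths and time span come from the passage above; the average US population over 1975 to 2024 (about 280 million) is my assumption for illustration.

```python
# Rough check of the "1 in 4.6 million per year" figure. The 3,046
# deaths over 50 years are from the quoted Cato passage; the average
# US population over the period (~280 million) is an assumption.
deaths = 3046
years = 50
avg_population = 280_000_000  # assumed average, 1975-2024

annual_deaths = deaths / years                # about 61 per year
annual_risk = annual_deaths / avg_population  # per-person annual risk
one_in = 1 / annual_risk

print(f"about 1 in {one_in / 1e6:.1f} million per year")
```

With that assumed population, the arithmetic lands close to the quoted 1 in 4.6 million, which is the point: the risk is tiny compared to everyday hazards.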

It might be countered, using hyperbolic rhetoric, that if even one terrorist gets into the United States, that would be too many. While one bad thing is one too many, would it be reasonable to operate on a principle that the possibility of even one bad outcome warrants strict regulation? That would generally seem to be unreasonable. This principle would justify banning guns, peanuts, swimming pools and cars. It would also justify banning tourists and visitors from other states. After all, tourists and people from other states do bad things in states from time to time. It would also seem to justify banning birth. After all, we can be sure at least one person born in the future will be a murderer. As such, the idea of basing policy on the notion that one is too many is absurd.

There are, of course, concerns about political risk. A politician who supports allowing Muslim migrants to come to America will be savaged by the right if even a single incident happens. This, of course, would be no more reasonable than vilifying a politician who supports the Second Amendment just because a person is shot to death in their state. But reason is usually absent in the realm of political punditry.

Another factor to consider is the harm that would be done by excluding such migrants, especially refugees. If they cannot be settled someplace, they will be condemned to live as involuntary nomads and suffer all that entails. There is also the ironic possibility that excluded refugees will become, as pundits like to say, radicalized. After all, people who are deprived of hope and treated as pariahs tend to become resentful, and some might become terrorists. There is also the fact that banning Muslim refugees and migrants provides propaganda for terrorist groups.

Given that the risk is very small and the harm to the refugees and migrants would be significant, the moral thing to do is to allow migrants and refugees into the United States. Yes, one of them could be a terrorist. But so could a tourist. Or some American coming from another state. Or already in the state. While some right-winger might accuse me of thus supporting open borders, nothing I say entails that. Refugees and migrants need to be properly vetted, especially after our attack on Iran. While I am not an expert on terrorism, I would expect Iran to step up its efforts against the United States.

In addition to utilitarian calculation, an argument can also be based on moral duties to others, even when acting on such a duty involves risk. In terms of religious-based ethics, a standard principle is to love thy neighbor as thyself, which requires helping refugees and migrants even at a slight risk. There is also the golden rule.

As a closing point, we Americans love to make claims about the moral superiority and exceptionalism of our country. Talk is cheap, so if we want to prove our alleged superiority and exceptionalism, we must act in an exceptional way. Excluding people and refusing to help them because we are afraid shows a lack of charity, compassion and courage. This is not what an exceptional nation would do.

Thanks to The Time Machine, Dr. Who and Back to the Future, it is easy to imagine what time travel might look like: people get into a machine, cool stuff happens (coolness is proportional to the special effects budget) and the machine vanishes. It then reappears in the past or the future (without all that tedious mucking about in the time between now and then).

Thanks to philosophers, science fiction writers and scientists, there are enough problems and paradoxes regarding time travel to keep thinkers pontificating until after the end of time. I will not endeavor to solve any of these problems or paradoxes here. Rather, I will add another time travel scenario to the stack.

Imagine a human research team has found a time gate on a desolate alien world. The scientists have figured out how to use the gate, at least well enough to send people back and forth through time. They also learned that the gate compensates for motion of the planet in space, thus preventing potentially fatal displacements.

As is always the case, there are nefarious beings who wish to seize the gate for their own diabolical purposes. Perhaps they want to change the timeline so that rather than one good Terminator movie, there are just very bad Terminator movies in the new timeline. Or perhaps they want to do even worse things.

Unfortunately for the good guys, the small expedition has only one trained soldier, Sergeant Vasquez, and she has limited combat gear. What they need is an army, but all they have is a time gate and one soldier.

The scientists consider using the gate to go far back in time in the hopes of recruiting aid from the original inhabitants of the world. Obvious objections are raised against this proposal, such as the fear the original inhabitants might be worse than the current foe or that the time travelers might be arrested and locked up.

Just as all seemed lost, the team historian recalled an ancient marketing slogan: “Army of One.” He realized that this marketing tool could be made into a useful reality. The time gate could be used to multiply the soldier into a true army of one. The team philosopher raised the objection that this sort of thing should not be possible, since it would require that a particular being, namely Vasquez, be multiply located: she would be in different places at the same time. That sort of madness, the philosopher pointed out, was something only metaphysical universals could pull off. One of the scientists pointed out that they had used the gate to send things back and forth in time, which resulted in just that sort of multiple location. After all, a can of soda sent back in time twenty days would be a certain distance from that same soda of twenty days ago. So, multiple location was obviously something that particulars could do—otherwise time travel would be impossible. Which it clearly was not. In this story.

The team philosopher, fuming a bit, raised the objection that this was all well and good with cans of soda, because they were not people. Having the same person multiply located would presumably do irreversible damage to most theories of personal identity. The team HR expert cleared her throat and brought up the practical matter of paychecks, benefits, insurance and other such concerns. Vasquez’s husband was caught smiling a mysterious smile, which he quickly wiped off his face when he noticed other team members noticing. The philosopher then played a final card: “If we had sent Vasquez back repeatedly in time, we’d have our army of one right now. I don’t see that army. So, it can’t work. Because it didn’t.”

Vasquez, a practical soldier, settled the matter. She told the head scientist to set the gate to take her well back in time, before the expedition arrived. She would then use the gate to “meet herself” repeatedly until she had a big enough army to wipe out the invaders.

As she headed towards the gate with her gear, she said, “I’ll go hide someplace so you won’t see me. Then I’ll ambush the nefarious invaders. We can sort things out afterwards.” The philosopher muttered but secretly thought it was a pretty good idea.

The team members were very worried when the nefarious invaders arrived but were quite glad to see the army of Vasquez rush from hiding to shoot the hell out of them. After cleaning up the mess, one of the Vasquezes asked, “So what do I do now? There is an army of me and a couple of me got killed in the fight. Do I try to sort it out by going back through the gate one me at a time or what?”

The HR expert looked very worried—it had been great when the army of one showed up, but the budget would not cover the entire army. But, the expert thought, Vasquez is still technically and legally one person. She could make it work…unless Vasquez got mad enough to shoot her.