This essay continues my guide on how to start a War on X. In the previous essay, I looked at the advantages and disadvantages of lying; this essay focuses on hyperbole and the Straw Man fallacy. Hyperbole is a rhetorical device involving exaggeration, typically to make something appear far worse or much better than it really is. While hyperbole is a form of lying, it is an exaggeration rather than a complete fabrication. For example, if a person does not catch any fish and says they “caught a whopper,” they are just lying. If they caught a small fish and called it a whopper, they are using hyperbole. While there can be debate about the boundary between hyperbole and other forms of lying, this distinction is not as important as the distinction between the truth and a lie. Hyperbole can be used for benign purposes, such as in comedy. But it can also be weaponized to help start a war. Hyperbole is often used in creating a Straw Man fallacy.

The Straw Man fallacy is committed when one ignores a claim or argument and substitutes a distorted, exaggerated, or misrepresented version of that claim or argument. This sort of “reasoning” has the following pattern:

 

Premise 1: Person A makes claim or argument X.

Premise 2: Person B presents Y (which is a distorted version of X).

Premise 3: Person B attacks Y.

Conclusion: Therefore, X is false/incorrect/flawed.

 

This sort of “reasoning” is fallacious because attacking a distorted version of a claim or argument does not constitute a criticism of the position itself. A Straw Man can be effective because people often do not know the real claim or argument being attacked. The fallacy is especially effective when the straw man matches the audience’s biases or stereotypes: the audience will feel that the distorted version is the real version and accept it.

While this fallacy is generally aimed at an audience, it can be self-inflicted: a person can unwittingly make a Straw Man out of a claim or argument. This can be done in error (perhaps due to ignorance) or due to the influence of prejudices and biases.

Straw man attacks often make use of an appeal to an unknown fact. This usually involves claiming to know the “real reason” a person or group believes the straw claim or argument. This “reason” is usually presented as a wicked motivation.

The defense against a Straw Man, self-inflicted or not, is to take care to get a person’s claim or argument right. This involves applying the principle of charity and the principle of plausibility.

Following the principle of charity requires interpreting claims in the best possible light and reconstructing arguments to make them as strong as possible. There are three reasons to follow the principle. The first is that doing so is ethical. The second is that doing so avoids committing the straw man fallacy. The third is that the criticism of the best and strongest versions of a claim or argument also addresses the lesser versions.

The principle of charity must be tempered by the principle of plausibility: claims must be interpreted, and arguments reconstructed in a way that matches what is known about the source and in accord with the context. Obviously, if you are using a Straw Man in your war, you will want to ignore these principles. Instead, you will want to be uncharitable and present the worst possible interpretation of your target. This should be tempered by a distorted version of the principle of plausibility: your distortion should be plausible (or at least appealing) to your target audience.

Hyperbole and the Straw Man fallacy have the following advantages: 

 

  • The truth is what it is, but hyperbole/straw man allows you to “modify” the truth.
  • Engaging an actual claim or argument can be difficult; exaggeration and distortion are easy.
  • Hyperbole/straw man allows you to have a grain of truth.
  • People tend to be more forgiving of hyperbole/straw man compared to utter fabrications.
  • Even if someone does not believe the hype, they can be influenced by it.

 

If you limit yourself to the truth or to what your target really said, you are stuck with what is. If you are trying to create a WOX, reality will probably be of little or no use to you. But you can tailor your hyperbole or straw man to have maximum impact on your audience. You can, for example, make it fit perfectly with what your audience fears. If your target makes reasonable claims and has a good argument, engaging these would be hard; exaggeration and distortion make fighting easier. To use the obvious analogy, it is the difference between fighting the person and punching a dummy dressed up like the person. Since hyperbole/straw man is not a complete fabrication, you have a grain of truth to work with. This gives you a degree of plausible deniability, and people also seem more tolerant of exaggeration and distortion than of complete fabrications. Interestingly, even if people do not “buy the hype,” they can still be influenced by it; this is something advertisers make use of. You just need to be careful not to over-hype or distort to the point that even your target audience will not buy it.

As with lies, there are some disadvantages to hyperbole/straw man:

 

  • As with lying, hyperbole/straw man can be debunked.
  • Most religions and moralities condemn lying, even hyperbole.
  • Requires a grain of truth.

 

The general points I made about lying in the previous essay also apply here to hyperbole/straw man. It is worth noting that hyperbole/straw man does give you a degree of plausible deniability. If you are debunked, you can claim you just got things wrong by accident or were being dramatic. Hyperbole/straw man also decreases the chances that your targets will doubt you. If they do make a cursory check of your distortion, they might think that it is confirmed by the grain of truth it is built on. As such, hyperbole/straw man can be better than outright fabrication, at least in some contexts. It does, after all, require at least a fragment of truth.

As an example, consider the change from “Christmas Break” to “Winter Break” or “Holiday Break” by many schools. This was done for the obvious reason that not all students, faculty, and staff are Christians who celebrate Christmas. There is also the fact that the break includes holidays other than Christmas, such as New Year’s, and that it includes days before and after Christmas. While this change was obviously not an attack on Christmas, it can easily be made into a straw man. A good way to do this is to claim that the “real reason” the name was changed is that “they” are attacking Christmas and want to cancel it. You can also throw in some claims about “the libs” and “the woke mob” attacking Christianity. If you want to do some racism or sexism, there is also the option of using a dog whistle (the subject of the final essay in this series). In the next essay, we will look at how to use Incomplete Evidence to start the war.

Bill O’Reilly deserves the credit for creating the modern American conception of the War on Christmas. While O’Reilly is no longer a major player, the right has persisted in its odd insistence that Christmas is under attack. That this war has been debunked repeatedly does not matter. While I will discuss some aspects of this war, my interest is in the general methods used to craft and propagate a fictional narrative of a War on X (whatever X might be). I will call this a WOX.

If you want to start a WOX, you begin by selecting the X for your new war. You will want to select something your target audience values, ideally something they already believe is threatened by people they fear and dislike. But you can generate fear on your own if you need to. Given the patent absurdity of claiming there is a war on Christmas, you could start a culture war around almost anything. Fortunately, there are limits to what people will accept; even Trump and Fox News failed to get their base to believe that there was a war on Thanksgiving. That proved a war too far. But perhaps some pundit or politician will make it stick next time.

While a WOX will, obviously, tend to use “war on” as its defining phrase, this is optional. All you need to do is say “X is under attack” to use the methods I will be discussing. For example, you might prefer to speak of the attacks of the woke mobs on manliness without claiming there is a WWOM (Woke War on Manliness). Given that there are so many Wars on This and Wars on That these days, people might be suffering from battle fatigue when it comes to that phrase. But give it a try and see how your audience reacts. Now let us look at the time-honored tradition of starting a war with lies.

A common way to use lies to argue that there is a War on X is to make up “examples” of acts of war. These “examples” are then used in an Argument by Example to “prove” that the war is occurring. Not surprisingly, an argument by example is an argument in which a claim is supported by providing examples. Although people generally present arguments by example in an informal manner, they have the following logical form:

 

              Premise 1: Example 1 is an example that supports claim P.

              Premise n: Example n is an example that supports claim P.

              Conclusion: Claim P is true.

 

The form used to argue for a WOX would look like this:

 

              Premise 1: Example 1 is an example of a War on X.

              Premise n: Example n is an example of a War on X.

              Conclusion: There is a War on X.

 

The strength of the argument depends on four factors. First, the more examples, the stronger the argument. Second, the more relevant the examples, the stronger the argument. Third, the examples must be specific and clearly identified.

Fourth, counterexamples must be considered. A counterexample is an example that counts against the claim. One way to look at a counter example is that it is an example that supports the denial of the conclusion being argued for. The more counterexamples and the more relevant they are, the weaker the argument. 

When lying to “prove” a WOX is occurring, you will want to mimic a strong Argument by Example. You will want many made-up examples crafted to appear relevant. While using specific made-up examples might seem risky, since vague lies are harder to disprove, specificity creates the illusion of credibility and, as we will see, you can probably get away with it. You might want to present counterexamples to create the illusion of reasonability, but be careful to refute them. Now, on to lying.

Lying has many advantages:

 

  • The truth is what it is, a lie is whatever you want it to be.
  • Determining the truth can require effort; lying is usually easy.

 

If you limit yourself to the truth, you are stuck with what actually is. If you are trying to create a WOX, the truth will probably be of little or no use. But you can craft whatever lie you need, and you can tailor it to have maximum impact on your audience. You can, for example, make it fit perfectly with what your audience fears. Sorting out what is true can be difficult, but simply making things up is easy. Lying is, however, not without disadvantages:

 

  • Lies can be debunked.
  • Most religions and moralities condemn lying.

 

From a practical standpoint, one worry is that your lies can be debunked and exposed. There are fact checkers that expose lies and an army of leftists on YouTube who spend countless hours creating videos to debunk and expose lies. As such, if your lies gain attention, they will be exposed in short order. Fortunately, this is usually not a problem. First, your target audience is unlikely to critically engage your claims. Even if they do have some doubt, there has been a systematic effort over the years to undermine expertise and to sow distrust of the mainstream media, academics, and others who are likely to expose your lies. As such, they are unlikely to trust these sources. Conveniently enough, if experts and the media put pressure on you, your audience is likely to double down. Hence, far from harming your WOX, these attacks can strengthen the war.

Second, some of your audience will be onboard with your lies. This might be because they want to get in on the WOX for their own reasons or because they believe they hear a dog whistle (more on this in a future essay). Or they might believe that your lies serve a higher purpose or greater truth, which takes us to the matter of religious and moral condemnation.

As noted above, most religions and moralities condemn lies. Christianity, for example, often casts the devil as the Prince of Lies and has a commandment against bearing false witness. Everyday ethics and moral theories alike usually present lying as wrong or at least as problematic. For example, Kant takes lying to always be wrong. As such, you might have concerns about lying. Fortunately, there are ways around this. One way is simply rejecting these aspects of religion and morality. Another is to avail yourself of established loopholes and workarounds. With some exceptions, religions and moral theories do allow for justified lying. One could appeal to the popular view that the end justifies the means, or one could assuage any vestiges of a conscience with a sophisticated utilitarian analysis of your lies and the good they will do (you). The truth might set you free, but lies are easy. Good luck with your WOX.

 

Following our good dead friend Aristotle, democracy is rule of the people (demos). Once a democratic form of government is chosen, there is the matter of sorting out which people will do the ruling and how they will be selected. In the United States (which is technically a republic), a practical issue of democracy is determining who gets to vote. Those familiar with United States history know that the categories of people who can vote have grown and shrunk over time.

When the United States was founded, voting was limited to white male landowners 21 or older. In 1868, the 14th Amendment granted citizenship to all persons born or naturalized in the United States. In 1870, the 15th Amendment eliminated some of the racial barriers to voting, but many states used various tactics to suppress voters. In 1920, the 19th Amendment granted women the right to vote, and in 1924 the right was extended to Native Americans. In 1971, the 26th Amendment lowered the voting age to 18. Republicans have engaged in a nationwide effort to make it more difficult to vote and “justify” this by appealing to their big lie of widespread voter fraud in the 2020 election. But there is an interesting philosophical issue here, which is the matter of deciding who should have the right to vote.

Intuitively, there should be limits on who can vote in specific elections. To illustrate, it would be odd to claim that citizens of Maine should be able to vote to determine the governor of Florida or that United States citizens should vote for the mayor of Moscow. There is also the intuitively appealing exclusion of some people based on age. For example, few would argue that one-year-old infants should have the right to vote. But mere intuitions are not enough; what is needed is a principle or set of principles to determine who should be able to vote.

One approach to voting is to limit it based on the principle of status. That is, voting should be restricted to a certain set of people to confer status on them and deny it to others. When non-whites and women were excluded from voting, one reason was a demonstration of the hierarchy in the United States: it was one more sign showing who mattered and who mattered much less. While this provides a practical principle for deciding who can vote, it seems difficult to provide a moral justification for this. I certainly will not defend it and will leave it to the sexists to defend exclusion based on sex and the racists to argue for exclusion based on race. And so on for other such hierarchy-based exclusions.

A similar approach to voting that tends to make the same divisions uses the principle of maintaining the status quo. Going back to 1776, one reason to allow only white men who own land to vote is that they will tend to vote in ways that favor white men who own land. If other people are permitted to vote, they might vote in their own interest, which need not be advantageous to maintaining the status quo. This is one reason why Republicans support laws that disproportionately disenfranchise minority voters: these voters tend to vote for Democrats and excluding them will help maintain the status quo or even, as they would see it, improve it. However, the principle of excluding people to maintain the status quo or advantage the included at the expense of the excluded is morally difficult to justify. I will not defend it, leaving this task to the Republicans and their supporters.

On the face of it, an appealing moral principle for determining who can vote is that those affected by the results of the voting should have the right to vote in that election. For example, as a citizen of Florida and a state employee I am affected by the election of the governor of Florida. As such, this would warrant my participation in the election. However, this principle can break down very quickly.

The principle of affect seems far too broad in that it, intuitively, would allow people to vote who probably should not have that right. To use a somewhat silly example, infants are affected by elections, but it would be absurd for them to have the right to vote. To use a somewhat less silly example, adults around the world are affected by the United States presidential election, but it would seem absurd for them all to vote in that election. Naturally, this could be disputed, and one could defend this principle of affect. Despite its obvious flaw, the principle does point us in the right direction; we just need something that would narrow the scope sufficiently. One option to keep the principle of affect is to modify it and add an appropriate qualifier or three.

Obviously enough, one could go with various practical solutions that we already use. For example, voters could be included or excluded by their geographic location within the relevant political boundaries. Of course, this leads to questions about how these boundaries should be drawn. This is relevant to such matters as gerrymandering and other concerns about manipulating the inclusion and exclusion of voters in elections.  While perhaps difficult to implement, the idea of boundaries set by how a person is affected rather than by geography does have considerable moral appeal—after all, it seems intuitively plausible that a person should have a degree of choice in matters that meaningfully affect them. Sorting out all this goes far beyond the scope of a simple essay, but this seems like a good starting point for additional consideration about voting.

In my previous essay I introduced the idea of using essential properties to address the question of whether James Bond must be a white man. I ran through this rather quickly and want to expand on it here.

As noted, an essential property (to steal from Aristotle) is a property that an entity must have. In contrast, an accidental property is one that it does have but could lack. As I tell my students, accidental properties are not just properties resulting from accidents, like the dent in a fender.

One way to look at essential properties is that if a being loses an essential property, it ceases to be. In effect, the change of property destroys it, although a new entity can arise. To use a simple example, it is essential to a triangle that it be three-sided. If another side is added, the triangle is no more. But the new entity could be a square. Of course, one could deny that the triangle is destroyed and instead take it as changing into a square. It all depends on how the identity of a being is determined.

Continuing the triangle example, the size and color of a triangle are accidental properties.  A red triangle that is painted blue remains a triangle, although it is now blue. But one could look at the object in terms of being a red object. In that case, changing the color would mean that it was no longer a red object, but a blue object. Turning back to James Bond and his color, he has always been a white man.

Making Bond a black man would change many of his established properties, and one can obviously say that he would no longer be white Bond. But this could be seen as analogous to changing the color of a triangle: just as a red triangle painted blue is still a triangle, changing Bond from a white man to a black man through a change of actors does not entail that he is no longer Bond. Likewise, one might claim, for changing Bond to a woman via a change of actor.

As noted in the previous essay, the actors who have played Bond have been different in many ways, yet they are all accepted as Bond. As such, there are clearly many properties that Bond has accidentally; they can change with the actors while the character is still Bond. One advantage of a fictional character is, of course, that the author can simply decide on the essential properties when they create the metaphysics of their fictional world. For example, in fantasy settings an author might decide that a being is its soul and thus can undergo any number of bodily alterations (such as being reincarnated or polymorphed) and still be the same being. If Bond were in such a world, all a being would need in order to be Bond is the Bond soul. This soul could inhabit a black male body or even a dragon and still be Bond. Dragon Bond could make a great anime.

But, of course, the creator of Bond did not specify the metaphysics of his world, so we would need to speculate using various metaphysical theories about our world.  The question is: would a person changing their race or gender result in the person ceasing to be that person, just as changing the sides of a triangle would make it cease to be a triangle? Since Bond is a fictional character, there is the option to abandon metaphysics and make use of other domains to settle the matter of Bond identity. One easy solution is to go with the legal option.

Bond is an intellectual property, and this means that you and I cannot create and sell Bond books or films. As such, there is a legal definition of what counts as James Bond, and this can be tested by seeing what will get you sued by the owner of James Bond. Closely related to this is the Bond brand, which can change considerably and still be the Bond brand. Of course, these legal and branding matters are not very interesting from a philosophical perspective, and they are best suited for the courts and marketing departments. So I will now turn to aesthetics.

One easy solution is that Bond is whoever the creator says Bond is; but since the creator is dead, we cannot determine what he would think about re-writing Bond as someone other than a white man. One could, of course, go back to the legal argument and assert that whoever owns Bond has the right to decide who Bond is.

Another approach is to use the social conception: a character’s identity is based on the acceptance of the fans. As such, if the fans accept Bond as being someone other than a white man, then that is Bond. After all, Bond is a fictional character who exists in the minds of his creator and his audience. Since his creator is dead, Bond now exists in the minds of the audience; so perhaps it is a case of majority acceptance, a sort of aesthetic democracy. Bond is whoever most fans say is Bond. Or one could take the approach that Bond is whoever the individual audience member accepts as Bond, a case of Bond subjectivity. Since Bond is fictional, this is appealing. As such, it would be up to you whether your Bond can be anyone other than a white man. A person’s decision would say quite a bit about them. While some might be tempted to assume that anyone who believes that Bond must be a white man is thus a racist or sexist, that would be a mistake. There can be non-sexist and non-racist reasons to believe this. There are, of course, also sexist and racist reasons to believe it. As a metaphysician and a gamer, I am on board with Bond variants that are still Bond. But I can understand why those who have different metaphysics (or none at all) would have differing views.

Cancer killed my dad this past May. When I heard Charlie Kirk had been killed, I understood what those who loved him must be feeling: a jagged emptiness that fills with pain and sadness. My dad and I spoke every week, but now each Sunday I wait for a call that will never come. Charlie’s family is now waiting for a call that will never come and a father who will never return. But I have fifty-nine years of memories of my father. Charlie’s kids were robbed of the memories they would have made; they will have but a vague memory of him when they grow up. This is one of the many reasons I am saddened by any death; each of us is loved, and death robs those who love us. To rejoice in death is to curse love, whether it is the death of Charlie Kirk or the slaying of people in a boat off the coast of Venezuela. I think people should remain silent rather than rejoice in death, but I will say more about this below.

After Kirk’s death, some rejoiced on the internet. Others condemned the murder but asserted Kirk was a terrible person. In response, the Trump administration and some state governments launched retaliation and intimidation campaigns. Vice President Vance (and fellow Buckeye philosopher) urged people to report Kirk’s critics to their employers to get them fired. Like most on the right, Vance once professed to be a champion of free speech, saying, “We may disagree with your views, but we will fight to defend your right to offer it in the public square.” Vance and others on the right also spent years condemning what they called “cancel culture.” Their recent actions help confirm that they are not champions of free speech and that they are enthusiastic about creating “cancel culture” when they are doing the cancelling.

The latest efforts to suppress and punish expression are being made in Kirk’s name. This is ironic because Kirk has been lauded as a champion of free expression, with his supporters pointing out how he went to campuses to debate college students who disagreed with him. That this earned him the hatred of Nick Fuentes and led to the Groyper War was certainly a point in his favor. His critics point out that he created a professor watchlist that resulted in death threats, harassment, and efforts to get professors fired. Some might also contend that his motivation for debating college students was to get clips to use as propaganda and to recruit for his cause. To be fair and balanced, while many on the right clearly reject the right of free expression, there seem to be those who are true believers.

Attorney General Pam Bondi (who is from my adopted state of Florida) recently said the Justice Department “will absolutely target you, go after you, if you are targeting anyone with hate speech.” This led to pushback from a few people on the right, including Tucker Carlson. These critics embrace the idea of Kirk as a free-speech champion, asserting that he would have objected vehemently to Bondi. Those more cynical than I might claim that these right-wing critics of Bondi are worried that she might go after them. After all, the right wing is divided, and there were credible reasons to believe that Kirk’s killer might have been to the right of Kirk, such as being a Groyper. That said, it is unlikely that Bondi really meant what she said; otherwise, as some would point out, she would have to go after her boss. I agree with Carlson’s criticism of Bondi, but what is my theory of free expression, given my claim that people should not rejoice in deaths?

While I support free expression, I hold the uncontroversial belief that there are some normative limits that should be followed. The weakest limit is that of etiquette, which is composed of the norms of politeness, such as which fork to use or how to address one’s professor. While these norms are a matter of convention (we make them up), they form some of the oil that greases the social wheels and play an important role in keeping them running smoothly. They are also a way we show respect for one another. Following the norms of etiquette serves these practical purposes, and we should be cautious about breaking them and consider the harm that might be done by doing so.

While the norms of etiquette are mere conventions that vary between people, most do accept that rejoicing in the death of another is impolite. As such, one should think before breaking that norm (if one accepts it). That said, etiquette (civility) can be weaponized to silence people by equating criticism with being uncivil. And people can make themselves unworthy of respect by being terrible people. But merely being rude is a minor offense and the consequences should be proportional to that offense. One can even argue that there should be no consequences at all.

I strive to be unfailingly polite even in the face of rudeness and provocation. One reason is a matter of principle: I think that even terrible people deserve some basic respect simply for being people. And everyone has bad days, so I give people the benefit of the doubt if they are rude to me. Another reason is that I learned that civility can be justly weaponized: remaining unfailingly polite unsettles some people when they are being rude or hateful, occasionally pushing them to reconsider their words or actions.

A much stronger realm of norms is that of ethics, although many thinkers argue for moral nihilism, subjectivism, or relativism. As a practical matter, I usually follow J.S. Mill’s principle of harm when it comes to the ethics of expression and ask whether the expression would cause meaningful harm. The usual example of immoral expression is, of course, yelling “fire” in a theater when there is no fire and causing a panic. But there are obviously degrees of harm, and specific cases can be debated. For example, saying mean things about a person could cause them discomfort, but this would create far less harm than doxxing them on social media and calling for them to be fired.

Because I accept objective morality, my expression is governed by principles: I consider what harm I might cause to others and restrict my expression accordingly. This is, obviously, how I regulate all my actions. Or try to. Being a philosopher, I recognize that there are many other views of ethics that differ from my own and I am wary of imposing my ethics on others. That said, the principle of harm seems to be a good general guide about what a person should say. In the case of death, rejoicing in the death of another seems to be morally wrong because of the harm it can do to others and, perhaps, one’s own character. As such, I do not rejoice in the death of others even when I think they were wicked or even deserved their fate. Because of this, I think people (morally) should not express joy in the death of others. But they should be allowed to do so, because the harm caused does not rise to the level that requires significant consequences. The times being what they are, I must make it clear that I condemn the murder of Kirk and would prefer a better world without as much murder. 

It could be argued that such an expression shows bad character and would, morally, justify firing a person. But this would require adequate proof of bad character relevant to their occupation, which would involve extensive evidence of egregious and consistent wickedness relevant to their profession. Social media is a trap for people: it creates in us the feeling that we must respond instantly in the hopes of getting attention and enables us to do so in a public manner. As such, we often get to see the worst thing a person might ever say but would not say if they reflected on it. Because of this, firing someone over a single post would almost never be justified. We have all thought awful things that are not who we really are, and we should forgive each other for those moments, even if we say them out loud. Naturally, if there is adequate evidence of persistent and egregious wickedness relevant to a person’s job, then firing them could be morally justified. For example, if a leader persistently expressed a lack of concern about war crimes, then they should be fired. This principle should (morally) be applied consistently. Morality, however, is distinct from law.

The third, and usually the most consequential, normative area is the law. While the law is, like etiquette, just made up, it has serious consequences ranging from fines to death. The United States government is, of course, supposed to be limited by the 1st Amendment. While the text of the amendment is clear, there is still debate about what is and is not protected, and its application obviously depends on the whims of those in power. But from a moral standpoint, this right should be respected, and the state should only use its coercive power in accord with the amendment. As such, people should not be subject to the coercive power of the state merely for expressing views the rulers dislike, even if they express approval of Kirk’s murder.

It is also worth pointing out that the amendment is a right against the state alone. It does not apply to private entities like employers. As such, an employer could fire someone at any time in response to (or in retaliation for) something they said on social media. Even employers who have professed a love of free speech can and will fire people. People often mistakenly believe that they have a general legal right to free expression, but some learn that employers have more power over us than the state in terms of what we are allowed to express. So, a business can legally fire someone for posting that they were glad Kirk was murdered. They could also fire someone for expressing their sadness at Kirk’s death, although that would obviously strike most people as an odd thing to do. As I argued back when “the left” was cancelling people and getting them fired, employers should (morally) only fire people if their offense merits doing so. This is based on the principle that the punishment for an offense should be proportional to the offense. For most people, being fired would be devastating. Which is, one might suspect, why many free speech warriors on the right are enthusiastically embracing cancel culture to intimidate and harm those whose views they disagree with. They are warriors against free speech, not for it.

Since his creation, James Bond has been a white man. Much to the delight of some and the horror of others, there were serious plans to have a black actor play James Bond. There has even been some talk of having a female James Bond. While racist and sexist reasons abound to oppose such changes, are there good reasons for James Bond to always be a white man? Before getting into this discussion, I will first look at the matter of the 007 designation.

While James Bond has been known as 007, this is his agent designation and there are other 00 agents. It is like the number worn by an athlete on a team. As such, while James Bond has been 007, another person could replace him and get that number, just as a player who wore 23 on a baseball team could retire and someone else could take that number (although teams do retire numbers). Within the James Bond universe, it would make sense for someone who is not a white man to get the 007 designation. This could occur for any number of in-universe reasons, most obviously that James Bond is not immortal and would eventually be too old or dead to remain 007. From an aesthetic standpoint, it would be interesting to see a Bond timeline in which time mattered, a Bond world in which he grew old and a new agent took his place. This would have the benefit of keeping Bond relevant today while also maintaining (in-universe) the old Bond. There is, of course, the obvious financial risk: having a new 007 who is not James Bond can be seen as analogous to replacing a star athlete with a new person who gets their number. There is the risk of losing the drawing power. But my concern is with the more interesting matter of whether James Bond must be a white man, so I will leave the money worries to the branding gurus.

One obvious fact about the Bond of the movies is that different actors have played the character. While there are strong opinions about the best Bond, there was little debate about whether a new white man should take the role when the previous Bond aged out of it or left for other reasons. The actors who played Bond were (in general) accepted as at least adequate for the role, and there was no debate about whether the character was James Bond despite the change in actors. That is, there is no general issue with a new actor playing the role. There was also, obviously enough, no effort to explain within the Bond universe the change in Bond’s appearance. I mention this because of another famous character from British fiction, the Doctor of Doctor Who. When Doctor Who began, the actor playing the Doctor was already old and the show ran into the problem of age. They hit on a brilliant solution: the Doctor regenerates and radically changes appearance, though remaining the same person. This gives the show an interesting feature: continuity of character through changes of actors, with an in-universe explanation.

While Bond movies do feature gadgets and plots that border on or even cross into science fiction (consider Moonraker), it is unlikely that the Bond cinematic universe would allow for such science fiction devices as alternative realities, such as in Marvel’s What If…? As such, the various Bonds are not explained in terms of being alternative or variant Bonds; they are all the James Bond. Now, if Bond can remain Bond despite the changes of actors, then it would seem that he would remain Bond even if he were played by a non-white actor. After all, if switching from Sean Connery did not mean that Bond was no longer Bond, then changing his race should not do that either. The actors who played Bond are different people, with significant differences in appearance, mannerisms, and voice. Having a black actor, for example, would just be another change of appearance. It would also seem to follow that having a female actor play Bond would make as much sense; it would just be another change in appearance. But one could attempt to argue that it is essential to Bond that he be a white man. This, of course, gets us into the notion of essential properties.

In philosophy, an essential property (to steal from Aristotle) is a property that an entity must have or cease to be that thing. In contrast, an accidental property is one that an entity does have but could lack and still remain that thing. To use a simple example, it is essential to a triangle that it be three-sided: it must have three sides to be a triangle. But the size and color of a triangle are accidental properties; they can change, and it will still remain a triangle. So, the relevant issue here is whether being a white man is essential to being James Bond or merely accidental. Given all the changes in actors over the years, there are clearly many properties that Bond has accidentally, as they can change with the actors while the character is still Bond. One advantage of a fictional character is, of course, that the author can simply decide on the essential properties when they create the metaphysics for their fictional world. But the creator of Bond did not do that, so we need to speculate using various metaphysical theories about our world. That is, would a person changing their race or gender result in the person ceasing to be, just as changing the number of sides of a triangle would make it cease to be a triangle? On the face of it, while such changes would clearly alter the person, they would seem to retain their personal identity. If this is true, then James Bond need not be a white man. But more will be said in the next essay.
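For readers who think in code, the triangle example can be sketched as a toy program (the class and names here are my own invention, purely for illustration): an essential property is fixed for every instance of a kind, while an accidental property can change without the thing ceasing to be what it is.

```python
# Toy illustration of essential vs. accidental properties.
# The class and attribute names are invented for this sketch.

class Triangle:
    SIDES = 3  # essential: every triangle must have exactly three sides

    def __init__(self, color: str):
        self.color = color  # accidental: a triangle can be any color

    def repaint(self, color: str) -> "Triangle":
        # Changing an accidental property leaves us with the same triangle.
        self.color = color
        return self

t = Triangle("red").repaint("blue")
print(t.SIDES, t.color)  # the triangle survives the change of color
```

On this toy picture, asking whether Bond must be a white man is asking whether race belongs with SIDES or with color.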

In philosophy, the classic problem of universals is determining in virtue of what (if anything) a particular individual is a member of a category. Some philosophers, such as Plato, Armstrong, and myself, believe that at least some categories are metaphysically real and are thus realists about properties. For example, if mass is a real quality, then entities with mass are grouped into that category in virtue of possessing that metaphysical property. A realist does not need to accept that all categories have a metaphysical foundation. For example, I take race and gender to be human-made ideas rather than being grounded in metaphysical entities. To use a less contentious example, I also take the property of being a citizen of the United States to be a social construct rather than a real metaphysical entity.

When talking about particulars being in categories, there are two general ways to view this. One is a matter of grouping: the property is what puts an entity into that category. There is also the matter of the entity being what it is in terms of the quality. These two can amount to the same thing but can be distinguished conceptually. For example, there is the matter of what puts a green thing into the category of green things and the matter of what it is to be green. Again, one might determine that these amount to the same thing. To illustrate, Plato would (probably) say that the Form of Beauty both groups beautiful things into that category and makes a beautiful individual beautiful. I have not, of course, gotten into the epistemic aspect of this, such as how we know that something is in a category. I have also not addressed alternatives to metaphysical realism about properties. But for the sake of what follows, I will assume properties are real so that I can get on to discussing substances and substrata.

Philosophers such as Aristotle and Descartes accepted the existence of substance and defined it as something that exists in a manner that does not depend on other entities. This can quickly get messy, but to keep it simple, think of an everyday object like an apple. It can exist as a distinct entity, apart from other things. Yes, I know that apples come from trees and depend on the existence of dimensions such as time; but I am keeping it simple here. An apple’s ability to exist on its own contrasts with that of its properties. To illustrate, I can buy an apple at Publix and take it home. I can take a bite out of it, taking that piece from the apple. But no matter how I bite it, I cannot bite away the property of mass or shape and have just mass or shape in my stomach. As such, properties are taken by many philosophers as not being substances. You can have an apple in your stomach, but you cannot have just apple shape or apple mass in your stomach. Because of this sort of thinking, philosophers such as John Locke have reluctantly accepted substance, calling it “something I know not what.” Others, of course, have rejected substance. If you accept properties as existing as parts of substances, there is then the matter of whether the substance is just a bundle of properties or whether there is another metaphysical entity that “binds” the properties together into a substance.

While bundle theorists have the advantage of metaphysical economy, they face the problem of explaining what connects the properties together into a single entity. Substratum theorists, like me, claim that there is a second type of metaphysical entity, the substratum, which has the function of binding properties together to form objects. While I have argued in favor of substrata at length, teaching the subject again in my Metaphysics class provided me with a metaphor that might help explain the matter.

Think of properties as being like paint and objects as being like paintings. For those who believe in properties but reject substrata, a painting would just be paint. Metaphorically, one would paint a painting by painting on nothing; there would just be paint touching paint, forming a painting. Naturally, one could talk about letting some paint dry and then painting on that paint as if it were a canvas, but there would remain the question of what the first paint was painted on.

For those who accept substrata, the canvas (or other surface) would be like the substratum and the properties would be like paint: one paints on the canvas and the canvas plus the paint forms the painting. Naturally, metaphors and analogies tend to fall apart quickly when pressed. For example, one could say that one just needs to paint on the canvas until the paint dries, then peel the paint off. While this would be a flimsy painting, a painting made entirely of paint could thus exist. At least after it had been painted on a canvas. One could also reject the paint analogy or modify it in some manner so a painting could just be paint without a canvas. But the canvas and paint metaphor has a certain appeal.

When the left proposes to provide new and expanded benefits to non-rich Americans, the right replies with two stock arguments. The first is the deficit argument, which I addressed in my previous essay. The second is the Dependency argument.

The gist of the Dependency argument is that if people get assistance or benefits of a certain sort from the state, such as unemployment benefits or childcare, then they risk becoming dependent upon the state. Since this dependence is claimed to have negative consequences, such assistance and benefits should be limited or not provided. This can be seen as a utilitarian argument.
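Put into the same premise-and-conclusion pattern used for the Straw Man earlier, my reconstruction of the argument looks like this:

Premise 1: If the state provides assistance or benefits of sort B to people, then the recipients risk becoming dependent upon the state.

Premise 2: This dependence has negative consequences.

Conclusion: Therefore, the state should limit or not provide assistance or benefits of sort B.

Laid out this way, the utilitarian structure is visible: the conclusion rests entirely on the alleged bad consequences claimed in the premises.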

There are numerous variations of this argument, which tend to focus on specific alleged harms. For example, it might be contended that if unemployment benefits are too generous, then people will not want to work. As a specific illustration, in April 2020 Senator Lindsey Graham argued that public financial relief for the coronavirus would incentivize workers to leave their jobs. Other alleged harms include damage to the moral character of the recipients of such benefits and, on a larger scale, the creation of a culture of dependency and a culture of entitlement. While this argument is passionately advanced by many on the right, there are two main issues that need to be addressed. The first is whether the argument is being made in good faith. The second is whether the argument is a good one from a logical standpoint.

Bad faith argumentation can occur in a variety of ways. One way is for a person to knowingly use fallacies or rhetoric as substitutes for good reasoning. Interestingly, a person can use fallacies and rhetoric in good faith when they do so unintentionally; in such cases, they are using bad logic in good faith. Another way is for a person to use premises they believe are untrue. Naturally, a person can make untrue claims in good faith when they do not realize their claims are untrue. A third way a person can argue in bad faith is to advance arguments that they do not believe in. This usually involves making arguments based on principles or reasons that they do not actually accept, while pretending that they do.

Because of the problem of other minds, sorting out when people are engaged in bad faith argumentation can be challenging. After all, even if you can show that a person has used a fallacy or made a false claim, this does not itself prove they were arguing in bad faith: bad faith involves intent. Fortunately, there are ways to make a decent case that someone is engaged in bad faith, and one of these is to provide evidence of inconsistency. This is, unfortunately, not always decisive: people can be sincerely inconsistent because they do not understand the implications of their claims or for other reasons that do not involve an intent to deceive. But in the case of the right, their Dependency argument seems to generally be a bad faith argument.

If we take the Dependency arguments seriously, then they would also tell against inheritance, something beloved by the right since it helps entrench wealth and enhance inequality. In fact, philosophers have long made this argument.

Mary Wollstonecraft contends that hereditary wealth is morally wrong because it produces idleness and impedes people from developing their virtues. Inheritance is unearned, so if receiving unearned resources creates dependency, then inheritance would create dependency. This exactly mirrors the conservative Dependency arguments, and conservatives should, if they are consistent, agree with Wollstonecraft. But they clearly do not. It could be countered that people can earn an inheritance, that it might be granted because of their hard work or some other relevant factor. While such cases would be worth considering, earning it through hard work is not the usual way one qualifies for an inheritance. A genuinely earned inheritance would, however, not be subject to this argument.

As one would expect, conservatives on the right generally favor protecting inheritance and oppose estate taxes. In 2017 the estate tax exemption stood at $5.49 million; the tax cuts passed during the first Trump administration roughly doubled it to $11.18 million in 2018. The Big Beautiful Bill also aimed at reducing taxes on the estates of the extremely wealthy. This is inconsistent with their Dependency argument. If they truly fear that people getting small benefits from the state will create dependency and destroy incentives, then they should be terrified by such massive inheritances: these would, as Wollstonecraft argued, seem to be vastly more harmful. If one does not like the inheritance argument, then there is also the welfare for the wealthy argument.

While there are some exceptions, the right typically favors subsidies and benefits for corporations, businesses, and the wealthy. As such, it is hardly surprising that the bulk of social welfare spending benefits them rather than the poor. With almost no exceptions, one does not hear the people railing about the alleged dependency of the poor arguing against these benefits and assistance. They are only concerned when the beneficiaries are the poor rather than the rich.

One can, of course, argue that there are relevant differences between benefits and assistance for the rich and those for the non-rich. Often, these arguments also tend to be made in bad faith. A common tactic is to use the Perfect Analogy fallacy. This fallacy occurs when one takes the standards for assessing an argument by analogy to the extreme and imposes unreasonable requirements for similarity. It is the opposite of the Poor or False Analogy fallacy, which occurs when the person making the argument applies the standards too laxly.

As a tactic, when using the Perfect Analogy Fallacy, one simply refuses to accept that the two things are similar, no matter what evidence or reasons are presented. As always, it can be challenging to prove that someone is doing this in bad faith, but one can sometimes push the person into trying to defend something that they clearly do not believe, and their bad faith becomes evident. That said, one must always be careful not to assume that a person who rejects an analogy must be arguing in bad faith or that they must be wrong—to refuse to consider their arguments would be an act of bad faith.

In closing, those who oppose the state helping the non-rich and use the Dependency argument generally seem to be arguing in bad faith. Naturally, if they have also argued against inheritance and benefits for the wealthy using the Dependency argument, then they can be arguing in good faith. As far as whether benefits create dependency or destroy incentives to work, that is another matter. But the answer seems to be “no”, as long as one looks at the statistical data rather than simply speaking from ideology or “common sense.”

While this will not surprise anyone familiar with the state of police accountability in the United States, a study reports that more than half of killings by police have been mislabeled over the past 40 years. As would also be expected by anyone familiar with American policing, black men are killed, and their deaths mislabeled, at disproportionately high rates. One objection to the claims made in the study is to point out that the federal government lacks a comprehensive system for tracking police-caused deaths and use of force. As such, no one can claim to know the actual numbers.

On the one hand, that is a reasonable criticism. While journalists and academics have been tracking police deaths and use of force, this is a piecemeal effort that depends on the ability of individual journalists and researchers to gather and confirm information. While the National Use-of-Force Data Collection launched in 2019, most police departments decline to provide data. As such, we do not know the exact number of police-caused deaths nor the exact percentage that have been mislabeled. Nor can we claim to know the exact number of police uses of force and what percentage of these were not justified.

That said, the authors of the study are using the best available data from the National Vital Statistics System, Fatal Encounters, Mapping Police Violence, and the Guardian’s The Counted. This data, while incomplete, does provide a foundation for a reasonable inductive generalization. Naturally, we need to keep in mind the usual concerns about sample size and the possibility of a biased sample. But one cannot simply reject the claims by asserting the sample must be too small or biased; one would need to support these claims.

On the other hand, this criticism (perhaps ironically) points to a huge problem: we do not have accurate and complete data on police killings and use of force. While one could claim that the missing data could show that there is no problem, one would think that if this were true, then the police would support making that data available. After all, it would help address criticism of the police and serve to improve their reputation.

Requiring the police to provide this data would seem to be something that the left and right can agree on. The left, obviously, wants that data. The right is constantly speaking of the dangers of government overreach, warning against tyrannical abuses of power, and demanding accountability. Since they were outraged by the cruel tyranny of masks and vaccines, their rage should be incandescent about the lack of police accountability. After all, a mask is at worst a slight discomfort, while the police seem to be using the power of the state to get away with murder. I am, of course, not serious about this. I know that the right, in general, is onboard with the police using violence—if they are using it against people the right does not like. They do, of course, have a very different view when the police oppose them. But this does provide a way of using the bad faith rhetoric of the right against them. While this is not effective, it is at least a bit amusing. What is not funny is how police-caused deaths are so often mislabeled.

While mislabeling can occur from error, an ongoing problem is that coroners and medical examiners can be too closely linked to law enforcement and in some cases a coroner can be a law enforcement official, such as a sheriff. There is a reasonable concern that a forensic examination conducted by someone associated with law enforcement or who is otherwise biased will not be accurate. The George Floyd case provides an example of how this can occur. As I argued in an earlier essay, this link needs to be broken to ensure that deaths caused by police are properly labeled. Other improvements would also need to be made, since there is a serious problem and it involves, of course, racism.

Black Americans are about 3.5 times more likely than whites to be killed by the police. Latinos and Native Americans are also more likely than whites to be killed. Looked at as a public health hazard, a person is more likely to be killed by the police than be killed while riding a bike—and bicycling is dangerous in the United States. Given the disproportionate killing of black Americans, it is not surprising that the study showed that 60% of their deaths were misclassified. States vary considerably in the accuracy of their reporting. Based on the study, Maryland does the best with only 16% of killings misclassified. Oklahoma does the worst, with 83%.

The available data shows that the police are engaging in disproportionate killing and that most of their killings are being misclassified. While some of the misclassification might be due to errors, this would only explain some cases. If it is claimed that most of the misclassifications are due to errors, this would be to claim that the system is plagued with gross incompetence and thus would still need a radical overhaul to correct this problem. One could, of course, also claim that researchers and journalists are lying about the misclassification. Supporting this claim would require competing data and evidence. This, as noted above, would be quite a challenge: the police generally do not provide this data. As such, a person claiming that the study is in error would need their own credible source of information. Obviously, simply launching ad hominem attacks on journalists and researchers would not refute their claims.  

In closing, those who claim that the police are not engaged in disproportionate and unnecessary killing and that deaths are not being misclassified should support mandatory reporting by the police and an overhaul of who does the classification and how it is done. After all, if they are right, then accurate data would prove them right. Those who simply deny there is a problem while opposing efforts to gather accurate information might be engaged in wishful thinking, or they might be aware of the problem but not regard it as a problem at all; that is, they are fine with what is happening and want it to keep happening.

The American right is partially defined by its embrace of debunked conspiracy theories, such as the big lie about the 2020 election and those involving all things COVID. While some conspiracy theories are intentionally manufactured by those who know they are untrue (such as the 2020 election conspiracy theories), other theories might start with people simply misreading things. For example, consider the claim that there were microchips in the COVID vaccines because of Bill Gates.

The Verge does a step-by-step analysis of how this conspiracy theory evolved, which is an excellent example of how conspiracy claims arise, mutate, and propagate. The simple version is this: in a chat on Reddit, Gates predicted that people would have a digital “passport” of their health records. Some Americans who attended K-12 public schools have already used a paper version of this.  I have my ancient elementary school health records, which I recently consulted to confirm I had received my measles booster as a kid. As this is being written, measles has returned to my adopted state of Florida. The idea of using tattoos to mark people when they are vaccinated has also been suggested as a solution to the problem of medical records in places where record keeping is spotty or non-existent.

Bill Gates’s prediction was picked up by a Swedish website focused on biohacking, which proposed using an implanted chip to store this information. This is not a new idea for biohackers or science fiction, but it was not Gates’s idea. However, the site used the untrue headline, “Bill Gates will use microchip implants to fight coronavirus.” As should surprise no one, the family tree of the conspiracy leads next to my adopted state of Florida.

Pastor Adam Fannin of Jacksonville read the post and uploaded a video to YouTube. The title is “Bill Gates – Microchip Vaccine Implants to fight Coronavirus,” which is an additional untruth on top of the untrue headline from the Swedish site. This idea spread quickly until it reached Roger Stone. The New York Post ran the headline “Roger Stone: Bill Gates may have created coronavirus to microchip people.”

Those familiar with the game of telephone might see this as a dangerous version of it: each person changes the claim until it bears almost no resemblance to the original. Just as with games of telephone, it is worth considering that people intentionally made changes. In a game of telephone, the intent is to make the final version funny. In the case of conspiracy theories, the goal is to distort the original into the desired straw man. In the case of Bill Gates, it started out with the innocuous idea that people would have a digital copy of their health records and ended up with the claim that Bill Gates might have created the virus to put chips in people. In addition to showing how conspiracy claims can devolve from innocuous claims, this also provides an excellent example of how conspiracy theories sometimes get it right that we should be angry at someone or something, while getting the reasons for that anger wrong.

While there is no good evidence for the conspiracy theories about Gates and microchips, it is true that we should be angry at Bill Gates’s COVID wrongdoings. Specifically, Gates used his foundation to impede access to COVID vaccines. This was not a crazy supervillain plan; it was “monopoly medicine.” As such, you should certainly loathe Bill Gates for his immoral actions, but not because of the false conspiracy theories. As an aside, it is absurd that, when there are so many real problems and real misdeeds to confront, conspiracy theorists spend so much energy generating and propagating imaginary problems and misdeeds. Obviously, these often serve some people very well by distracting attention from real problems. But back to the origin of conspiracy theories.

While, as noted above, people do intentionally make false claims to give birth to conspiracy theories, it also makes sense that unintentional misreading can be a factor. Having been a professor for decades, I know that people often unintentionally misread or misinterpret content.

For the most part, when professors are teaching basic and noncontroversial content, they endeavor to provide the students with a clear and correct reading or interpretation. Naturally, there can be competing interpretations and murky content in academics, but I am focusing on the clear, simple stuff where there is general agreement and little or no opposition. And, of course, no one with anything to gain from advancing another interpretation. Even in such cases, students can badly misinterpret things. To illustrate, consider this passage from the Apology:

 

Socrates: And now, Meletus, I will ask you another question—by Zeus I will:  Which is better, to live among bad citizens, or among good ones?  Answer, friend, I say; the question is one which may be easily answered.  Do not the good do their neighbors good, and the bad do them evil?

 

Meletus: Certainly.

 

Socrates: And is there anyone who would rather be injured than benefited by those who live with him?  Answer, my good friend, the law requires you to answer— does any one like to be injured?

 

Meletus: Certainly not.

 

Socrates: And when you accuse me of corrupting and deteriorating the youth, do you allege that I corrupt them intentionally or unintentionally?

 

Meletus: Intentionally, I say.

 

Socrates: But you have just admitted that the good do their neighbors good, and the evil do them evil.  Now, is that a truth which your superior wisdom has recognized thus early in life, and am I, at my age, in such darkness and ignorance as not to know that if a man with whom I have to live is corrupted by me, I am very likely to be harmed by him; and yet I corrupt him, and intentionally, too—so you say, although neither I nor any other human being is ever likely to be convinced by you.  But either I do not corrupt them, or I corrupt them unintentionally; and on either view of the case you lie.  If my offence is unintentional, the law has no cognizance of unintentional offences: you ought to have taken me privately, and warned and admonished me; for if I had been better advised, I should have left off doing what I only did unintentionally—no doubt I should; but you would have nothing to say to me and refused to teach me.  And now you bring me up in this court, which is a place not of instruction, but of punishment.

 

Socrates’ argument is quite clear and, of course, I go through it carefully because this argument is part of the paper for my Introduction to Philosophy class. Despite this, every class has a few students who read Socrates’ argument as him asserting that he did not corrupt the youth intentionally because they did not harm him. But Socrates does not make that claim; central to his argument is the claim that if he corrupted them, then they would probably harm him. Since he does not want to be harmed, then he either did not corrupt them or did so unintentionally. This is, of course, an easy misinterpretation to make by reading into the argument something that is not there but seems like it perhaps should or at least could be. Students are even more inclined to read Socrates as claiming that the youth will certainly harm him if he corrupts them and then build an argument around this erroneous reading. Socrates claims that the youth would be very likely to harm him if he corrupted them and so he was aware that he might not be harmed.
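Laid out in the premise-and-conclusion style used earlier (my reconstruction, keeping Socrates’ hedged “likely”), the argument runs roughly as follows:

Premise 1: If Socrates corrupts those he lives among, then they are likely to harm him.

Premise 2: No one wants to be harmed, so no one would knowingly and intentionally make those around him likely to harm him.

Conclusion: Therefore, either Socrates does not corrupt the youth, or he corrupts them unintentionally.

Note that nothing in the premises says the youth did not harm him; the work is done by the claim that intentional corruption would be knowingly courting harm.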

My point is that even when the text is clear, even when someone is actively providing the facts, even when there is no controversy, and even when there is nothing to gain by misinterpreting the text, misinterpretation still occurs. And if this can occur in ideal conditions (a clear, uncontroversial text in a class), then it should be clear how easy it is for misinterpretations to arise in “the wild.” As such, a person can easily misinterpret text or content and sincerely believe they have it right—thus leading to a false claim that can give rise to a conspiracy theory. Things are much worse when a person intends to deceive. Fortunately, there is an easy defense against such mistakes: read more carefully and take the time to confirm that your interpretation is the most plausible. Unfortunately, this requires some effort and the willingness to consider that one might be wrong, which is why misinterpretations occur so easily. It is much easier to go with the first reading (or skimming) and more pleasant to simply assume one is right.