Back in my graduate school days, I made extra money writing for science fiction and horror gaming companies. This was in the 1990s, the chrome age of cyberpunk: the future was supposed to be hacked and jacked. The future is now, but it is an age of Tinder, Facebook, and TikTok. There is still hope for a cyberpunk future, though: body hackers are endeavoring to bring some cyberpunk into the world. The current state of the hack is disappointing, but great things arise from lesser things, and hope remains for a chromed future.

Body hacking, at this point, is minor. For example, some people have implanted electronics under their skin, such as RFID chips. Of course, most dogs also have an implanted chip. As another example, one fellow who is color blind has a skull-mounted device that informs him of colors via sounds. As one might imagine, body hacks that can be seen have generated some mockery and hostility. Since I owe cyberpunk for my ability to buy ramen noodles and puffed rice cereal, I am obligated to come to the defense of the aesthetics of body hacking.

While some point out that philosophers have not given body hacking the attention it deserves and claim that it is something new and revolutionary, it still falls under established moral systems. As such, body hacking is a new matter for applied ethics but does not require a new moral theory.

The aesthetic aspects of body hacking fall under the ethics of lifestyle choices, specifically those regarding personal appearance. This can be shown by drawing analogies to established means of modifying personal appearance. The most obvious modifications are clothing, hairstyles, and accessories (such as jewelry). These, like body hacking, have the capacity to shock and offend people, perhaps by what the clothing reveals or by the message it sends (including literal messages, such as T-shirts with text and images). Unlike body hacks, however, these modifications remain on the surface.

As such, a closer analogy would involve classic cosmetic body modifications. These include hair dye, vanity contact lenses, decorative scars, piercings, and tattoos. In fact, these can be seen as low-tech body hacks that are precursors to the technological hacks of today. Body hacks go beyond these classic modifications and range from the absurd (a man growing an “ear” on his arm) to the semi-useful (a person who replaced a missing fingertip with a USB drive). While concerns about body hacking go beyond the aesthetic, body hacks do have the capacity to elicit responses like other modifications. For example, tattoos were once regarded as the mark of a lower-class person, though they are now accepted. As another example, not long ago men (other than pirates) did not get piercings unless they were willing to face ridicule. Now piercing is passé.

Because the aesthetics of body hacking are analogous to classic appearance hacks, the same ethics applies to these cases. Naturally enough, people vary in their ethics of appearance. I, as veteran readers surely suspect, favor John Stuart Mill’s approach to the ethics of lifestyle choices. Mill argues that liberty may be interfered with only to prevent a person from harming others. This is a reasonable standard of interference, which he justifies on utilitarian grounds. Mill explicitly addresses the ways of life people choose: “…the principle requires liberty of tastes and pursuits; of framing the plan of our life to suit our own character; of doing as we like, subject to such consequences as may follow; without impediment from our fellow-creatures, so long as what we do does not harm them even though they should think our conduct foolish, perverse, or wrong.”

Mill’s principle nicely handles the ethics of the aesthetics of body hacking (and beyond): body hackers have the moral freedom to hack themselves even though such modifications might be regarded as aesthetically perverse, foolish, or wrong. So, just as a person has the moral right to wear clothing that some would regard as too revealing or dye his hair magenta, a person has the moral right to grow a functionless ear on his arm or implant blinking lights under her skin. But just as a person would not have a right to wear a jacket covered in razor blades, a person would not have the right to hack herself with an implanted strobe light that flashes randomly into people’s eyes. This is because such things become the legitimate business of others because of the harm they can do.

Mill does note that people are subject to the consequences of their choices and that not interfering with someone’s way of life does not require accepting it, embracing it, or even working around it. For example, just as a person who elects to have “Goat F@cker” tattooed on his face can expect challenges in getting a job as a bank teller or schoolteacher, a person who has blinking lights embedded in his forehead can expect to encounter challenges in employment. Interestingly, the future might see discrimination lawsuits on the part of body hackers, analogous to past lawsuits for other forms of discrimination. It can also be expected that the social consequences of body hacking will change, just as they did for tattoos and yoga pants.

One final point is the stock concern about the possible harm of offensive appearances. That is, that other people do have a legitimate interest in the appearance of others because their appearance might harm them by offending them. While this is worth considering, there does not seem to be a compelling reason to accept that mere offensiveness causes sufficient harm to warrant restrictions on appearance. What would be needed would be evidence of actual harm to others that arises because the appearance inflicts the harm rather than the alleged harm arising because of how the offended person feels about the appearance. To use an analogy, while someone who hates guns has the right not to be shot, he does not have the right to insist that he never see images of guns.

The discussion has shown that body hacking falls nicely under the liberty to choose a way of life, provided that it does not inflict harm on others. But, as always, a person who strays too far from the norm must be aware of possible consequences, especially when it comes to dating and employment.

While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those who are not familiar with cut scenes, they are non-interactive movies within a game. They are used for a variety of purposes, such as providing backstory, showing the consequences of the player’s action or providing information (such as how adversaries or challenges work).

The reason Halo 5 motivated me to write this is an unfortunate one: Halo 5 made poor use of cut scenes, and I will argue for this claim as part of my sketch of a philosophical cut scene theory. Some gamers, including director Guillermo Del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a reasonable argument can be presented.

One fundamental difference between a game and a movie is the distinction between active and passive involvement. In a typical movie, the audience members merely experience the movie as observers and do not influence the outcome. In contrast, gamers experience the game as participants in that they have some control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing baseball and moving them into the bleachers to watch the game. They are, literally, taken out of the game. While many enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures, and mixing them together can harm the achievement of this pleasure.

Aristotle, in the Poetics, is critical of the use of spectacle (such as what we would now call special effects) to produce the tragic feeling of tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself, so that the player is playing and not merely observing. As such, cut scenes should be absent from games, or at the very least kept to a minimum.

One way to counter this argument is to draw an analogy to tabletop role-playing games such as D&D, Pathfinder, and Call of Cthulhu. Such games typically begin with something like a video game’s opening cinematic: the game master sets the stage for the adventure to follow. During play, there are often important events that take considerable game-world time but would be boring to play out in real time. For example, a stock phrase used by most game masters is “you journey for many days,” perhaps with some narrative about events that are relevant to the adventure, such as the party members becoming friends along the way. There are also other situations in which information needs to be conveyed, or stories told, that do not need to be played out because doing so would not be enjoyable or would be needlessly time consuming. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game, and the player knows they will be playing soon.

In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time consuming or beyond the mechanics of the game to play through. For example, if the game involves the player driving through a wasteland to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world would be appropriate. After all, driving for hours through a desolate wasteland would be very boring.

Because of the above argument, I do think that cut scenes can be a proper part of a video game, if they are used well. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events or addressing them with game play. It is also critical that the player does not feel they have been put into the bleachers, although a benched feeling can be appropriate. As a rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.

Since I was motivated by Halo 5’s failings, I will use it as an example of the bad use of cut scenes. Going with my narrative rule, a cut scene should not contain things that would be more fun to play than watch, unless there is some compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4, Jul ‘Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene near the start. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master let you fight its orc and goblin minions, but then just say “Fred the fighter hacks down the dragon. It dies” in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes to which my gaming buddy and I said, “Huh, that would have been fun to play rather than watch.” That, in my view, is a mark of bad choices about cut scenes.

The designers also made the opposite sort of error: making players engage in tedious “play” that would have been far better served by short cut scenes. For example, there are parts where the player must engage in tedious travel (such as ascending a damaged structure). While it would have been best to make this interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved “game play” in which the player remains in first-person shooter view but cannot use any combat abilities. For example, in one section of play the goal is to walk around trying to find various people to “talk” to. The conversations are scripted: when you reach the person, the non-player character says a few things and your character says something back. There are no dialogue choices. These should have been handled by short cut scenes. After all, when playing a first-person shooter, I do not want to walk around unable to shoot while I trigger uninteresting recorded conversations. These games are supposed to be “shoot and loot,” not “walk and talk.”

To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only the poor use of cut scenes (and poor acting) that should be condemned. And I do condemn Halo 5’s cut scenes.

In my previous essay I introduced the idea of using essential properties to address the question of whether James Bond must be a white man. I ran through this rather quickly and want to expand on it here.

As noted, an essential property (to steal from Aristotle) is a property that an entity must have. In contrast, an accidental property is one that it does have but could lack. As I tell my students, accidental properties are not just properties resulting from accidents, like the dent in a fender.

One way to look at essential properties is that if a being loses an essential property, it ceases to be. In effect, the change of property destroys it, although a new entity can arise. To use a simple example, it is essential to a triangle that it be three-sided. If another side is added, the triangle is no more. But the new entity could be a square. Of course, one could deny that the triangle is destroyed and instead take it as changing into a square. It all depends on how the identity of a being is determined.

Continuing the triangle example, the size and color of a triangle are accidental properties. A red triangle that is painted blue remains a triangle, although it is now blue. But one could look at the object in terms of its being a red object. In that case, changing the color would mean that it was no longer a red object but a blue object. Turning back to James Bond and his color: he has always been a white man.

Making Bond a black man would change many of his established properties, and one can obviously say that he would no longer be white Bond. But this could be seen as analogous to changing the color of a triangle: just as a red triangle painted blue is still a triangle, changing Bond from a white man to a black man by a change of actors does not entail that he is no longer Bond. Likewise, one might claim, for changing Bond to a woman via a change of actor.

As noted in the previous essay, the actors who have played Bond have been different in many ways, yet they are all accepted as Bond. As such, there are clearly many properties that Bond has accidentally. They can change with the actors while the character is still Bond. One advantage of a fictional character is, of course, that the author can simply decide on the essential properties when they create the metaphysics for their fictional world. For example, in fantasy settings an author might decide that a being is its soul and thus can undergo any number of bodily alterations (such as through being reincarnated or polymorphed) and still be the same being. If Bond were in such a world, all a being would need in order to be Bond is the Bond soul. This soul could inhabit a black male body or even a dragon and still be Bond. Dragon Bond could make a great anime.

But, of course, the creator of Bond did not specify the metaphysics of his world, so we would need to speculate using various metaphysical theories about our world. The question is: would a person changing their race or gender result in the person ceasing to be that person, just as changing the sides of a triangle would make it cease to be a triangle? Since Bond is a fictional character, there is the option to abandon metaphysics and make use of other domains to settle the matter of Bond identity. One easy solution is to go with the legal option.

Bond is an intellectual property, and this means that you and I cannot create and sell Bond books or films. As such, there is a legal definition of what counts as James Bond, and this can be tested by seeing what will get you sued by the owner of James Bond. Closely related to this is the Bond brand; the brand can change considerably and still be the Bond brand. Of course, these legal and branding matters are not very interesting from a philosophical perspective, and they are best suited for the courts and marketing departments. So I will now turn to aesthetics.

One easy solution is that Bond is whoever the creator says Bond is; but since the creator is dead, we cannot determine what he would think about re-writing Bond as someone other than a white man. One could, of course, go back to the legal argument and assert that whoever owns Bond has the right to decide who Bond is.

Another approach is to use the social conception: a character’s identity is based on the acceptance of the fans. As such, if the fans accept Bond as being someone other than a white man, then that is Bond. After all, Bond is a fictional character who exists in the minds of his creator and his audience. Since his creator is dead, Bond now exists in the minds of the audience; so perhaps it is a case of majority acceptance, a sort of aesthetic democracy. Bond is whoever most fans say is Bond. Or one could take the approach that Bond is whoever the individual audience member accepts as Bond; a case of Bond subjectivity. Since Bond is fictional, this is appealing. As such, it would be up to you whether your Bond can be anyone other than a white man. A person’s decision would say quite a bit about them. While some might be tempted to assume that anyone who believes that Bond must be a white man is thus a racist or sexist, that would be a mistake. There can be non-sexist and non-racist reasons to believe this. There are, of course, also sexist and racist reasons to believe this. As a metaphysician and a gamer, I am onboard with Bond variants that are still Bond. But I can understand why those who have different metaphysics (or none at all) would have differing views.

Since his creation, James Bond has been a white man. Much to the delight of some and to the horror of others, there were serious plans to have a black actor play James Bond. There has even been some talk about having a female James Bond. While racist and sexist reasons abound to oppose such changes, are there good reasons for James Bond to always be a white man? Before getting into this discussion, I will first look at the matter of the 007.

While James Bond has been known as 007, this is his agent designation, and there are other 00 agents. This is like the number used by an athlete on a team. As such, while James Bond has been 007, another person could replace him and get that number, just as the person who was 23 on a baseball team could retire and someone else could get that number (although teams do retire numbers). Within the James Bond universe, it would make sense for someone who is not a white man to get the 007 designation. This could occur for any number of in-universe reasons, most obviously that James Bond is not immortal and would eventually be too old or dead to remain 007. From an aesthetic standpoint, it would be interesting to see a Bond timeline in which time mattered, a Bond world in which he grew old and a new agent took his place. This would have the benefit of keeping Bond relevant today while also maintaining (in universe) the old Bond. There is, of course, the obvious financial risk: having a new 007 who is not James Bond can be seen as analogous to replacing a star athlete with a new person who gets their number. There is the risk of losing the drawing power. But my concern is with the more interesting matter of whether James Bond must be a white man, so I will leave the money worries to the branding gurus.

One obvious fact about the Bond of the movies is that different actors have played the character. While there are strong opinions about the best Bond, there was little debate about whether a new white man should take the role when the previous Bond aged out of the role or left for other reasons. The actors who played Bond were (in general) accepted as at least adequate for the role and there was no debate about whether the character was James Bond despite the change in actors. That is, there is no general issue with a new actor playing the role. There was also, obviously enough, no effort to explain in the Bond universe the change in Bond’s appearance. I mention this because of another famous character from United Kingdom fiction, Dr. Who. When Dr. Who began, the actor playing the doctor was already old and they ran into the problem of age. They hit on a brilliant solution: Dr. Who regenerates and radically changes appearance, though remaining the same person. This gives the show an interesting feature: continuity of character through changes of actors with an in-universe explanation.

While Bond movies do feature gadgets and plots that border on or even cross into science fiction (consider Moonraker), it is unlikely that the Bond cinematic universe would allow for such science fiction devices as alternative realities, such as in Marvel’s What If…? As such, the various Bonds are not explained in terms of being alternative or variant Bonds; they are all the James Bond. Now, if Bond can remain Bond despite the changes of actors, then it would seem that he would remain Bond even if he were played by a non-white actor. After all, if switching from Sean Connery did not mean that Bond was no longer Bond, then changing his race should not do that either. The actors who played Bond are different people, with significant differences in appearance, mannerisms, and voice. Having a black actor, for example, would just be another change of appearance. It would also seem to follow that having a female actor play Bond would make as much sense; it would just be another change in appearance. But one could attempt to argue that it is essential to Bond that he be a white man. This, of course, gets us into the notion of essential properties.

In philosophy, an essential property (to steal from Aristotle) is a property that an entity must have or cease to be that thing. In contrast an accidental property is one that it does have but could lack and still remain that thing. To use a simple example, it is essential to a triangle that it be three-sided. It must have three sides to be a triangle. But the size and color of a triangle are accidental properties; they can change, and it will still remain a triangle. So, the relevant issue here is whether being a white man is essential to being James Bond or merely accidental. Given all the changes in actors over the years, there are clearly many properties that Bond has accidentally as they can change with the actors while the character is still Bond. One advantage of a fictional character is, of course, that the author can simply decide on the essential properties when they create the metaphysics for their fictional world. But, of course, the creator of Bond did not do that, so we need to speculate using various metaphysical theories about our world. That is, would a person changing their race or gender result in the person ceasing to be, just as changing the sides of a triangle would make it cease to be a triangle? On the face of it, while such changes would clearly alter the person, they would seem to retain their personal identity. If this is true, then James Bond need not be a white man. But more will be said in the next essay.

 

In fiction, race/gender swapping occurs when an established character’s race or gender is changed. For example, the original Nick Fury character in Marvel is a white man but was changed to a black man in the Ultimates and in the Marvel Cinematic Universe. As another example, the original Dr. Smith in the TV show Lost in Space is a man; the Netflix reboot made the character a woman. As would be expected, some people are enraged when a swap occurs. Some are open about their racist or sexist reasons for their anger and are clear that they do not want females and non-white people in certain roles. Some criticize a swap by asking why there was a swap instead of either creating a new character or focusing on a less well-known existing character. For example, a critic of the He-Man reboot might be angry that King Grayskull was changed from white to black and raise the critical question “what about Clamp Champ?”

Such questions can be asked in bad faith; the person asking them makes it clear that they are angry that minorities and women are allowed to take traditional white male roles. As such, it is not that they want new women or minority characters or more focus on existing characters, the question is a cover for their racism and sexism. These questions serve well in this role as they are not overtly racist or sexist. In fact, when raised in good faith, these are reasonable aesthetic questions. Unfortunately, these questions are now well-established as dog-whistles that allow people to hide their racism and sexism from “normies” while sending a clear signal to those in the know. That some people use these questions without racist or sexist intent helps maintain their innocuous appearance. Someone using them as racist or sexist tools can claim, in bad faith, that they are just asking reasonable questions. And then go on to rage against how “the woke” are ruining everything by compelling race/gender swaps and forcing diversity. Those who call them out on this can seem crazy to those who do not understand the context. But let us ask these questions in good faith.

The most obvious practical reason why race/gender swapping is used instead of creating a new character or focusing on an established character is money. Creating and branding a new character (and building up a fan base) takes time and resources. And it is a gamble, since there is no guarantee of success. So, keeping the Nick Fury character while making him black made more practical sense than creating a new character to serve as the head of SHIELD. While less well-known characters can become a great success (for example, the Guardians of the Galaxy), this is risky, as there are often reasons why such characters are less well known. But this only explains why a new character was not created or why focus was not shifted; it does not explain why the race/gender swap occurred. Fortunately, this is easy to explain and even justify.

While some critics claim that the liberals, feminists and the woke are forcing companies to gender/race swap, these companies seem to be doing this for the same reason they do almost anything: money. Their marketing and research folks are aware that demographics and perceptions change. So, whereas fiction dominated by white male characters was the moneymaker in the past, more diverse characters appeal to some audiences now. If these changes were purely political and hated by most people, these swaps would be consistent and constant failures. This is not to make the absurd claim that they all succeed, just that they do not seem unusually prone to financial failure. Those who say “go woke, go broke” tend to cherry-pick their examples of failures and ignore the abundance of unsuccessful media that is “traditional” rather than “woke.”

No nefarious conspiracies are needed to explain swaps; this is just businesses trying to maximize profits by minimizing cost and exploiting established brands. Demographics and values change and this explains both the swaps and the rage at the swaps. 

It is also worth noting that despite the hyperbole about Hollywood not having new ideas, new characters do get created often. Netflix, for example, floods its service with new shows with new characters, which often include females and non-white people. And attempts are made to focus on characters that have been overlooked. These efforts often make the people who ask, “why not create a new character?” angry, as it exposes that they ask this question in bad faith. Aside from money, are there good reasons to race/gender swap rather than create a new character or focus on an existing character?

One excellent aesthetic reason is that doing so can make for an interesting plot that explicitly explores the influence of race and gender on the character and story. For example, one episode of Marvel’s What If…? explores what would have happened if Peggy Carter had become a super soldier rather than Steve Rogers. This swap has a meaningful impact on the story in part because of the assumed gender roles of that time (and now). I think this is one of the best aesthetic justifications for such swaps. Obviously, some people get very angry about such explorations.

Another good aesthetic reason, especially in a reboot, is to use the gender/race swap to create new story and character dynamics. While the focus is not on exploring race/gender issues, these do become new elements for an old character in telling a new story. This also tends to make some people very mad.

There are also various moral reasons to make such changes. One reason is to provide people with characters they can more easily identify with. While critics will claim that people should be able to identify with any hero, ironically this would favor allowing such swaps. After all, if people can identify with any hero, then it should not matter if they were race/gender swapped. Another moral reason is to help foster parasocial relationships using the power of established characters. One reason racists and sexists dislike diversity in fiction is that people can form parasocial relationships and this can make them more tolerant which is something racists and sexists oppose. There are, of course, bad reasons to race/gender swap.

Some might consider the “make money” reason to be a bad one, which is not unreasonable from an aesthetic or moral standpoint. If the swap is purely to make money and it has no aesthetic or moral justification, then criticism would seem warranted. But a swap could make money and be independently warranted on ethical or aesthetic grounds. Also, one would need to be consistent in such criticisms. To use an analogy, the Toyota Corolla of today is radically different from when I was a kid; yet the brand name is kept because doing so is advantageous and helps make money. But people do not get very angry about that.

As noted above, some claim that the swaps are compelled by political actors such as the liberals, SJWs, the woke, and feminists. If a swap were just the result of political compulsion and it lacked all ethical and aesthetic merit, then that swap should be criticized. But a swap could be compelled but also independently warranted on ethical or aesthetic grounds. It is also worth mentioning again that companies are motivated by profit; their political stances are shaped by the bottom line. And even if they were driven by politics or ideology, one would still need to show that their politics and ideology are bad. They usually are, but for different reasons.

While most swaps are motivated by hope of profits, there are good reasons to race/gender swap a character rather than creating a new one. But creating new characters or focusing on less well-known characters are also good options—it all depends on what one is trying to do. Ideally, the swap would be to tell a better story; but there is nothing inherently wrong with swapping for any number of other reasons.

Cuphead on SteamDirect

Innuendo Studios presents an excellent and approachable analysis of the infamous GamerGate and its role in later digital radicalization. This video inspired me to think about manufactured outrage, which reminded me of the fake outrage over such video games as Cuphead and Doom. There was also similar rage against the She-Ra and He-Man reboots. Mainstream fictional outrage against fiction included the Republicans’ rage against Dr. Seuss being “cancelled.” Unfortunately, fictional outrage can lead to real consequences, such as death threats, doxing, swatting, and harassment. In politics, fictional outrage is weaponized for political gain, widens the political divide between Americans, and escalates emotions. In short, fictional outrage at fiction makes reality worse.

I call this fictional outrage at fiction for two reasons. The first is that the outrage is fictional: it is manufactured and based on untruths. The second is that the outrage is at works of fiction, such as games, TV shows, movies, and books. Since Thought Slime, Innuendo Studios, Shaun, and others have ably gone through examples in detail, I will focus on some of the rhetorical and fallacious methods used in fictional outrage at fiction. These methods apply to non-fiction targets as well, but I am mainly interested in fiction here. Part of my motivation is to show that some people put energy into enraging others about make-believe things like games and TV shows. While fiction is subject to moral evaluation, it should be remembered that it is fiction, although our good dead friend Plato would certainly take issue with my view.

While someone can generate fictional outrage by complete lies, this is usually less effective than using some residue of truth. Hyperbole is an effective tool for this task. Hyperbole is usually distinguished from outright lying because hyperbole is an exaggeration rather than a complete fabrication. For example, if someone says they caught a huge fish, they would simply be lying if they caught nothing but would be using hyperbole if they caught a small fish. There can be debate over what is hyperbole and what is simply a lie. For example, when the Dr. Seuss estate decided to stop publishing six books, the Republicans and their allies claimed Dr. Seuss had been cancelled by the left. While it was true that six books would not be published, it can be argued whether saying the left cancelled them is hyperbole or simply a lie. Either way, of course, the claim is not true.

Even if the target audience knows hyperbole is being used, it can still influence their emotions, especially if they want to believe. So, even if someone recognizes that the “wrongdoing” of a games journalist was absurdly exaggerated, they might still go along with the outrage. A person who is particularly energetic and dramatic in their hyperbole can also use their showmanship to augment its impact.

The defense against hyperbole is, obviously, to determine the truth of the matter. One should always be suspicious of claims that seem extreme or exaggerated, although they should not be automatically dismissed, as extreme claims can be true, especially since we live in a time of extremes.

A common fallacy used in fictional outrage is the Straw Man. This fallacy is committed when someone ignores an actual position, claim or action and substitutes a distorted, exaggerated, or misrepresented version of it. This fallacy often involves hyperbole. This sort of “reasoning” has the following pattern:

 

  1. Person A has position X/makes claim X/did X.
  2. Person B presents Y (which is a distorted version of X).
  3. Person B attacks Y.
  4. Therefore, X is false/incorrect/flawed/wrong.

 

This sort of “reasoning” is fallacious because attacking a distorted version of something does not constitute an attack on the thing itself. One might as well expect an attack on a drawing of a person to physically harm the person. To illustrate the way the fallacy is often used, consider what happened to start the “outrage” over Cuphead. A writer played an early version of the game badly, noted that they were doing badly, and was generally positive about the game. All this was ignored by those wanting to manufacture rage: they presented it as a game journalist condemning the game for being too hard because they are bad at games. And it escalated from there.

The Straw Man fallacy is an excellent way to manufacture rage; one can simply create whatever custom villain they wish by distorting reality. As with hyperbole, there is the question of what distinguishes a straw man from a complete fabrication; the difference is that the Straw Man fallacy starts with some truth and then distorts it. To use the Cuphead example, if a person had never even played Cuphead or said anything about it, saying that they hated the game because they are incompetent would be a complete fabrication rather than a straw man.

Straw Man attacks tend to work because people generally do not bother to investigate the accuracy of claims they want to believe; and even if they are not already invested in the claim, checking a claim takes some effort. It is easier to just believe (or not) without checking. People also often expect others to be truthful, which is increasingly unwise.

The defense against a Straw Man is to check the facts. Ideally this would involve going to the original source or at least using a credible and objective source.

A third common fallacy used in fictional outrage is Hasty Generalization. This fallacy is committed when a person draws a conclusion about a population based on a sample that is not large enough. It has the following form:

 

Premise 1. Sample S, which is too small, is taken from population P.

Conclusion: Claim C is drawn about Population P based on S.

 

The person committing the fallacy is misusing the following type of reasoning, known as Inductive Generalization (also called Generalization or Statistical Generalization):

 

Premise 1: X% of all observed A’s are B’s.

Conclusion: Therefore, X% of all A’s are B’s.

 

The fallacy is committed when not enough A’s are observed to warrant the conclusion. If enough A’s are observed, then the reasoning is not fallacious. Since Hasty Generalization is committed when the sample (the observed instances) is too small, it is important to have samples that are large enough when making a generalization.

This fallacy is useful in creating fictional outrage because it enables a person to (fallaciously) claim that something is widespread based on a small sample. If the sample is extremely small and it is a matter of an anecdote, then a similar fallacy, Anecdotal Evidence, can be committed. This fallacy is committed when a person draws a conclusion about a population based on an anecdote (a story) about one or a very small number of cases. The fallacy is also committed when someone rejects reasonable statistical data supporting a claim in favor of a single example or small number of examples that go against the claim. The fallacy is considered by some to be a variation of Hasty Generalization.  It has the following forms:

 

Form One

Premise 1: Anecdote A is told about a member (or small number of members) of Population P.

Conclusion: Claim C is made about Population P based on Anecdote A.

 

Form Two

Premise 1: Reasonable statistical evidence S exists for general claim C.

Premise 2: Anecdote A is presented that is an exception to or goes against general claim C.

Conclusion: General claim C is rejected.

 

People often fall victim to this fallacy because stories and anecdotes have more psychological influence than statistical data. This leads people to infer that what is true in an anecdote must be true of the whole population or that an anecdote justifies rejecting statistical evidence. Not surprisingly, people usually accept this fallacy because they prefer that what is true in the anecdote be true in general. For example, if one game journalist is critical of a game because it has sexist content, then one might generate outrage by claiming that all game journalists are attacking all games for sexist content.

A person can also combine rhetorical tools and fallacies. For example, an outrage merchant could use hyperbole to create a straw man of an author who wrote a piece about whether video game characters should be more diverse and less stereotypical. The straw man could be something like this author wants to eliminate white men from video games and replace them with women and minorities. This straw man could then be used in the fallacy of Anecdotal Evidence to “support” the claim that “the left” wants to eliminate white men from video games and replace them with women and minorities.

The defense against Hasty Generalization and Anecdotal Evidence is to check to see if the sample size warrants the conclusion being drawn. One way that people try to protect their claims from such scrutiny is to use an anonymous enemy. This is done by not identifying their sample’s members but referring to a vague group such as “those people”, “the left”, “SJWs”, “soy boys”, “the woke mob”, or whatever. If pressed for specific examples that can be checked, a common tactic is to refer to someone who has been targeted by a straw man fallacy and just use Anecdotal Evidence again. Another common “defense” is to respond with anger and simply insist that there are many examples, while never providing them. Another tactic used here is Headlining.

In this context, Headlining occurs when someone looks at the headline of an article and then speculates or lies about the content. These misused headlines are often used as “evidence”, especially to “support” straw man claims. For example, an article might be entitled “Diversity and Inclusion in Video Games: A Noble Goal.” The article could be a reasoned and balanced piece on the pros and cons of diversity and inclusion in video games. But the person who “headlines” it (perhaps by linking to it in a video or including just a screenshot) could say that the piece is a hateful screed about eliminating white men from video games. This can be effective for the same reason that the standard Straw Man is effective; few people will bother to read the article. Those who already feel the outrage will almost certainly not bother to check; they will simply assume the content is as claimed (or perhaps not care).

There are many other ways to create fictional outrage at fiction, but I hope this is useful in increasing your defense against such tactics.

This contains many spoilers. When I first saw the trailer for The Tomorrow War my thought was “I wonder who that discount Chris Pratt is?” When I realized it was the actual Chris Pratt, my thought was “he must really need money.” Yes, it is exactly that kind of movie. I will start with some non-philosophical complaints and then move on to what is most interesting (and disappointing) about the flick: time travel.

Like many war movies of its ilk, this flick handles armored fighting vehicles by leaving them out. Instead, the human forces confront the aliens with infantry, Humvees, transport helicopters, and fighter-bombers. Oddly, the infantry is armed with standard guns that are largely ineffective against the aliens, even though the humans know this and plenty of existing infantry weapons would kill the aliens. No armored fighting vehicles (like tanks) are used, and Humvees are the mainstay of the forces. They are easily destroyed by the aliens charging into them like deranged moose (except when the main characters are in one). Maybe leaving out armored vehicles was a budget issue, but it mainly seems to be because the aliens, which are basically animals, would be slaughtered by modern armor: they could do no damage, and anti-vehicle weapons would annihilate them. My theory is that rather than come up with an alien that could beat armor, the writers just left out armored vehicles. The transport helicopters, as one would expect in such a film, generally fly within leaping range of the aliens, and attack helicopters do not exist (they do have armed drones, though). The fighter-bombers exist, as always, as a stupid plot device: in one part of the movie the hero is tasked with rescuing research that is the last hope for victory, yet an air strike is called on the otherwise empty city and cannot be called off. But enough of that; on to the time travel.

Time travel is always a mess in philosophy, science, and fiction. But it can be fun if used properly. The movie does have an interesting, though unoriginal, premise: humans in the future have built a time machine and are using it to recruit soldiers and supplies from the past to fight the aliens that have killed all but 500,000 people. As such movies must, this one puts limits on time travel. The biggest limitation is that the time “tunnel” has a fixed temporal range of 30 years. When people go forward, they go forward thirty years; when they go back, they go back thirty years. One of the minor characters explains it in terms of two connected rafts in a river: they always stay the same distance apart but move along with the river. One of the supporting characters asks the obvious question of why they do not make more rafts. The answer is that the time machine they have is held together with bubble gum and chicken wire, so they cannot build another one. While not the worst answer a writer could come up with, it is stupid within the rules of the movie: people and equipment can move freely between the present and future. More time machines could be made in the past and brought to the future. They could even build a time machine in the present and open a time tunnel to 30 years earlier, giving humanity another 30 years of preparation time, and then do that repeatedly until the paradoxes destroy reality. A better answer would have been some techno-metaphysical babble about how the time stream can only permit one time tunnel to operate. But let us get back to the fact that people and things can move between the times.

At one critical point in the movie, the heroes have completed a toxin that will kill the female aliens. But just as they complete it, their last base is overrun, and Chris Pratt is recalled to the past with the toxin. The time machine is destroyed, so the war has been lost. Apparently having struck his head in the fall, Pratt thinks he has no way of getting the toxin to the future, so everything is lost. The nations of the world also just sort of decide to give up, which would make sense only if everyone believed in metaphysical determinism. Pratt’s character apparently lost the ability to understand how time works: the toxin he has in the present will eventually reach the future. It will just travel one day at a time towards that future.

Going back to the raft analogy, the time machine is like a pneumatic tube of fixed length: it can quickly move things back and forth over that distance. But, and here is how normal time works, one can also walk an object toward the other end of the tube in the future. As such, when the aliens show up, the humans will have as much toxin as they wish to make to use against them. This feature of time would also allow the humans to plan their missions very effectively. To illustrate, I will use a smaller version of the time tunnel thing.

Suppose that on 12/5/2026 I build a time tunnel that reaches back 1 year (roughly). On that day, the tunnel pops open on 12/5/2025 and Mike 2026 can hand Mike 2025 a USB drive full of useful information (such as winning lottery numbers, weather reports, news reports on disasters, and so on). How would this be possible? Here is how. When Mike 2026 arrives, he tells Mike 2025 to fill up the drive. Mike 2025 spends the year doing just that, so in 2026 the drive is full of information and Mike 2026 hands it to Mike 2025 when he arrives. Mike 2025 can now use all that information.

In the case of the movie, when the time tunnel opens for the first time, they could do the same thing: as people come from the future, they just update information. Thirty years after the time tunnel opens, the travelers have all that information and can use it to change missions that failed, and so on, thus changing the future. This, of course, creates the usual time travel mess of changing the future based on information from the future. An analogous problem also arises from bringing objects back from the future that depend on the future to exist. I will use the toxin from the movie to illustrate this old problem.

As mentioned above, Pratt’s character helps create a toxin in the future and brings it back to the past. He is weirdly baffled about how he will get it to the future but decides not to give up the fight. With the help of some others, he manages to determine that the aliens landed long ago and were frozen in the ice (like in The Thing). So, he does the sensible thing: he goes to a government official and tells him he knows where the aliens are and has the toxin to kill them. So, the official does the usual movie thing: he just refuses. So, Pratt and his associates do the usual movie thing and go it alone. They use the toxin to kill a couple of aliens, then blow up the alien ship (so they did not need the toxin). Then Pratt and his dad beat up the female that escapes the ship. The movie ends with everyone being happy. Except, obviously, the aliens and anyone who might have wanted the technology in that ship. Because of this, the tomorrow war never occurs. Which leads to some problems, but I will focus on the toxin.

The toxin only exists because it was created in the future in response to the aliens. To steal from Aquinas, who stole from Aristotle, “To take away the cause is to take away the effect.” As such, the defeat of the aliens would mean that the toxin would never exist; it could not be there in the past. Also, going back to the information problem, Pratt only knows about the aliens because of the tomorrow war, which he prevented from happening. They could, of course, have done a “Yesterday’s Enterprise” thing: the whole timeline changes or something. This is just one of the many paradoxes of time travel.

Another approach, which one could mentally write into the movie if one wishes, is that time travel is dimensional travel or creates time-line branches (which is effectively dimensional travel). So, the future Pratt goes to is real and does not change for it is what it is. When he comes back from that future (alternative reality) with the toxin and kills the aliens in his present, this creates a new future timeline for him. This means, of course, that his alternative adult daughter dies in that alternative future, but his new alternative daughter does not, since the war does not happen in the new timeline.

The movie, I think, would have been a bit more interesting if they had used the alternative timeline approach, and they could have had a brief moral debate about obligations to help in an alternate future of one’s own reality. Or it could be a plot twist that the people doing the “time travel” knew they were going to another reality but decided to lie about it to get help.

In terms of the quality of the movie as a movie: well, it is what one would expect from either a store-brand Chris Pratt or a name-brand Chris Pratt who really just needs the money.

Some years ago, the right made Dr. Seuss and Mr. Potato Head battlegrounds in their manufactured culture war. When Pepe Le Pew was removed from the Space Jam 2 movie, there were cries that the boundary-ignoring skunk had been cancelled. As I have noted in previous essays, these are all just examples of companies changing their products. While some attributed this to companies going woke, the more reasonable explanation is that they thought it would be profitable to make these changes and were trying to be smart capitalists. Sometimes their marketing efforts fail, as happened with Bud Light.

If these companies had been coerced into making such changes, then this could have been morally wrong. If the state had tried to impose these changes, then it would be reasonable to raise the First Amendment, as the state would be forcing companies to change their products and brands against their will. But if the state was not involved, then the Amendment does not apply, as private individuals cannot violate it when acting in their private capacity.

If non-state actors coerced these companies, then this could be immoral, since using such power to violate rights is usually wrong. For example, when an employer uses their coercive advantage over employees to interfere with their freedom of expression, this is usually legal but morally wrong. However, this does not seem to have been the case; no outsider appears to have forced these changes.

It could be argued that the companies were coerced by popular opinion, that the “woke mob” pressured them into making these changes. But this does not seem morally problematic, since consumers have the right to express their values to companies and companies routinely shift their products and brands to meet consumer demand. If companies making changes based on changing values counted as coercion, then companies would also have been “coerced” every time they responded to changing tastes and styles. But we do not think that the decision to stop making Tab was the result of coercion, nor do we think that changes in fashion are the result of coercion: styles and tastes change over time and companies change along with them.

One matter that does not seem to be discussed is the remedy the right would want for the alleged harm of cancellation. That is, what should the state do in response to these changes? If there were adequate evidence of illegal coercion, then the state should step in. But there was no evidence of that; these companies seemed fine with the changes they decided to make. It is the right that was outraged, not Hasbro or the estate of Dr. Seuss. Should folks on the right be able to use the coercive power of the state to force these companies to change things back to how they were? In these cases, should laws have been passed requiring that the books be kept in print, that the “Mr. Potato Head” brand be kept, that Le Pew be returned to the movie, and so on for all that was alleged to be cancelled? This would, ironically, seem to be compelled speech and a violation of the First Amendment. If the folks on the right think the companies should have decided, well, they did. They just did not decide the way some of the right wanted at the time. The behavior of the Trump administration and Republican-controlled states has shown how much they care about free expression. Based on their behavior, their concern is with ensuring that the content they dislike is cancelled and the content they like is either unrestricted or imposed by the coercive power of the state.

A few years ago, the estate of Dr. Seuss decided to pull six books from publication because the works include illustrations that “portray people in ways that are hurtful and wrong.” This was taken by some on the right as an example of “cancel culture” and it became a battleground in the culture war designed to distract from real problems. There was speculation on the motives of the decision makers. They might have been motivated by sincere moral concerns, they might have been motivated by woke marketing (sales did increase after the announcement), or they might have (as the right suggests) yielded to the threat of “cancel culture.” While questions of motives are interesting, my main concern is with the philosophical matter of re-assessing works of the past in the context of current values.

This is not a new problem in philosophy and David Hume addressed the matter long ago. As Hume sees it, we can and should make allowances for some differences between current and past customs. He says, “The poet’s monument more durable than brass, must fall to the ground like common brick or clay, were men to make no allowance for the continual revolutions of manners and customs, and would admit of nothing but what was suitable to the prevailing fashion. Must we throw aside the pictures of our ancestors, because of their ruffs and fardingales?” Hume is right to note that elements of past art will be out of tune with our time and that some of these differences should be tolerated as being the natural and blameless result of shifting customs. Such works can and should still be enjoyed.

As an example, movies made and set in the 1960s will feature different styles of clothing, different lingo, different styles of filming, and so on. But it would be unreasonable to look down on or reject a work simply because of these differences. Hume does, however, note that a work can cross over from having blameless differences in customs to being morally problematic:

 

But where the ideas of morality and decency alter from one age to another, and where vicious manners are described, without being marked with the proper characters of blame and disapprobation; this must be allowed to disfigure the poem, and to be a real deformity. I cannot, nor is it proper I should, enter into such sentiments; and however I may excuse the poet, on account of the manners of his age, I never can relish the composition. The want of humanity and of decency, so conspicuous in the characters drawn by several of the ancient poets, even sometimes by Homer and the Greek tragedians, diminishes considerably the merit of their noble performances, and gives modern authors an advantage over them. We are not interested in the fortunes and sentiments of such rough heroes: We are displeased to find the limits of vice and virtue so much confounded: And whatever indulgence we may give to the writer on account of his prejudices, we cannot prevail on ourselves to enter into his sentiments, or bear an affection to characters, which we plainly discover to be blamable.

 

Hume thus provides a rough guide to the moral assessment of past works: when a work’s content violates contemporary ethics, this is a significant flaw in the work. Hume does note that such works can still have artistic merit, and one can understand that the artist was operating within the context of the values of their time, but these flaws are blameworthy and diminish our ability to enjoy the work. Put in marketing terms, the work loses its appeal to the audience. Hume’s view can easily be applied to the Dr. Seuss situation.

When Dr. Seuss created these works, the general customs and ethics of America (and the world) were different. While there were people who held moral views that condemned racist stereotypes in art, there was a general acceptance of such things. In fact, many people would not even recognize them as being racist at the time they were created. Since I hold to an objective view of morality, I think that racist images have always been wrong, but I do recognize the impact of culture on moral assessment. There are, of course, ethical relativists who hold that morality depends on the culture: so, what was right in the earlier culture that accepted racism would be wrong now in a culture that is more critical of racism.

There are also theories that consider the role of cultural context in terms of what can reasonably be expected of people, which in turn shapes how people and works are assessed. That is, while morality is not relative, it can be harder or easier to be good in different times and places. So, a person trying to be a decent human being in the 1930s faced different challenges than a person trying to be a decent person in 2025. Harms also need to be taken in context: while racist stereotypes in drawings are seen as very harmful today, in the context of the racism of the past, these drawings would pale in comparison to the harms caused by racist violence and laws. This is not to deny the existence of racist violence today; it is just to put matters in context: things are bad, but not as bad as the past (though the future might be worse).

Whether we think that morality has changed or that more people are moral, these racist stereotypes are now broadly rejected by people who are not racists. As such, it made both moral and practical sense for the estate to take these books out of print. From a practical standpoint, racism can taint a business’ reputation, and unless one focuses on marketing to racists (which could be a profitable option), purging racist content makes sense. In terms of ethics, racist images are wrong. One could advance a utilitarian argument here about harm, a Kantian argument about treating people as ends and not means, or many other sorts of arguments depending on what ethical theory you favor. As such, removing the products from sale makes sense, especially since they are books for children. We generally accept that children need more protection than adults. While adults can (sometimes) make informed decisions about possible harms from content, children generally have not learned how to do this. So just as we would not allow children access to firearms, alcohol, or pornography, it is ethical for a company to decide to protect them from racism.

While it is tempting to see children’s books as just amusements, children can be profoundly shaped by the content of such works, just as they are shaped by all their experiences. This is, perhaps, why many parents and groups have been instrumental in making Captain Underpants the most banned (cancelled?) book in America. Children will generally pick up on racist stereotypes and can internalize them. Even if they do not become overt racists, these stereotypes will impact how they think and act throughout their lives. As Plato argued, “true education is being trained from infancy to feel joy and grief at the right things.” Our good dead friend Aristotle developed this notion in his Nicomachean Ethics, and he makes an excellent case for how people become habituated. Assuming Aristotle got it right, the estate made the right choice in discontinuing these works.

In closing, it is worth wondering why the right was so concerned about these works. If they were consistent defenders of freedom of expression and freedom of choice, then they could argue that they are merely applying their principles of freedom. However, they are not consistent defenders of these freedoms and one must suspect that they are fighting for racism rather than freedom.

Back when Black Lives Mattered, (HBO) Max briefly pulled ‘Gone with the Wind’ from its video library as an indirect response to protests about racism. The movie was later returned with a disclaimer to provide context. This struck a reasonable balance between the aesthetic importance of the work and the moral importance of presenting slavery honestly. The disclaimer also provided context for the film, such as how racism impacted the black actors. Perhaps because of the success of this approach, Max also added a disclaimer to the classic comedy ‘Blazing Saddles’.

This classic comedy-western engages with racism and prominently features racist characters using racist language. But, as the disclaimer noted, it is anti-racist. The racists are the villains. Racism is savagely mocked. As such, it might be wondered why the film required a disclaimer. This seems like putting a disclaimer before ‘All Quiet on the Western Front’ to make it clear that the famously anti-war film is not pro-war.

One concern about putting a disclaimer on a film like ‘Blazing Saddles’ is that it could provide ammunition to those saying the “politically correct cancel culture of the left” is out of control. It could be used as “proof” that “the left” is wrong about criticisms of racism in aesthetic works. In reply, the right can create outrage ex nihilo, and thus a disclaimer will have no meaningful impact aside from providing a focus for the outrage. That said, the disclaimer might have some impact on those critical enough to check whether the target of the outrage exists, yet not critical enough to be thorough critics of the outrage.

This might seem a silly concern, but things like a movie disclaimer can strike “normies” as ridiculous, and this can be exploited as part of the radicalization process. The strategy is that an “absurd” response from “the left” can help build a gradual ramp leading into the pit of, for example, racism.

Another concern is that the disclaimer might be seen as insulting. It seems to suggest viewers are too stupid to understand the obvious point of the movie, that racism is bad. As a counter, people do misread comedy. A good example of this is the character of “Stephen Colbert” once played by Stephen Colbert on “The Colbert Report.” While Colbert is a liberal who mocks conservatives (and liberals), some conservative viewers believed that he was serious about being a conservative. They understood he was doing comedy (the show was on Comedy Central) but did not get his point. While it is anecdotal evidence, I know conservatives who thought this. Since we all enjoyed the show, I was careful not to spoil their fun with the truth. As such, it is possible that the movie might be seen by some people as endorsing racism.

As another example, the 1975 “Germans” episode of “Fawlty Towers” includes the use of the N-word by the character of the Major. In the episode, the Major corrects someone for using a racist slur by suggesting they use another racist slur. In 2013, the BBC edited the episode to take out the word, which created a negative reaction in some quarters. A few years ago, the BBC pulled the episode from streaming.

John Cleese, who played the main character on the show, said that the racism of the Major was presented in a negative light and that the point of the line was to criticize (with comedy) rather than commit racism. According to Cleese, “You see, what people don’t understand, there’s two ways of criticizing people. One is a direct criticism. And the other is to present their views as they would present them, but to make sure that everyone realizes that the person presenting those views is a fool. And literal-minded people, who are the curse of the planet, can’t understand that. They think if you say something, you must mean it literally.” The decision makers seem to have come to agree with Cleese. The episode was eventually restored, though some streaming services included a disclaimer or warning.

While these are only two examples, they do show how people can be mistaken about the intent of comedy. As the “Fawlty Towers” example shows, people can be confused even when racist language is used in the service of anti-racist comedy. The use of disclaimers for comedies critical of racism would thus seem justified: the explanation provided can help viewers understand that the racism depicted in the comedy is not an endorsement but a target of criticism. This benefit must be balanced against the possible harm of the disclaimer itself, though. It can be argued that a reaction from “the left” against a work they mistake as racist would provide even better fodder for the right-wing outrage engines than a disclaimer would. If that argument is a good one, then it would serve to justify the use of disclaimers.

As a final point, it is certainly sensible to inform potential viewers about content they might find problematic, but it might suffice to add a brief text warning at the start (“contains comedy critical of racism that references racism”) rather than a full disclaimer.