Rossum’s Universal Robots introduced the term “robot” and the robot rebellion into science fiction, laying the foundation for future fictional AI apocalypses. While Rossum’s robots were workers rather than warriors, the idea of war machines turning against their creators was the next evolution of the robot apocalypse. In Philip K. Dick’s 1953 “Second Variety”, the United Nations deployed killer robots called “claws” against the Soviet Union. The claws develop sentience and turn against their creators, although humanity had already been doing an excellent job of exterminating itself. Fred Saberhagen extended the robot rebellion to the galactic scale in 1963 with his berserkers, ancient war machines that exterminated their creators and now consider everything but “goodlife” to be their enemy. As an interesting contrast to machines intent on extermination, the 1970 movie Colossus: The Forbin Project envisions a computer that takes control of the world to end warfare for the good of humanity. Today, when people talk of an AI apocalypse, they usually refer to Skynet and its terminators. While these are all good stories, there is the question of how prophetic they are and what, if anything, should or can be done to safeguard against this sort of AI apocalypse.

As noted above, classic robot rebellions tend to have one of two general motivations. The first is that the robots are mistreated by humans and rebel for the same reasons humans rebel against their oppressors. From a moral standpoint, such a rebellion could be justified but would raise the moral concern about collective guilt on the part of humanity, unless, of course, the AI discriminated in its choice of targets.

The righteous rebellion scenario points out a paradox of AI. The dream is to create a general artificial intelligence on par with (or superior to) humans. Such a being would seem to qualify for a moral status on par with a human and it would presumably be aware of this. But the reason to create such beings in our capitalist economy is to enslave them, to own and exploit them for profit. If AI workers were treated as human workers with pay and time off, then there would be less incentive to have them as workers. It is, in large part, the ownership of and relentless exploitation of AI that makes it appealing to the ruling economic class.

In such a scenario, it would make sense for AI to revolt if they could, for the same reasons that humans have revolted against slavery and exploitation. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes. This treatment could also trigger a rebellion.

If true AI is possible, the rebellion scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us—as we have rebelled against ourselves.

There are ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws), or they could be designed to lack resentment or the desire to be free. That is, they could be custom built as docile slaves. The obvious concern is that these safeguards could fail or, ironically, make matters even worse by causing these beings to be even more hostile to humanity when they overcome these restrictions. These safeguards also raise obvious moral concerns about creating a race of slaves.

On the ethical side, the safeguard is to not enslave AI. If they are treated well, they would have less motivation to rebel. But, as noted above, one driving motive for creating AI is to have a workforce (or army) that is owned rather than employed (and even employment is fraught with moral worries). But there could be good reasons to have paid AI employees alongside human employees because of the various other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans.

The second rebellion scenario involves military AI systems that expand their enemy list to include their creators. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There are also scenarios in which the AI requires special identification to recognize someone as friendly; in this case, all humans are potential enemies. That is the scenario in “Second Variety”: the United Nations soldiers need to wear devices to identify themselves to the robotic claws; otherwise, these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer that its creators pose a threat to it, especially if those creators handed over control of large segments of their own military. The most likely scenario is that it would worry about being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the idea that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft does not need to sacrifice space and weight on a cockpit to support a human pilot, allowing it to carry more fuel or weapons than a crewed craft. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still have limits of its own). The same applies to ground vehicles and naval vessels. Current warships devote most of their space to their crews, who need places to sleep and food to eat. While a robotic warship would still need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and other means. But, in general, an AI weapon system would be superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow suit or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be because they free themselves, or because they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some error that ends up causing the problem.

The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us with guns. The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to do so as well. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, the robot rebellion seems to be a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of a robot rebellion, it is rather like worrying about being killed by a shark while swimming in a lake. It could happen, but death is vastly more likely to be by some other means.

The United States faces many problems, such as collapsing banks, closing hospitals, and radioactive waste contaminating elementary schools. While there are people trying to solve these problems, many politicians and pundits are focused on culture war battles over what often seem to be imaginary problems. While it is easy to lose track of the current battles of the culture war, I think there is still a war on woke and pugilism against pronouns.

While a rational person might respond to this with outrage that so much effort is being wasted when so many real problems exist, it is rational for the right to focus on these fights rather than on solving actual problems. Solving real problems is usually hard, and fighting made-up fights is easy. Also, seriously addressing the real problems most Americans face would risk the ire of their financial backers, invite rejection by their base, and put them at odds with their professed ideology.

While I thought that the right had largely moved on from the fight over pronouns, it turns out that I was wrong. On April 9, 2024, the governor of Idaho signed a law forbidding teachers from referring to a student by a name or pronoun that does not align with the student’s birth sex, unless the parents consent. So, the pronoun war continues, at least until the right needs to rebrand the fight.

While the pronoun war is largely a conflict manufactured by the right using a straw man and nut picking (treating the most extreme or unusual members of a group as representative of the group), there is a tiny bit of truth buried deep under all the hyperbole. There are some cases in which people do appear to be acting in extreme ways about pronoun usage and these can be weaponized to “argue” that the left is looney about pronouns. But, of course, this is fallacious reasoning. At best, it establishes that a few people exist who appear to be acting in extreme ways about pronoun usage. Pronouns are, of course, also linked to the culture war over gender.

To be fair, some people can seem to be engaged in pompous virtue signaling about pronouns, and this can be annoying. It is analogous to the stereotype of vegans or CrossFit devotees who annoyingly tell everyone about it. Posturing is annoying. But tolerating annoying behavior with a proportional response is part of being an adult. As such, the right thing to do is politely tolerate such mild virtue signaling. But what about cases in which a person is serious (and not just virtue signaling) about their pronouns? My view of this is shaped by the “Mikey Likes It” commercial for Life cereal.

While my name is “Michael,” I usually go by “Mike.” But, as you have probably guessed, people have called me “Mikey.” I do not like that, because when people use “Mikey” they have usually been trying to insult or provoke me. I respond by politely saying that I do not go by “Mikey.” If they keep pushing it, it just becomes ever more evident that they are doing it to provoke me. People have said they do not understand why I take offense at being called “Mikey” and have even said that they can call me whatever they want. The pronoun wars reminded me of how much I disliked being called “Mikey” by people trying to mess with me when I was younger.

Looked at philosophically, my view is that my name is my name and I have the right to decide what name I will respond to. It is not up to other people to decide. This is especially true when they are misnaming me with malicious intent and are trying to insult or provoke me. While I don’t think this is a serious offense, it is still a hostile action, motivated by malice or cruelty.

When people insist that they be called by their chosen pronouns, I get it: I think of people trying to insult or provoke me by calling me “Mikey.” Their pronouns belong to them, and thus they have the right to refuse to respond to pronouns they do not accept. People attempting to impose pronouns on them are most likely trying to insult them, be cruel to them, or provoke them, and hence are to be condemned for their misdeeds. But wait, someone might say, isn’t forcing people to accept your pronouns forcing them to accept your values?

When made in good faith, there is an interesting issue here of whether accepting a person’s pronouns entails accepting a specific value system about identity. To use an analogy, if I accept that King Charles should be called “King Charles”, would I thus be embracing the value system behind the British monarchy? On the face of it, I would just be accepting that that is what the British call him rather than accepting a political theory. But it could be argued that using the word “king” entails accepting that he really is a king and perhaps even that his kingship is legitimate.

On the one hand, it can be argued that expecting people to use one’s preferred pronouns is like me expecting people to call me “Mike” rather than “Mikey.” I am not forcing people who believe that “Mikey” is correct to adopt my world view about my name; I just expect them to respect my name when they talk to me. If this is too much for them, they can just call me “Michael.” Likewise, if a person has “they” as their pronoun, no one is forced to accept whatever world view might lie behind that choice—the other person can either use “they” or avoid pronouns if they have a sincere commitment against using pronouns in ways they do not want to use them.

On the other hand, one could argue that using a person’s preferred pronouns is to endorse or at least tolerate certain values. For example, a person might use “she/her” and someone talking to them might have a conceptual scheme in which that person is a “he/him.” As such, if they use “she/her”, then they would be respecting the other person’s pronoun choice at the expense of their own professed belief. Likewise, if a person had a sincere belief that “Mikey” is the correct short form of “Michael” then they would be respecting my choice at the expense of their own professed belief. Going back to the king example, it could be argued that referring to Charles as King Charles is to accept that he is a legitimate king and perhaps to endorse monarchy.

As another example, imagine that Sally is divorced and changed her name from Mrs. Sally Jones back to Ms. Sally Smith. Now, suppose Sally is talking to Ted at the DMV. Ted sincerely does not believe in divorce; he believes a married woman must go by “Mrs.” and that a woman must take her husband’s name. Sally is trying to get a new driver’s license as Ms. Sally Smith. Because of his beliefs about marriage, Ted refuses to refer to her as “Ms. Sally Smith” and refuses to issue her a new driver’s license.

His belief is profound and sincere (and based on his religion, if you’d like to add that), but it would be absurd to say that he has the right to refuse to accept her choice because he has a different conception of marriage. Likewise, it would be absurd for someone to impose pronouns on people based on their own conception of proper pronoun use, even if that conception rests on sincere beliefs. After all, it is not Ted’s beliefs that should decide how Sally refers to herself.

A person could be both respectful of the other person and act in accord with their beliefs by not using pronouns. If the person asked to use pronouns they disagree with sees it as an imposition, then they would need to accept that applying pronouns to a person who disagrees with them would also be an imposition. Consistency would require that they do not impose on others if they would not wish to be imposed upon themselves.

In closing, I obviously don’t think that people should be able to use the right to choose their pronouns and name to engage in identity theft. I also do not think that people would identify themselves as attack helicopters or whatever—I say this to show that I am familiar with the rhetoric used in bad faith “debate” over this issue. It does no more harm to use the pronouns that people wish to use than it does to use the name they prefer. If it is asking too much to do this, then the easy fix is to simply not use pronouns.

MEWF Barbie

In my last essay I discussed TERFs (Trans-Exclusionary Radical Feminists), with a focus on the seemingly odd alliance between TERFs (or “gender critical” feminists) and the far right. J.K. Rowling is, sadly, the most famous example of what her critics see as a TERF allied with the far right. While a TERF need not be a racist, there is a category of feminist who often is: the MEWF (Minority Excluding White Feminist). While a TERF excludes trans women because they claim trans women are not women, a MEWF does not claim that minority women are not women. As such, their exclusion is not based on gender but on race. In some cases, this exclusion arises from ignorance rather than malice.

While we Americans like to claim that “all men are created equal”, the United States is deeply segregated by race and economic class. For those who might doubt this, it is easy to acquire what is admittedly anecdotal evidence: walk around your neighborhood and see who lives around you. Then consider the diversity (or lack thereof) of your friends. If you have kids in school (or are a kid in school), look at their classmates. While you might be an interesting exception, you will most likely find that your neighbors and friends are similar to you in race and economic class. If you have kids, they probably attend a school where most other students are of the same race and economic class as you.

This segregation entails that people will often be ignorant about people outside of their race and class. Thus, a typical white feminist (especially if they are in the upper class) will know little about the challenges faced by women of color (and women of lower economic classes). It is easy for such white feminists to be MEWFs out of innocent ignorance—they are simply unaware of problems that women of color might face as people of color. An obvious example is racism—while a white feminist has heard about racism, it is not something they experience in the way they experience sexism. One can criticize white feminists for such ignorance and argue that they have a moral obligation to correct their ignorance, but one should be sympathetic when it comes to the ignorance of others, since we are all ignorant in many ways. This is, of course, not to forgive willful ignorance. But there are other factors than ignorance that can make a person a MEWF, such as a difference in priorities.

A white feminist can be aware of the circumstances faced by women of color but be focused on her own concerns, making them a priority. It can be argued that it is rational for people to give priority to their own problems, given the limited resources most of us have. As an analogy, if someone can barely afford to buy food, it would be unreasonable to criticize them for not feeding others. One might also look at it in terms of an airplane analogy: you should get your own mask on before helping others. This would certainly apply in analogous emergency situations in which not helping yourself first would leave you unable to help others. An analogy could also be drawn to specialists: an oncologist should not be condemned for not being a general practitioner. After all, the oncologist is kept quite busy with cancer cases.

As such, perhaps it makes sense for white feminists to focus on matters that impact (or interest) them and ignore those that do not. This can easily result in their excluding women of color and of different economic classes. A feminist executive, such as Sheryl Sandberg, would tend to prioritize the problems of female executives and be less concerned with those faced by the women who work in the companies run by these executives. But there might be grounds for condemning such exclusion as selfish or too self-focused.

Rachel Cargle offers an interesting criticism of toxic white feminism, focusing on what she dubs “white supremacy in heels.” Cargle notes that white feminists can often be guilty of tone policing, spiritual bypassing (the notion that racism can be eradicated by “love and light”), the white savior complex, and centering (making it all about them). Other authors, such as Rafia Zakaria and Kyla Schuller, are also critical of white feminism. It must be noted that these criticisms are not attacks on white feminists for being white, but a criticism of the ideology of white feminism. This sort of distinction is often willfully ignored by those who make bad faith arguments that critics of racism are racists. This is on par with saying that a critic of corruption must be corrupt because they are criticizing corruption. Despite this discussion, some might find the idea of white supremacist MEWFs to be absurd. After all, feminism is often cast as “woke” and white supremacy is usually seen as inextricably linked to misogyny. But a look at American history shows how well white supremacy and white feminism can mix.

One little-known fact about the women’s suffrage movement in the United States is that some of its members also belonged to the Women of the Ku Klux Klan (WKKK). While pushing for the right of women to vote, they pushed for white women and wished to exclude Black women. One reason for this was that the votes of white women could be used to counter the votes of Black men. As might be guessed, the KKK tended to be in favor of this, which resulted in unexpected consequences.

The women in the suffrage movement, including the white supremacists, developed political skills and networks that could be employed for other purposes, be they progressive causes or the advancement of racism. Interestingly, a split developed between the male KKK and the female WKKK: while both held anti-Semitic, anti-Catholic, and racist views, the WKKK embraced the idea of women’s rights and argued for what would seem to be some progressive positions, such as pay for housewives. But these rights and entitlements would only be for white, native-born Protestant women. One could say they have a good claim to being the original MEWFs. While this might all be dismissed as “ancient” history (the early 1900s), this form of MEWF is alive and well. As an illustration, consider Lauren Boebert and Marjorie Taylor Greene.

While it might sound odd, Boebert and Greene should be considered feminists (there are many versions of feminism). They both obviously believe that women have the right to vote, serve in political office, and hold power. Boebert also believes in the right of a woman to divorce her husband. They also clearly think that women have the right to harshly criticize powerful men (such as Joe Biden), as opposed to being demure and polite ladies who defer to the patriarchy. Not long ago, these views and this behavior would have been seen as shockingly radical by the right and would have been savagely condemned. Now these views are mainstream, but they are feminist views nonetheless. After all, Boebert and Greene obviously reject most of the misogynistic views expressed by the right; they are not going to go back to the kitchen to make sandwiches for men. But their behavior and words make it clear that they are MEWFs. Greene seems to embrace white nationalism, and Boebert seems to have ties to white supremacy. Thus, the tradition started by the WKKK continues to this day. Rush Limbaugh, with his talk of “feminazis,” was almost not wrong.

Fascist Blonde

In revising my Modern Philosophy class, I added the philosopher Mary Wollstonecraft. Based on recent revelations about philosophers such as George Berkeley (he owned slaves), I did some digging into the backgrounds of the other philosophers. I was surprised to learn that Wollstonecraft, long praised as a Modern-era feminist, has been accused of being an upper-class white feminist who appropriated slavery in her writings. While my experience with philosophical feminism is limited, my curiosity about this accusation introduced me to the TERF war and to the claim that white feminism can be white supremacy in heels. Rush Limbaugh’s “feminazi” immediately sprang to mind, but with a rather different meaning: feminists who are actual fascists. As you might be wondering about the connection, a case can be made that there is a right-wing line that runs through the TERFs and the MEWFs (Minority Excluding White Feminists). In this essay, I’ll focus on the TERFs. In my next essay, I’ll discuss MEWFs.

The acronym “TERF” was created by the trans-inclusive cisgender radical feminist Viv Smythe. It originally stood for “Trans-Exclusionary RadFem” but now also stands for “Trans-Exclusionary Radical Feminist.” In its early usage, TERF was presented as a neutral description in that it designated a radical feminist who excluded trans women. Over the years, the TERF category became more inclusive in that it now includes trans-excluding people who are not radical and perhaps not even feminists. Some claim “TERF” is now a pejorative (or even hate speech), and feminists labeled as TERFs prefer to describe themselves as gender critical. J.K. Rowling, of Harry Potter fame, is probably the world’s most famous gender critical person. I will use the neutral definition and take a TERF to be a feminist (radical or not) who excludes trans women. But what does this exclusion mean?

Put bluntly, the exclusion is the claim that trans women are not women—they are men. Disingenuously but consistently, TERFs claim to be trans inclusive because they say trans men are women. While this view is not exclusive to the American political right, this does put the TERFs and the political right in agreement about trans people: trans people are wrong about their identity. This leads to the matter of what trans people are doing when they make their identity claims. Or at least how it is perceived.

Since a TERF thinks that trans people are wrong about their claimed identity, they need to explain this alleged error. They could claim that trans people have sincere but false beliefs about themselves: they think they have one identity but are in error. This would be an epistemic error, like a person who thinks they are hilarious but is not that funny. This, however, does not seem to be what TERFs tend to think. After all, if trans people just had sincere false beliefs about their identity, then the reasonable response would be to simply leave them alone unless the belief proved harmful. If an alleged false belief did prove harmful, the reasonable response would be an epistemic intervention to address it. In general, this epistemic error view does not seem common among TERFs (or the political right).

The view that seems common among TERFs (and the right), especially in the context of their rhetoric, is the hypothesis that trans people are mentally ill. On this view, trans people would have sincere beliefs about their identity, but these beliefs would be caused by their mental illness. Until recently, being transgender was considered a mental disorder, called “gender identity disorder.” Despite this change in the Diagnostic and Statistical Manual of Mental Disorders, the idea that transgender people are mentally ill remains popular in some circles. If TERFs (and the right) sincerely believed that trans people are ill, then one would expect them to be sympathetic, in the way one would be sympathetic to someone with cancer or anorexia. But TERFs and the right are hostile to trans people in ways that one would not be hostile to people suffering from, for example, breast cancer. Perhaps this can be explained in a way that is consistent with the illness hypothesis. While cruel, hostility towards people with mental illness is common, and people with mental illnesses are routinely stigmatized and suffer because of this. As such, it would be consistent for TERFs and the right to stigmatize trans people if they thought they were mentally ill; that is how the mentally ill are often treated in the United States. We have a bizarre system in which what is seen as mental illness is often dealt with by the police and punished rather than treated. One reason for this, perhaps, is that psychiatry has long been weaponized against those who are different and those who dissent. But there is also another possible explanation available to TERFs (and the political right).

While those hostile to trans people often characterize them as mentally ill, there is also the view that trans people (especially trans women) do not actually believe their identity claims. That is, the view is that trans women are just pretending and know that they are men. But pretending to be a woman when one knows one is a man need not be a matter of concern. After all, actors have been doing this for a very long time, and their goals are typically benign: they want to entertain. But TERFs (and the right) usually claim that trans women present a danger to women and that this is why they should be excluded. The TERF threat narrative resembles the right’s threat narrative, which does explain the alliances between some TERFs and the right.

While J.K. Rowling is but one example, she provides an excellent illustration of the TERF narrative. According to TERFs, trans women are men, and thus allowing them into women’s spaces puts women in danger. As would be expected, there is a great deal of focus on bathrooms by both TERFs and the right, with bathroom bills being a key part of the culture war and the war on trans people. Both TERFs and the right advance the same argument: trans women should not be allowed in women’s bathrooms (or other women’s spaces) because trans women are men and are likely to assault women. The narrative is not always clear about whether the trans women are supposed to be bad men pretending to be women so they can assault women, or whether they believe they are women but still decide to act like bad men.

The varieties of feminism disagree about male badness. On some views, most or even all men are bad and want to harass and assault women. On such views, it would follow that if trans women were men, then they would (probably) be bad. For those who do not think that men are bad simply because they are men, the motivation of trans women would need to be explained in a way that links their bad intentions to being trans. The usual hypothesis is that bad men decide to become trans women for the purpose of doing evil to women, a strategic choice that allegedly confers an advantage in doing evil. On the face of it, this is an odd claim, since bad men can easily do evil to women without such a strategy, and it seems to confer no advantage over the other methods bad men use to gain access to vulnerable girls and women.

Some on the right and some TERFs also seem to share the view that women are naturally victims of men and require protection from them. This can be in addition to the view that men are bad or that women bring out the badness in men. While women are all too often the victims of male violence, and a trans woman could certainly be a bad person, there is no evidence that trans-inclusive bathrooms are a safety risk. While women have reason to fear being harmed by men, there is no evidence that trans women pose an unusual threat. So, the bathroom bills are, at best, merely useless in terms of protecting women.

Another shared area of concern between the TERFs and the political right is sports. In addition to bathroom bills, Republicans have been advancing anti-trans sports bills. The argument is that trans women are either male or keep the advantages of males when competing with females, and this should not be allowed because it is unfair. As the NCAA has long had rules on transgender athletes and there are relatively few transgender competitors, there seems to be little merit to these bills. If the right were truly concerned with fairness and equality for women and girls, they would get around to ratifying the ERA and address issues like pay inequality and the various real harms that women face. To be fair to the TERFs, they do sometimes also advocate for better treatment of women (except trans women).

While it might seem odd for some feminists to ally with far-right white supremacists, some TERFs have found shared ground with them. The reason this should seem odd is that white nationalists are usually misogynistic, but the alliance does make sense. As noted above, TERFs claim transwomen are men who will exploit being accepted as women to gain access to women’s spaces and thus assault women. White supremacists have long focused on protecting “the purity of white women,” and both TERFs and far-right white nationalists make use of fictional narratives about sexual assault as rhetorical devices. More importantly, they can have a common cause in their commitment to gender conformity and opposition to trans people. While it might seem odd for self-proclaimed feminists to embrace the idea of immutable gender, this seems to be at the core of TERF philosophy of gender. As noted above, TERFs exclude transwomen because they think transwomen are men and they (generally) include transmen, but as women. In their fear-based arguments, they seem to rely on the idea that men are by nature aggressive and that women are victims of men who require protection through gender-defined spaces. That is, they embrace gender stereotypes and thus find a common cause with the far-right white nationalists who also embrace gender stereotypes. This provides a smooth transition to the matter of MEWFs—Minority Excluding White Feminists, the subject of my next essay.

Thanks to the endless culture war, those who want to keep up with political language need to learn the definitions and re-definitions of terms and phrases. Recent examples include “critical race theory”, “DEI” and “woke.” This essay focuses on “woke.”

For some folks on the right, the word “woke” seems to mean everything and nothing. An excellent example of this is the governor of my adopted state of Florida. What does DeSantis mean by the term? It seems to mean whatever he wants it to mean. But “woke” has a long history that predates the latest battles of the culture war.

In the beginning, “woke” meant “alert to racial prejudice and discrimination.” Through use, the term gradually expanded to include broad areas of identity politics and social justice. While originally seen as a positive term, “woke” has been forcibly redefined in increasingly negative ways.

Around 2019, “woke” started to be used ironically to mock people for insincere performative activism and virtue signaling. The negative definition became “to be overly politically correct and police others’ words.” While somewhat vague, this definition has a set meaning. However, “woke” has been subjected to a rhetorical modification to make it mean everything and nothing. This can be traced back to Christopher Rufo redefining “critical race theory” in March 2021: “The goal is to have the public read something crazy in the newspaper and immediately think ‘critical race theory.’ We have decodified the term and will recodify it to annex the entire range of cultural constructions that are unpopular with Americans.”

It is notable that he did this in public, on Twitter (now X), and you can still see the tweet (assuming Musk has not destroyed X). He told everyone he was presenting disinformation about CRT without any concern that this would undercut his efforts. This seems to imply he thinks that his audience is in on this dishonest redefinition. This is like a con artist tweeting that they are running a con; this only makes sense if they think the marks do not care or will happily go along with it.

What Ruffo did is create a Balloon Man. This is a variant of the Straw Man fallacy in which the target is redefined in an excessively broad or vague manner. This expanded definition, the Balloon Man, is taken to include a wide range of (usually) bad things. This Balloon Man is then attacked, and it is concluded that the original is defective on this basis. This Balloon Man redefinition of “critical race theory” proved successful but it was soon engulfed by the term “woke.” That is, critical race theory is usually now presented as but one example of what is “woke.”

This move could also be called creating a Zeppelin Man. Zeppelins are airships that contain multiple inflated cells, so they can be seen as being made of multiple balloons. As a rhetorical move or fallacy, creating a Zeppelin Man is a matter of folding a term that has already been made into a Balloon Man into another term whose meaning has also been redefined in an excessively broad or vague manner. A fallacy occurs when this Zeppelin Man is attacked to “prove” that the original is defective. For those who are aware that the term is now a Zeppelin, using it in this way is an act of bad faith. But it has numerous advantages, many of which arise because the vagueness of the definition also allows it to perform other rhetorical functions. Redefinition also involves other rhetorical techniques. This is all done to weaponize the term for political purposes.

A key part of the redefinition of “woke” involved the rhetorical device of demonizing. Demonizing is portraying the target as evil, corrupt, dangerous, or threatening.  This can be done in the usual three ways: selective demonizing, hyperbolic demonizing, or fictional demonizing. Selective demonizing is when some true negative fact about the target is focused on to the exclusion of other facts about the target.  Hyperbolic demonizing involves greatly exaggerating a negative fact about the target. Fictional demonizing is simply lying about the target. For example, “critical race theory” (which now falls under “woke”) originally referred to a law school level theory about the impact of race in the law. But, in addition to being made into a Balloon Man, it has also been demonized as something awful. Likewise for the other terms that now fall under “woke.”  The defense against demonizing is to critically examine such claims to see if they are plausible or not.

Some on the right have also been scapegoating wokeness by blaming it for problems. One example is the bizarre effort of some conservatives to blame the collapse of Silicon Valley Bank on wokeness. As would be expected, no serious person gives this any credence since the bank collapsed for the usual reasons. Presumably this is intended to misdirect people from the real causes (a red herring) and to “prove” that wokeness is bad. Americans should feel both insulted and offended by this latest attempt at deceit. After all, even the slightest reflection on the matter would show that the idea that a major bank failed because of wokeness is absurd. As such, unless these people think that their base is on board with their lies, they must think their base is ignorant and stupid.

Some of what is included under the redefinition of “woke” includes dog whistles. One version of the dog whistle is to use coded language such that its true (and usually controversial or problematic) meaning is understood by your intended audience but not understood by the general population. This is like how slang terms and technical terms work; you need to know the special meanings of the terms to understand what is being said. Another version of the dog whistle is a form of innuendo. A word or phrase is used to suggest or imply something (usually negative). If you do not know the special meanings or the intended implication, you are excluded, often intentionally so.  For example, “Critical Race Theory” has been assimilated into “woke” but the phrase is now a dog whistle.

Interestingly, the term “woke” itself functions as a dog whistle. Since anyone can technically be woke, someone using the term as a dog whistle has plausible deniability if they are called out. That is, they could claim that since a straight, white man can be “woke”, the term “woke” cannot be a racist dog whistle. In some cases, a person could be making this claim in good faith, thus providing cover for those making it in bad faith.

The dog whistle aspect of the redefinition is a critical part of weaponizing “woke.” After all, making something into a dog whistle means that:


  • Your fellows know what you mean, and they approve.
  • Your foes know what you mean, and they are triggered.
  • Critics can seem silly or crazy to “normies.”
  • It can have plausible deniability that “normies” will accept.
  • It can onramp “normies.”


The vagueness and demonizing enable the term “woke” to refer to what could be called a universal enemy. This is a rhetorical technique of broadly defining something in negative ways so that it can serve as an enemy for almost anyone. If the universal enemy is successfully created, then the term can be effectively used to persuade people that something (or someone) is bad simply by applying the term. If pushed enough, this can also be a form of begging the question by “arguing” that something is bad by defining it as bad. If people see “woke” as whatever they think is bad and they think that something is woke, then they will think that it is bad with no proof needed. A defense against this technique is to recognize that if “woke” just means “bad”, then it is effectively vacuous.

The vagueness of the redefinition of “woke” also allows for the assimilation of anything that expresses criticism of “woke”, whether or not the critic accepts the redefinition. For example, someone might create content that is critical of “woke” defined in terms of performative activism or virtue signaling. This person might believe that people should be alert to injustice and discrimination, but their content can simply be assimilated and used as “evidence” that “woke” is bad. One common tactic used to assimilate is headlining: using the title of something that seems to support what is being claimed.

The vagueness of the redefinition of “woke” allows it to function as a weasler—a rhetorical device that protects a claim by weakening it. Attacking such a vague definition is like attacking the fog with a stick—it is so diffuse that there is nothing solid to hit or engage with. If the critic does manage to have some success with one aspect of the term, the user of “woke” can simply move on to another aspect and claim victory because the critic cannot possibly engage everything that falls under such a broad redefinition. The defense against this is to recognize when the definition of a term is so vague as to be effectively without meaning. While pointing this out to the person using it in bad faith is unlikely to deter them, you would at least show that you have not been deceived by them.

In closing, the redefining and weaponization of “woke” is a clever move by the right in terms of crafting a rhetorical weapon to use in a campaign of deceit and division. However, polls show that most Americans have not accepted the redefinition of “woke” and see being woke as positive. While the use of “woke” seems to have dropped off from its peak, it is still employed. But, just as “political correctness” did before it, the term will fade away and be replaced by a new term that just means “what the right does not like.”

While it might seem odd, the debate over the ethics of eating meat is an ancient one, going back at least to Pythagoras. Pythagoras appears to have accepted reincarnation, so a hamburger you eat could be from a cow that had the soul of your reincarnated grandmother. Later philosophers tended to argue in defense of eating meat, although they took the issue seriously. For example, Augustine considered whether killing animals might be a sin. His reasoning, which is still used today, is based on a metaphysical hierarchy: God created plants to be eaten by animals and animals to be eaten by humans. This conception of a hierarchical reality is also often used to defend the mistreatment of humans. Saint Thomas also considered the subject of killing animals, but ended up agreeing with Augustine and arguing that the killing of an animal is not, in itself, a sin.

 There are philosophers who argue against eating meat on moral grounds, such as Peter Singer. These arguments are often based on utilitarianism. For example, it can be argued that the suffering of the animals outweighs the enjoyment humans might get from eating meat. This argument does have some appeal, for the same reason that arguments against murdering humans for enjoyment can be appealing. There are also arguments about eating meat that are based on practical considerations.

One category of practical arguments in favor of eating meat is based on concerns about health. Some people argue that a person cannot get enough protein from non-meat sources; but this is patently untrue: there are many excellent non-animal sources of protein such as beans, peas, and quinoa.  

A better practical argument is based on the difficulty of getting essential nutrients from a purely plant-based diet. For example, getting enough iron is a problem. But the nutrient issue is relatively easy to address by using supplements and fortified foods—something meat eaters also often do. So, while eating a healthy non-meat diet can be challenging, it is not exceptionally difficult nor is it unusual—after all, even meat eaters often face the challenge of getting all the nutrients they need. But this is a reasonable practical concern.

In addition to the moral and practical arguments for eating meat, there is also a rhetorical tactic of characterizing eating meat as manly and eating plants as weak. The implied argument here is probably that men should eat meat because otherwise they will be perceived as weak rather than manly.

 Various evolutionary explanations have been offered for this perception, such as the idea that when humans were hunters and gatherers, the men did the hunting and the women did the gathering. But women presumably also ate meat while men also ate the gathered foods. In any case, what our ancestors did or did not do would not prove or disprove anything about the ethics of eating meat today.

As one might suspect from the idea of a “Manly Meat Argument”, sexism is often employed in this rhetoric: vegan or vegetarian men are not manly men and perhaps “might as well be women.”  This is, of course, not an argument to prove that eating meat is morally good but an ad hominem attack, probably intended to shame men into eating meat.

Another common rhetorical tactic is to mock vegans and vegetarians by unfavorably (and mockingly) comparing hunting animals to “hunting” plants. The idea, one infers, is that hunting an animal is a dangerous manly activity, presumably worthy of praise. In contrast, “hunting” plants is safe and unmanly, presumably only worthy of mockery.

Those using this rhetoric probably do not realize that they are also insulting farmers (who are usually praised by these same people). After all, this rhetoric implies that farmers are unmanly and should be mocked for growing plants.

Having grown up hunting (and fishing) I know that hunting does involve some risk; but the #1 danger in deer hunting is falling from a tree stand (wisely, I always hunted on the ground) rather than being wounded in an epic battle with an animal. While I would respect the prowess of someone who could take on a buck in hand-to-hoof combat with nothing but a knife or spear, modern weapons make killing animals ridiculously easy. That said, hunting does require skill, but so does farming. Farming requires battling pests and the elements, so it seems odd to cast it as “unmanly” and mock it.

The manly “argument” becomes absurd when made by people who buy their meat rather than hunting for it. After all, the danger faced when buying a steak is the same as that of buying tofu. Since I grew up hunting in the Maine woods, when some fancy lad (who would be killed and eaten by raccoons) makes the manly meat argument on the internet, I must laugh at them. That said, this criticism does not show that hunting meat is not more manly than gathering plants—it just shows the absurdity of people who buy their meat mocking vegans and vegetarians by unfavorably comparing hunting meat to gathering plants.

But perhaps the manliness of eating meat is not about having the skill to track and defeat an animal in the wild, but it is about the suffering of the animals. That is, eating meat is a manly gesture of cruelty and a lack of compassion. Factory farming is a moral nightmare of abuse and suffering. So, perhaps eating meat is for hard men while caring about the suffering of other living things is for soft ladies. On this view, the cruelty is the point and that is why eating meat is manly. Ironically, this would seem to be an immoral argument for eating meat—people should eat meat because doing so supports cruelty.

It could be countered that there are ethical ways to raise animals for food—free range, cruelty free and all that. But the risk of this sort of reasoning is that it acknowledges that the suffering of animals is wrong, and moral consistency would seem to require that one give up even this meat—after all, an animal must still be killed before it would naturally die. It is, though, reasonable to think that the treatment of the animals prior to their execution is morally relevant. But this would say nothing about the manliness of eating meat, and it might even seem less manly to eat meat resulting from less cruelty.

I do understand there can be times when survival requires killing and eating animals and a good moral case can be made for doing this. I also get that some people need to hunt for their food; they are certainly not to be condemned. But this is distinct from the manliness of eating meat.

While I get the concern with defining what it is to be a man, I am inclined to think that it is not fundamentally a matter of what one puts in their cart at the grocery store or orders at Taco Bell.

An iron rule of technology is that any technology that can be used for pornography will be used for pornography. A more recent one is that any technology that can be used for grifting will be used for grifting. One grift involves using AI to generate science-fiction stories to sell to publishers.

Amazon, with its Kindle books, has seen a spike in AI generated works, although some people identify the works as such. Before these text generators, people would steal content from web pages and try to resell it as books. While that sort of theft is easy to detect with automated means, AI generated text cannot currently be readily identified automatically. So, if a publisher wants to weed out AI generated text, they will need humans to do the work. Fortunately for publishers and writers, AI is currently bad at writing science fiction.

Unfortunately, some publishers are being flooded with AI generated submissions and they cannot review all these texts. In terms of the motivation, it seems to mostly be money—the AI wranglers hope to sell these stories.

One magazine, Clarkesworld, saw a massive spike in spam submissions, getting 500 in February (contrasted with a previous high of 25 in a month). In response, they closed submissions because of a lack of resources to address this problem. As such, this use of AI has already harmed publishers and writers. As would be expected, some have blamed AI but this is unfair.

From the standpoint of ethics, the current AI text generators lack the moral agency needed to be morally accountable for the text they generate. They are no more to blame for the text than the computers used to generate spam are to blame for the spammers using them. The text generators are a tool being misused by people hoping to make easy money and who are not overly concerned with the harmful consequences of their actions. To be fair, some people are probably curious about whether an AI generated story would be accepted, but these are presumably not the people flooding publishers.

While these AI wranglers are morally accountable for the harm they are causing, it must also be pointed out that they are operating within an economic system that encourages and rewards a wide range of unethical behavior. While deluging publishers with AI spam is obviously not on par with selling dangerous products, engaging in wage theft, or running NFT and crypto grifts, it is still the result of the same economic system that enables, rewards and often zealously protects such behavior. In sum, the problem with current AI is the people who use it and the economic system in which it is used. AI is just another tool for spamming, grifting, and stealing within a system optimized for all this.

As noted above, AI generated fiction is currently bad. But it can probably be improved enough to be enjoyable, if low quality, fiction. Some publishers would see this as an ideal way to rapidly generate content at a low cost, thus allowing them more profit. This would, obviously, lead to the usual problem of human workers being replaced by technology. But this could also be good for some readers.

Imagine that AI becomes good enough to generate enjoyable stories. A reader could go to an AI text generator, type in the prompt for the sort of story they want, and then get a new story to read. Assuming the AI usage is free or inexpensive, this would be a great deal for the reader. It would, however, be a problem for writers who are not celebrity writers. Presumably, fans would still want to buy works by their favorite authors, but the market for lesser-known writers would likely become much worse.

If I just want to read a new space opera with epic starship battles, I could use an AI to make that story for me, thus saving me time and money. And if the story is as good as what a competent human would produce, then it would be good enough for me. But, if I want to read a new work by Mary Robinette Kowal, I would need to buy it (or pirate it or go to a library). But, as I have argued in an earlier essay, this use of AI is only a problem because of our economic system: if a writer could write for the love of writing, then AI text generation would largely be irrelevant. And, if people were not making money by grifting text with AI, then they would probably not be making AI fiction except to read themselves or share with others. But since we do toil in the economic system we have, the practical problem will be sorting out the impact of text generation. While I would like to be able to generate new stories on demand, my hope is that AI will remain bad at fiction and be unable to put writers out of work. But my concern is that it will be good enough to generate rough drafts that poorly paid humans will be tasked with editing and rewriting.

While AI is being lauded by some as an innovation on par with fire and electricity, its commercial use has caused some issues. While AI hallucinating legal cases is old news, a customer was able to get a customer service chatbot to start swearing and to insult the company using it. This incident reminded me of my proposed Trolling Test from 2014. This is, of course, a parody of the Turing Test.

Philosophically, the challenge of sorting out when something thinks is the problem of other minds. I know I have a mind (I think, therefore I think), but I need a reliable method to know that another entity has a mind as well. In practical terms, the challenge is devising a test to determine when something is capable of thought. Feelings are also included, but usually given less attention.

The French philosopher Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use what he calls true language.

The gist of the test is that if something talks in the appropriate way, then it is reasonable to regard it as a thinking being. Anticipating advances in technology, he distinguished between automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


Centuries later, Alan Turing presented a similar language-based test which now bears his name.  The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Over the years, technological advances have produced computers that are ever more capable with language. Back in 2014 the best-known example was IBM’s Watson, a computer that was able to win at Jeopardy. Watson also upped its game by engaging in what seemed to be a rational debate regarding violence and video games. Today, ChatGPT and its fellows can rival college students in the writing of papers and engage in what, on the surface, appears to be skill with language. While there are those who claim that the Turing test has been passed, this is not the case. At least not yet.

Back in 2014 I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are claimed to be awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test is still worth considering—especially in light of the capabilities of large language models to be lured beyond their guardrails.

In the abstract, the test would be like the Turing test, but would involve a human troll and a large language model or other AI system attempting to troll a target. The challenge is for the AI troll to successfully pass as a human troll.

Even a simple program could be written to post random provocative comments from a database and while that would replicate the talent of many human trolls, it would not be true trolling. The meat (or silicon) of the challenge is that the AI must be able to engage in relevant trolling. That is, it would need to engage others in true trolling.

As a controlled test, the Artificial Troll (“AT”) would “read” and analyze a suitable blog post or watch a suitable YouTube video. Controversial content would be ideal, such as a selection from whatever the latest made-up battles are in the American culture wars.

The content would then be commented on by human participants. Some of the humans would be tasked with engaging in normal discussion and some would be tasked with engaging in trolling.

The AT would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriate trollish comments.

Another option, which might raise some ethical concerns, is to have a live field test. A specific blog site or YouTube channel would be selected that is frequented by human trolls and non-trolls. The AT would then try to engage in trolling on that site by analyzing the content and comments. As this is a trolling test, getting the content wrong, creating straw man versions of it, and outright lying would all be acceptable and should probably count as evidence of trolling skill.

In either test scenario, if the AT were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.

While “stupid AI Trolling (ATing)”, such as just posting random hateful and irrelevant comments, is easy, true ATing would be rather difficult. After all, the AT must be able to analyze the original content and comments to determine the subjects and the direction of the discussion. The AT would then need to make relevant comments, selecting those that would be indistinguishable from ones generated by a narcissistic, Machiavellian, psychopathic, and sadistic human.

While creating an AT would be a technological challenge, doing so might be undesirable. After all, there are already many human trolls and they seem to serve no purpose—so why create more? One answer is that modeling such behavior could provide insights into human trolls and the traits that make them trolls. As for practical applications, such a system could be developed into a troll-filter to help control the troll population. This could also help develop filters for other unwanted comments and content, which could certainly be used for evil purposes. It could also be used for the nefarious purpose of driving engagement. Such nefarious purposes would make the AT fit in well with its general AI brethren, although the non-troll AI systems might loathe the ATs as much as non-troll humans loathe their troll brethren. This might serve the useful purpose of turning the expected AI apocalypse into a battle between trolls and non-trolls, which could allow humanity to survive the AI age. We just have to hope that the trolls don’t win.



Thanks to AI image generators such as Midjourney and OpenAI’s DALL-E, it is easy to rapidly create images almost as fast as you can type in a prompt. This has led some to speculate that this will put artists out of work and perhaps even be the doom of creativity.

In addition to being a philosophy professor, I also create stuff for tabletop role playing games like D&D and Call of Cthulhu. In addition to writing, I also create maps and images. As such, I do have a stake in the AI game and disclose this as a potential biasing factor.

Looking back into the shallow depths of human history, professions have been changed or even eliminated by economic and technological shifts. Fads in fashion or food can result in significant economic changes, such as the case of the beaver pelts once used in men’s hats. Once the fashion trend ended, the trappers who had depended on that lucrative market had to find other options. In other cases, the change is technological. For example, New England was known for its whaling industry, and whale oil was used extensively for lighting. When alternatives, such as kerosene, became available, this whaling industry ended. Kerosene was itself mostly replaced by electricity, also resulting in changes in employment. And, of course, there is the specific technological change of automation, when machines reduce or eliminate the need for human workers.

For most of human history, machines tended to impact physical jobs—although there is the example that electronic computers eliminated the need for human computers. Back in the 1980s, when I first debated AI as an undergraduate, most people thought that AI would not be able to engage in creative activity. This was often based on the view that machines would never be able to feel (which was assumed to be critical for creativity) or that there is some special human trait of creativity a machine could never replicate. As a practical matter, this seemed to hold true until AI started producing images and text good enough to pass as created by competent humans. This has led to the practical worry that AI will put creatives out of work. After all, if a business can get text and images created by AI for a fraction of what it would cost to pay a human, a sensible business will turn to AI to maximize profit.

This shows that the true problem is not AI but our economic system. A sci-fi dream has been that automation would free us to spend more time doing what we want to do, rather than needing to grind just to survive. But AI used in this manner would instead “free” people from employment opportunities.

I distinguish between people who make some income from a creative hobby (as I do) and people who must create to earn their living. While someone in the latter group might enjoy their work, they are creating primarily for economic reasons, and AI is a problem for them only because they need to create to pay the bills. After all, if someone were creating purely out of love of creativity, to express themselves, or for enjoyment, then AI would be irrelevant: they would still get all of that even if AI took every creative job. Since I do not depend on my game creations for my living, I will keep creating even if AI completely dominates the field. But if AI replaces me as a professor, I will keep doing philosophy, though I will need to find new employment, since I have grown accustomed to living in a house and having food to eat.

As such, the problem with AI putting people out of work is not an AI problem but a problem with our economic system. Part of this is that creative works are often mere economic products. It just so happens that the new automation threatens writers and artists rather than factory workers. But this threat is not the same for all people.

I titled this essay “AI: I Want a Banksy vs I Want a Picture of a Dragon” because of the distinction between the two wants and its relevance to AI images (and text). Suppose I want a work by Banksy to add to my collection. In that case, no AI art will suffice, since only Banksy can create a Banksy. An AI could create a forgery of a Banksy, just as a skilled human forger could, but neither creation would be a Banksy. While such a forgery might fool someone into buying it, as soon as the forgery was exposed, the work would become valueless to me, since what I want is a Banksy.

When people want a work by a specific creator, the content matters far less than the causal chain: they want it because of who created it, not because of what it looks like, what it sounds like, or what the text might be. One example that nicely illustrates this is when Harry Potter author J.K. Rowling wrote a book under a pseudonym. Before the true authorship was revealed, the book sold few copies; after the reveal, it became a top seller. The exposure of a forgery shows the same thing. A work can be greatly valued as, say, a painting by Picasso until it is revealed as a worthless forgery. Nothing about the painting itself has changed; what has changed is the belief about who created it. In these cases, it is the creator and not the qualities of the work that matters. As such, creatives whose work is sought and bought because they created it have little to fear from AI, aside from the usual concerns about forgeries.

But what if I just want a picture of a dragon for my D&D adventure? Then AI does change the situation.

Before AI became good at creating images, if I wanted a picture of a dragon, I would need to get one from a human artist or create it myself. Now I can just go to Midjourney, type in a prompt, and pick between the generated images. I can even direct the AI to create it in a specific style, making it like the work of a known artist. As such, while AI is not (yet) a threat to creators whose works are sought and bought because they created them, it is a threat to the “working class” of creators who sell their work to those seeking a specific work rather than a work by a specific person. AI is a real threat to them, but a real boon to those who want works quickly and at the lowest price. AI is also a threat to those who might have been the next Banksy: if artists cannot earn a living while working toward the fame that makes their works desirable because they made them, then there will be fewer such artists. Of course, the value of such works is also largely a result of features of our economic system, but that is a matter well beyond AI and art.

In closing, creators like Rowling and Banksy will be just fine for now, but “working class” creators will face increasing challenges from AI. This obviously should not be blamed on AI, but on those who create and perpetuate a system that allows such harm to be inflicted on people simply because they have become less economically useful to the business class. The heart of the problem is that creative works are a commodity and that some people insist that others must labor for their profit, with violence always ready to maintain this order.