While it might seem odd, the debate over the ethics of eating meat is an ancient one, going back at least to Pythagoras. Pythagoras appears to have accepted reincarnation, so a hamburger you eat might come from a cow that had the soul of your reincarnated grandmother. Later philosophers tended to argue in defense of eating meat, although they took the issue seriously. For example, Augustine considered whether killing animals might be a sin. His reasoning, which is still used today, is based on a metaphysical hierarchy: God created plants to be eaten by animals and animals to be eaten by humans. This conception of a hierarchical reality is also often used to defend the mistreatment of humans. Saint Thomas also considered the subject of killing animals but ended up agreeing with Augustine, arguing that the killing of an animal is not, in itself, a sin.

There are philosophers who argue against eating meat on moral grounds, such as Peter Singer. These arguments are often based on utilitarianism. For example, it can be argued that the suffering of the animals outweighs the enjoyment humans might get from eating meat. This argument does have some appeal, for the same reason that arguments against murdering humans for enjoyment can be appealing. There are also arguments about eating meat that are based on practical considerations.

One category of practical arguments in favor of eating meat is based on concerns about health. Some people argue that a person cannot get enough protein from non-meat sources; but this is patently untrue: there are many excellent non-animal sources of protein such as beans, peas, and quinoa.  

A better practical argument is based on the difficulty of getting essential nutrients from a purely plant-based diet. For example, getting enough iron can be a problem. But the nutrient issue is relatively easy to address by using supplements and fortified foods—something meat eaters also often do. So, while eating a healthy non-meat diet can be challenging, it is neither exceptionally difficult nor unusual—after all, even meat eaters often face the challenge of getting all the nutrients they need. Still, it is a reasonable practical concern.

In addition to the moral and practical arguments for eating meat, there is also a rhetorical tactic of characterizing eating meat as manly and eating plants as weak. The implied argument here is probably that men should eat meat because otherwise they will be perceived as weak rather than manly.

Various evolutionary explanations have been offered for this perception, such as the idea that when humans were hunters and gatherers, the men did the hunting and the women did the gathering. But women presumably also ate meat while men also ate the gathered foods. In any case, what our ancestors did or did not do would not prove or disprove anything about the ethics of eating meat today.

As one might suspect from the idea of a “Manly Meat Argument”, sexism is often employed in this rhetoric: vegan or vegetarian men are not manly men and perhaps “might as well be women.” This is, of course, not an argument to prove that eating meat is morally good but an ad hominem attack, probably intended to shame men into eating meat.

Another common rhetorical tactic is to mock vegans and vegetarians by unfavorably (and mockingly) comparing hunting animals to “hunting” plants. The idea, one infers, is that hunting an animal is a dangerous manly activity, presumably worthy of praise. In contrast, “hunting” plants is safe and unmanly, presumably only worthy of mockery.

Those using this rhetoric probably do not realize that they are also insulting farmers (who are usually praised by these same people). After all, this rhetoric implies that farmers are unmanly and should be mocked for growing plants.

Having grown up hunting (and fishing), I know that hunting does involve some risk; but the #1 danger in deer hunting is falling from a tree stand (wisely, I always hunted on the ground) rather than being wounded in an epic battle with an animal. While I would respect the prowess of someone who could take on a buck in hand-to-hoof combat with nothing but a knife or spear, modern weapons make killing animals ridiculously easy. That said, hunting does require skill, but so does farming. Farming requires battling pests and the elements, so it seems odd to cast it as “unmanly” and mock it.

The manly “argument” becomes absurd when made by people who buy their meat rather than hunting for it. After all, the danger faced when buying a steak is the same as that of buying tofu. Since I grew up hunting in the Maine woods, when some fancy lad (who would be killed and eaten by raccoons) makes the manly meat argument on the internet, I must laugh at them. That said, this criticism does not show that hunting meat is not more manly than gathering plants—it just shows the absurdity of people who buy their meat mocking vegans and vegetarians by unfavorably comparing hunting meat to gathering plants.

But perhaps the manliness of eating meat is not about having the skill to track and defeat an animal in the wild, but it is about the suffering of the animals. That is, eating meat is a manly gesture of cruelty and a lack of compassion. Factory farming is a moral nightmare of abuse and suffering. So, perhaps eating meat is for hard men while caring about the suffering of other living things is for soft ladies. On this view, the cruelty is the point and that is why eating meat is manly. Ironically, this would seem to be an immoral argument for eating meat—people should eat meat because doing so supports cruelty.

It could be countered that there are ethical ways to raise animals for food—free range, cruelty free and all that. The risk of this sort of reasoning is that it acknowledges that the suffering of animals is wrong, and moral consistency would seem to require that one give up even this meat—after all, an animal must still be killed before it would die naturally. Still, it is reasonable to think that the treatment of the animals prior to their killing is relevant to the moral issue. But this says nothing about the manliness of eating meat, and it might even seem less manly to eat meat resulting from less cruelty.

I do understand there can be times when survival requires killing and eating animals and a good moral case can be made for doing this. I also get that some people need to hunt for their food; they are certainly not to be condemned. But this is distinct from the manliness of eating meat.

While I get the concern with defining what it is to be a man, I am inclined to think that it is not fundamentally a matter of what one puts in their cart at the grocery store or orders at Taco Bell.

An iron rule of technology is that any technology that can be used for pornography will be used for pornography. A more recent one is that any technology that can be used for grifting will be used for grifting. One grift involves using AI to generate science-fiction stories to sell to publishers.

Amazon, with its Kindle books, has seen a spike in AI generated works, although some people do identify their works as AI generated. Before these text generators, people would steal content from web pages and try to resell it as books. While that sort of theft is easy to detect with automated means, AI generated text cannot currently be readily identified automatically. So, if a publisher wants to weed out AI generated text, they will need humans to do the work. Fortunately for publishers and writers, AI is currently bad at writing science fiction.

Unfortunately, some publishers are being flooded with AI generated submissions and cannot review all these texts. The motivation seems to be mostly money—the AI wranglers hope to sell these stories.

One magazine, Clarkesworld, saw a massive spike in spam submissions, getting 500 in February (contrasted with a previous high of 25 in a month). In response, they closed submissions because of a lack of resources to address this problem. As such, this use of AI has already harmed publishers and writers. As would be expected, some have blamed AI but this is unfair.

From the standpoint of ethics, the current AI text generators lack the moral agency needed to be morally accountable for the text they generate. They are no more to blame for the text than the computers used to generate spam are to blame for the spammers using them. The text generators are a tool being misused by people hoping to make easy money and who are not overly concerned with the harmful consequences of their actions. To be fair, some people are probably curious about whether an AI generated story would be accepted, but these are presumably not the people flooding publishers.

While these AI wranglers are morally accountable for the harm they are causing, it must also be pointed out that they are operating within an economic system that encourages and rewards a wide range of unethical behavior. While deluging publishers with AI spam is obviously not on par with selling dangerous products, engaging in wage theft, or running NFT and crypto grifts, it is still the result of the same economic system that enables, rewards and often zealously protects such behavior. In sum, the problem with current AI is the people who use it and the economic system in which it is used. AI is just another tool for spamming, grifting, and stealing within a system optimized for all this.

As noted above, AI generated fiction is currently bad. But it can probably be improved enough to be enjoyable, if low quality, fiction. Some publishers would see this as an ideal way to rapidly generate content at a low cost, thus allowing them more profit. This would, obviously, lead to the usual problem of human workers being replaced by technology. But this could also be good for some readers.

Imagine that AI becomes good enough to generate enjoyable stories. A reader could go to an AI text generator, type in the prompt for the sort of story they want, and then get a new story to read. Assuming the AI usage is free or inexpensive, this would be a great deal for the reader. It would, however, be a problem for writers who are not celebrity writers. Presumably, fans would still want to buy works by their favorite authors, but the market for lesser-known writers would likely become much worse.

If I just want to read a new space opera with epic starship battles, I could use an AI to make that story for me, thus saving me time and money. And if the story is as good as what a competent human would produce, then it would be good enough for me. But, if I want to read a new work by Mary Robinette Kowal, I would need to buy it (or pirate it or go to a library). But, as I have argued in an earlier essay, this use of AI is only a problem because of our economic system: if a writer could write for the love of writing, then AI text generation would largely be irrelevant. And, if people were not making money by grifting text with AI, then they would probably not be making AI fiction except to read themselves or share with others. But since we do toil in the economic system we have, the practical problem will be sorting out the impact of text generation. While I would like to be able to generate new stories on demand, my hope is that AI will remain bad at fiction and be unable to put writers out of work. But my concern is that it will be good enough to generate rough drafts that poorly paid humans will be tasked with editing and rewriting.

While AI is being lauded by some as an innovation on par with fire and electricity, its commercial use has caused some issues. While AI hallucinating legal cases is old news, a customer was able to get a customer service chatbot to start swearing and to insult the company using it. This incident reminded me of my proposed Trolling Test from 2014. This is, of course, a parody of the Turing Test.

Philosophically, the challenge of sorting out when something thinks is the problem of other minds. I know I have a mind (I think, therefore I think), but I need a reliable method to know that another entity has a mind as well. In practical terms, the challenge is devising a test to determine when something is capable of thought. Feelings are also included, but usually given less attention.

The French philosopher Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use what he calls true language.

The gist of the test is that if something talks in the appropriate way, then it is reasonable to regard it as a thinking being. Anticipating advances in technology, he distinguished between automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


Centuries later, Alan Turing presented a similar language-based test which now bears his name.  The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Over the years, technological advances have produced computers that can engage in increasingly impressive conversation. Back in 2014 the best-known example was IBM’s Watson, a computer that was able to win at Jeopardy. Watson also upped his game by engaging in what seemed to be a rational debate regarding violence and video games. Today, ChatGPT and its fellows can rival college students in the writing of papers and display what, on the surface, appears to be skill with language. While there are those who claim that the Turing test has been passed, this is not the case. At least not yet.

Back in 2014 I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are claimed to be awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test is still worth considering—especially in light of the capabilities of large language models to be lured beyond their guardrails.

In the abstract, the test is like the Turing test, but it would involve a human troll and a large language model or other AI system attempting to troll a target. The challenge is for the AI troll to successfully pass as a human troll.

Even a simple program could be written to post random provocative comments from a database and while that would replicate the talent of many human trolls, it would not be true trolling. The meat (or silicon) of the challenge is that the AI must be able to engage in relevant trolling. That is, it would need to engage others in true trolling.

As a controlled test, the Artificial Troll (“AT”) would “read” and analyze a suitable blog post or watch a suitable YouTube video. Controversial content would be ideal, such as a selection from whatever the latest made-up battles are in the American culture wars.

The content would then be commented on by human participants. Some of the humans would be tasked with engaging in normal discussion and some would be tasked with engaging in trolling.

The AT would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriate trollish comments.

Another option, which might raise some ethical concerns, is to have a live field test. A specific blog site or YouTube channel would be selected that is frequented by human trolls and non-trolls. The AT would then try to engage in trolling on that site by analyzing the content and comments. As this is a trolling test, getting the content wrong, creating straw man versions of it, and outright lying would all be acceptable and should probably count as evidence of trolling skill.

In either test scenario, if the AT were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.

While “stupid AI Trolling (ATing)”, such as just posting random hateful and irrelevant comments, is easy, true ATing would be rather difficult. After all, the AT would need to be able to analyze the original content and comments to determine the subjects and the direction of the discussion. The AT would then need to craft relevant comments that would be indistinguishable from those generated by a narcissistic, Machiavellian, psychopathic, and sadistic human.

While creating an AT would be a technological challenge, doing so might be undesirable. After all, there are already many human trolls and they seem to serve no purpose—so why create more? One answer is that modeling such behavior could provide insights into human trolls and the traits that make them trolls. As far as practical application, such a system could be developed into a troll-filter to help control the troll population. This could also help develop filters for other unwanted comments and content, which could certainly be used for evil purposes. It could also be used for the nefarious purpose of driving engagement. Such nefarious purposes would make the AT fit in well with its general AI brethren, although the non-troll AI systems might loathe the ATs as much as non-troll humans loathe their troll brethren. This might serve the useful purpose of turning the expected AI apocalypse into a battle between trolls and non-trolls, which could allow humanity to survive the AI age. We just have to hope that the trolls don’t win.


Thanks to AI image generators such as Midjourney and OpenAI’s DALL-E, it is easy to rapidly create images almost as fast as you can type in a prompt. This has led some to speculate that this will put artists out of work and perhaps even be the doom of creativity.

In addition to being a philosophy professor, I also create stuff for tabletop role playing games like D&D and Call of Cthulhu. In addition to writing, I also create maps and images. As such, I do have a stake in the AI game and disclose this as a potential biasing factor.

Looking back into the shallow depths of human history, professions have been changed or even eliminated by economic and technological shifts. Fads in fashion or food can result in significant economic changes, as in the case of the beaver pelts once used in men’s hats: once a lucrative market and source of income, when the fashion trend ended, the trappers had to find other options. In other cases, the change is technological. For example, New England was known for its whaling industry, and whale oil was used extensively for lighting. When alternatives, such as kerosene, became available, this whaling industry ended. Kerosene was itself mostly replaced by electricity, also resulting in changes in employment. And, of course, there is the specific technological change of automation, when machines reduce or eliminate the need for human workers.

For most of human history, machines tended to impact physical jobs—although there is the example of electronic computers eliminating the need for human computers. Back in the 1980s, when I first debated AI as an undergraduate, most people thought that AI would not be able to engage in creative activity. This was often based on the view that machines would never be able to feel (which was assumed to be critical for creativity) or that there is some special human trait of creativity a machine could never replicate. As a practical matter, this seemed to hold true until AI started producing images and text good enough to pass as created by competent humans. This has led to the practical worry that AI will put creatives out of work. After all, if a business can get text and images created by AI for a fraction of what it would cost to pay a human, a sensible business will turn to AI to maximize profit.

This shows that the true problem is not AI but our economic system. A sci-fi dream has been that automation should be used to free us so we can spend more time doing what we want to do, rather than needing to grind just to survive. But AI used in this manner would free people from employment opportunities.

While a creative who works to earn the money they need for food and shelter might enjoy creating, they are creating primarily for economic reasons rather than purely for the love of it. I distinguish between people who make some income from their creative hobby (as I do) and people who must create to earn their living. While someone who depends on creating to live might enjoy their work, AI is only a problem for them because they need to create to pay the bills. After all, if they were creating out of the love of creativity, to express themselves, or out of pure enjoyment, then AI would be irrelevant. They would still get that even if AI took all the creative jobs. Since I do not depend on my game creations for my living, I will keep creating even if AI completely dominates the field. But if AI replaces me as a professor, I will keep doing philosophy but I will need to find new employment since I have grown accustomed to living in a house and having food to eat.

As such, the problem with AI putting people out of work is not an AI problem but a problem with our economic system. Part of this is that creative works are often mere economic products. It just so happens that the new automation threatens writers and artists rather than factory workers. But this threat is not the same for all people.

I titled this essay “AI: I Want a Banksy vs I Want a Picture of a Dragon” because of the distinction between the two wants and its relevance to AI images (and text). Suppose I want a work by Banksy to add to my collection. In that case, no AI art will suffice since only Banksy can create a Banksy. An AI could create a forgery of a Banksy, just as a skilled human forger could—but neither creation would be a Banksy. While such a forgery might fool someone into buying it, as soon as the forgery was exposed, the work would become valueless to me—after all, what I want is a Banksy.

When people want a work by a specific creator, the content is of far less importance than the causal chain—they want it because of who created it, not because of what it looks like, what it sounds like, or what the text might be. One example that nicely illustrates this is when Harry Potter series author J.K. Rowling wrote a book under a pseudonym. Before the true authorship was revealed, the book sold few copies. After the reveal, it became a top seller. The exposure of a forgery also shows this. A work can be greatly valued as, say, a painting by Picasso, until it is revealed as a worthless forgery. Nothing about the painting itself has changed, what has changed is the belief in who created it. In these cases, it is the creator and not the qualities of the work that matters. As such, creatives whose work is sought and bought because it was created by them have little to fear from AI, aside from the usual concerns about forgeries.  But what if I just want a picture of a dragon for my D&D adventure? Then AI does change the situation.

Before AI became good at creating images, if I wanted a picture of a dragon, I would need to get one from a human artist or create it myself. Now I can just go to Midjourney, type in a prompt and pick between the generated images. I can even direct the AI to create it in a specific style—making it like the work of a known artist. As such, while AI is not (yet) a threat to creators whose works are sought and bought because they created them, it is a threat to the “working class” of creators who sell their work to those who are seeking a specific work rather than a work by a specific person. AI is a real threat to them, but a real boon to those who want works for the lowest price and want them quickly. AI is also a threat to those who might have been the next Banksy. If artists cannot earn a living while they work towards the fame that makes their works desirable because they created them, then there will be fewer such artists. Of course, the value of such works is also largely a result of features of our economic system—but that is a matter way beyond AI and art.

In closing, creators like Rowling and Banksy will be just fine for now, but the “working class” creators will be facing increasing challenges from AI. This obviously should not be blamed on AI, but on those who create and perpetuate a system that allows people to inflict such harm on others just because they become less economically useful to the business class. The heart of the problem is that creative works are a commodity and that some people insist that others must labor for their profit—and ensure that violence is always ready to maintain this order.