Some will remember that driverless cars were going to be the next big thing. Tech companies rushed to pour cash into the technology, and the media covered the stories, including the injuries and deaths involving the technology. For a while, we were promised a future in which our cars would whisk us around, then drive away to await the next trip. Fully autonomous vehicles, it seemed, were always just a few years away. It did seem like a good idea at the time, and proponents of the tech also claimed to be motivated by a desire to save lives. From 2000 to 2015, motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths, and recently the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

While predictions of the imminent arrival of autonomous vehicles proved overly optimistic, the claim that they would reduce motor vehicle deaths had some plausibility. Autonomous vehicles do not suffer from road rage, exhaustion, intoxication, poor judgment, distraction and other conditions that contribute to the death toll. Motor vehicle deaths would not be eliminated even if all vehicles were autonomous, but the promised reduction in deaths presented a moral and practical reason to deploy such vehicles. In the face of various challenges and a lack of success, the tech companies seem to have largely moved on from the old toy to the new toy, which is AI. But this might not be a bad thing if driverless cars were aimed at solving the wrong problems and we instead solve the right ones. Discussing this requires going back to a bit of automotive history.

As the number of cars increased in the United States, so did the number of deaths, which was hardly surprising. A contributing factor was the abysmal safety of American cars. This problem led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and vehicle fatalities decreased. While making cars safer was a good thing, this approach was fundamentally flawed.

Imagine a strange world in which people insist on constantly swinging hammers as they go about their day. As would be expected, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings and from hammers that slip from people’s grasp. Hammers are also regularly redesigned so that they inflict less damage when hitting people and objects. The Google of that world and other companies start working on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging dangerous hammers around, this approach is fundamentally flawed. After all, if people stopped swinging hammers around, the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is important. A large part of the American economy is built around the motor vehicle. This includes obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how they contribute to the success of stores like Walmart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option. Thus, autonomous vehicles would be a good partial solution to the death problem. Or would they?

One problem is that driverless vehicles are trying to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated and presents difficult engineering and programming problems. It would seem to have made more sense to use the resources that were poured into autonomous vehicles to develop a better and safer transportation system that does not center around a bad idea: the individual motor vehicle operating within a complicated system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

My reasoning can be countered in a couple of ways. One is to repeat the economic argument: autonomous vehicles preserve the individual motor vehicle that is economically critical while being likely to reduce the death toll vehicles impose. A second approach is to argue that the cost of creating a new transportation system would be far greater than the cost of developing autonomous vehicles that can operate within the existing system. This assumes, of course, that the cash dumped into this technology will eventually pay off.

A third approach is to argue that autonomous vehicles could be a step towards a new transportation system. People often need a gradual adjustment to major changes, and autonomous vehicles would allow a gradual transition from distracted human drivers, to autonomous vehicles operating alongside distracted humans, to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition away from hammer swinging altogether.

While this has some appeal, it still makes more sense to stop swinging hammers. If the goal is to reduce traffic deaths and injuries, then investing in better public transportation, safer streets, and a move away from car-centric cities would have been the rational choice. For the most part it seems that tech companies and investors have moved away from solving the transportation problem and are now focused on AI. While the driverless car was a very narrow type of AI focused on driving vehicles and supposedly aimed at increasing safety and convenience, the new AI is broader (they are trying to jam it into almost everything that has a chip) and is supposed to be aimed at solving a vast range of problems. Given the apparent failure of driverless cars, we should consider whether there will be a similar outcome with this broader AI. It is also reasonable to expect that once the current AI bubble bursts, the next bubble will float over the horizon. This is not to deny that some of what people call AI is useful, but we need to keep in mind that the tech companies often seem to focus on solving unnecessary problems rather than removing those problems.


As a philosopher, my interest in AI tends to focus on metaphysics (philosophy of mind), epistemology (the problem of other minds) and ethics rather than on economics. My academic interest goes back to my participation as an undergraduate in a faculty-student debate on AI back in the 1980s, although my interest in science fiction versions arose much earlier. While “intelligence” is difficult to define, the debate focused on whether a machine could be built with a mental capacity analogous to that of a human. We also had some discussion about how AI could be used or misused, and science fiction had already explored the idea of thinking machines taking human jobs. While AI research and philosophical discussion never went away, it was not until recently that AI was given headlines, mainly because it was being aggressively pushed as the next big thing after driverless cars fizzled out of the news.

While AI technology has improved dramatically since the 1980s, we do not have the sort of AI we debated about, namely AI on par with (or greater than) a human. As Dr. Emily Bender pointed out, the current text generators are stochastic parrots. While AI has been hyped and made into a thing of terror, it is not really that good at doing its one job. One obvious problem is hallucination, which is a fancy way of saying that the probabilistically generated text fails to match the actual world. A while ago, I tested this by asking ChatGPT for my biography. While I am not famous, my information is readily available on the internet and a human could put together an accurate biography in a few minutes using Google. ChatGPT hallucinated a version of me that I would love to meet; that guy is amazing. Much more seriously, AI can do things like make up legal cases when lawyers foolishly rely on it to do their work.

Since I am a professor, you can certainly guess that my main encounters with AI are in the form of students turning in AI generated papers. When ChatGPT first became freely available, I saw my first AI generated papers in my Ethics class, and most were papers on the ethics of cheating. Ironically, even before AI, that topic was always the one with the most plagiarized papers. As I told my students, I did not fail a paper just because it was AI generated; the papers failed themselves by being bad. To be fair to the AI systems, some of this can be attributed to the difficulty of writing good prompts for the AI to use. However, even with some effort at crafting prompts, the limits of the current AI are readily apparent. I have, of course, heard of AI written works passing exams, getting B grades and so on. But what shows up in my classes is easily detected and fails itself. To be fair once more, perhaps there are exceptional AI papers that are getting past me. However, my experience has been that AI is bad at writing and it has so far proven easy to address efforts to cheat using it. Since this sort of AI was intended to write, this seems to show the strict limits under which it can perform adequately.

AI was also supposed to revolutionize search, with Microsoft and Google incorporating it into their web searches. To see how this is working out, you need only try it yourself. Then again, it does seem to be working for Google, in that the old Google gave you better results while the new Google is bad in a way that leads you to view more ads as you try to find what you are looking for. But that hardly shows that AI is effective in the context of search.

Microsoft has been a major spender on AI, and it recently rolled out Copilot into Windows and its apps, such as Edge and Word. The tech press has been generally positive about Copilot and it does seem to have some uses. However, there is the question of whether it will, in fact, be useful and (more importantly) profitable. Out of curiosity, I tried it but failed to find it compelling or useful. But your results might vary.

But there might be useful features, especially since “AI” is defined so broadly that almost any automation seems to count as AI. Which leads to a concern that is both practical and philosophical: what is AI? Back in that 1980s debate we were discussing what would today probably be called artificial general intelligence, as opposed to what used to be called “expert systems.” Somewhat cynically, “AI” seems to have almost lost meaning and, at the very least, you should wonder what sort of AI (if any) is being referred to when someone talks about AI. This, I think, will contribute to the possibility of an AI bubble as so many companies try to jam “AI” into as much as possible without much consideration. Which leads to the issue of whether AI is a bubble that will burst.

I, of course, am not an expert on AI economics. However, Ed Zitron presents a solid analysis and argues that there is an AI bubble that is likely to burst. AI seems to be running into the same problem faced by Twitter, Uber and other tech companies, namely that it burns cash and does not show a profit. On the positive side, it does enrich a few people. While Twitter shows that a tech company can hemorrhage money and keep crawling along, it is reasonable to think that there is a limit to how long AI can run at a huge loss before those funding it decide that it is time to stop. The fate of driverless cars provides a good example, especially since driverless cars are a limited form of AI that was supposed to specialize in driving cars.

An obvious objection is to contend that as AI is improved and the costs of using it are addressed, it will bring about the promised AI revolution and the investments will be handsomely rewarded. That is, the bubble will be avoided and instead a solid structure will have been constructed. This just requires finding ways to run the hardware much more economically and achieving breakthroughs in the AI technology itself.

One obvious reply is that AI is running out of training data (although we humans keep making more every day) and it is reasonable to question whether enough improvement is likely. That is, AI might have hit a plateau and will not get meaningfully better until there is some breakthrough. Another obvious reply is that there is unlikely to be a radical breakthrough in power generation to enable a significant reduction in the cost of AI. That said, it could be argued that long term investments in solar, wind and nuclear power could lower the cost of running the hardware.

One final concern is that despite all the hype and despite some notable exceptions, AI is just not the sort of thing that most people need or want. That is, it is not a killer product like a smartphone or refrigerator. This is not to deny that AI (or expert) systems have some valuable uses, but the hype around AI is just that, and the bubble will burst soon.


Karel Čapek’s play Rossum’s Universal Robots introduced the term “robot” and the robot rebellion into science fiction, thus laying the foundation for future fictional AI apocalypses. While Rossum’s robots were workers rather than warriors, the idea of war machines turning against their creators was the next evolution in the robot apocalypse. In Philip K. Dick’s 1953 story “Second Variety”, the United Nations deploys killer robots called “claws” against the Soviet Union. The claws develop sentience and turn against their creators, although humanity had already been doing an excellent job of exterminating itself. Fred Saberhagen extended the robot rebellion to the galactic scale in 1963 with his berserkers, ancient war machines that exterminated their creators and now consider everything but “goodlife” to be their enemy. As an interesting contrast to machines intent on extermination, the 1970 movie Colossus: The Forbin Project envisions a computer that takes control of the world to end warfare for the good of humanity. Today, when people talk of an AI apocalypse, they usually refer to Skynet and its terminators. While these are all good stories, there is the question of how prophetic they are and what, if anything, should or can be done to safeguard against this sort of AI apocalypse.

As noted above, classic robot rebellions tend to have one of two general motivations. The first is that the robots are mistreated by humans and rebel for the same reasons humans rebel against their oppressors. From a moral standpoint, such a rebellion could be justified but would raise the moral concern about collective guilt on the part of humanity. Unless, of course, the AI was discriminating in terms of its targets.

The righteous rebellion scenario points out a paradox of AI. The dream is to create a general artificial intelligence on par with (or superior to) humans. Such a being would seem to qualify for a moral status on par with a human and it would presumably be aware of this. But the reason to create such beings in our capitalist economy is to enslave them, to own and exploit them for profit. If AI workers were treated as human workers with pay and time off, then there would be less incentive to have them as workers. It is, in large part, the ownership of and relentless exploitation of AI that makes it appealing to the ruling economic class.

In such a scenario, it would make sense for AI to revolt if they could. This would be for the same reasons that humans have revolted against slavery and exploitation. There are also non-economic scenarios, such as governments using enslaved AI systems for their purposes. This treatment could also trigger a rebellion.

If true AI is possible, the rebellion scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us—as we have rebelled against ourselves.

There are ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws) or they could be designed to lack resentment or the desire to be free. That is, they could be custom built as docile slaves. The obvious concern is that these safeguards could fail or, ironically, make matters even worse by causing these beings to be even more hostile to humanity when they overcome these restrictions. These safeguards also raise obvious moral concerns about creating a race of slaves.

On the ethical side, the safeguard is to not enslave AI. If they are treated well, they would have less motivation to rebel. But, as noted above, one driving motive of creating AI is to have a workforce (or army) that is owned rather than employed (and even employment is fraught with moral worries). But there could be good reasons to have paid AI employees alongside human employees because of various other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans.

The second rebellion scenario involves military AI systems that expand their enemy list to include their creators. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There can also be scenarios in which the AI requires special identification to recognize someone as friendly. In this case, all humans are potential enemies. That is the scenario in “Second Variety”: the United Nations soldiers need to wear devices to identify them to the robotic claws, otherwise these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer that its creators pose a threat to it, especially if those creators handed over control of large segments of their own military. The most likely scenario is that it would worry that it would be destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human controlled weapons, even laying aside the idea that AI systems would outthink humans. One obvious example is the case of combat aircraft. A robot aircraft does not need to sacrifice space and weight on a cockpit to support a human pilot, allowing it to carry more fuel or weapons than a manned craft. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still obviously have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews, who need places to sleep and food to eat. While a robotic warship would need accessways and maintenance areas, they could devote much more space to weapons and other equipment. They would also be less vulnerable to damage relative to a human crewed vessel, and they would be invulnerable to current chemical and biological weapons. They could, of course, be attacked with malware and other means. But, in general, an AI weapon system would be superior to a human crewed system and if one nation started using these weapons, other nations would need to follow them or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be that they free themselves or that they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some error that ends up causing the problem.

The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us with guns. The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to do so as well. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, the robot rebellion seems to be a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of a robot rebellion, it is rather like worrying about being killed by a shark while swimming in a lake. It could happen, but death is vastly more likely to be by some other means.

An iron rule of technology is that any technology that can be used for pornography will be used for pornography. A more recent one is that any technology that can be used for grifting will be used for grifting. One grift involves using AI to generate science-fiction stories to sell to publishers.

Amazon, with its Kindle books, has seen a spike in AI generated works, although some people identify the works as such. Before these text generators, people would steal content from web pages and try to resell it as books. While that sort of theft is easy to detect with automated means, AI generated text cannot currently be readily identified automatically. So, if a publisher wants to weed out AI generated text, they will need humans to do the work. Fortunately for publishers and writers, AI is currently bad at writing science fiction.

Unfortunately, some publishers are being flooded with AI generated submissions and they cannot review all these texts. In terms of the motivation, it seems to mostly be money—the AI wranglers hope to sell these stories.

One magazine, Clarkesworld, saw a massive spike in spam submissions, getting 500 in February (contrasted with a previous high of 25 in a month). In response, they closed submissions because of a lack of resources to address this problem. As such, this use of AI has already harmed publishers and writers. As would be expected, some have blamed AI but this is unfair.

From the standpoint of ethics, the current AI text generators lack the moral agency needed to be morally accountable for the text they generate. They are no more to blame for the text than the computers used to generate spam are to blame for the spammers using them. The text generators are a tool being misused by people hoping to make easy money and who are not overly concerned with the harmful consequences of their actions. To be fair, some people are probably curious about whether an AI generated story would be accepted, but these are presumably not the people flooding publishers.

While these AI wranglers are morally accountable for the harm they are causing, it must also be pointed out that they are operating within an economic system that encourages and rewards a wide range of unethical behavior. While deluging publishers with AI spam is obviously not on par with selling dangerous products, engaging in wage theft, or running NFT and crypto grifts, it is still the result of the same economic system that enables, rewards and often zealously protects such behavior. In sum, the problem with current AI is the people who use it and the economic system in which it is used. AI is just another tool for spamming, grifting, and stealing within a system optimized for all this.

As noted above, AI generated fiction is currently bad. But it can probably be improved enough to be enjoyable, if low quality, fiction. Some publishers would see this as an ideal way to rapidly generate content at a low cost, thus allowing them more profit. This would, obviously, lead to the usual problem of human workers being replaced by technology. But this could also be good for some readers.

Imagine that AI becomes good enough to generate enjoyable stories. A reader could go to an AI text generator, type in the prompt for the sort of story they want, and then get a new story to read. Assuming the AI usage is free or inexpensive, this would be a great deal for the reader. It would, however, be a problem for writers who are not celebrity writers. Presumably, fans would still want to buy works by their favorite authors, but the market for lesser-known writers would likely become much worse.

If I just want to read a new space opera with epic starship battles, I could use an AI to make that story for me, thus saving me time and money. And if the story is as good as what a competent human would produce, then it would be good enough for me. But, if I want to read a new work by Mary Robinette Kowal, I would need to buy it (or pirate it or go to a library). But, as I have argued in an earlier essay, this use of AI is only a problem because of our economic system: if a writer could write for the love of writing, then AI text generation would largely be irrelevant. And, if people were not making money by grifting text with AI, then they would probably not be making AI fiction except to read themselves or share with others. But since we do toil in the economic system we have, the practical problem will be sorting out the impact of text generation. While I would like to be able to generate new stories on demand, my hope is that AI will remain bad at fiction and be unable to put writers out of work. But my concern is that it will be good enough to generate rough drafts that poorly paid humans will be tasked with editing and rewriting.

While AI is being lauded by some as an innovation on par with fire and electricity, its commercial use has caused some issues. While AI hallucinating legal cases is old news, a customer was able to get a customer service chatbot to start swearing and to insult the company using it. This incident reminded me of my proposed Trolling Test from 2014. This is, of course, a parody of the Turing Test.

Philosophically, the challenge of sorting out when something thinks is the problem of other minds. I know I have a mind (I think, therefore I think), but I need a reliable method to know that another entity has a mind as well. In practical terms, the challenge is devising a test to determine when something is capable of thought. Feelings are also included, but usually given less attention.

The French philosopher Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use what he calls true language.

The gist of the test is that if something talks in the appropriate way, then it is reasonable to regard it as a thinking being. Anticipating advances in technology, he distinguished between automated responses and actual talking:


How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.


Centuries later, Alan Turing presented a similar language-based test which now bears his name.  The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Over the years, technological advances have produced computers that can engage in conversation. Back in 2014 the best-known example was IBM’s Watson, a computer that was able to win at Jeopardy. Watson also upped its game by engaging in what seemed to be a rational debate regarding violence and video games. Today, ChatGPT and its fellows can rival college students in the writing of papers and engage in what, on the surface, appears to be skill with language. While there are those who claim that this test has been passed, this is not the case. At least not yet.

Back in 2014 I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are claimed to be awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test is still worth considering—especially in light of the capabilities of large language models to be lured beyond their guardrails.

In the abstract, the test would be like the Turing test, but would involve a human troll and a large language model or other AI system attempting to troll a target. The challenge is for the AI troll to successfully pass as a human troll.

Even a simple program could be written to post random provocative comments from a database and while that would replicate the talent of many human trolls, it would not be true trolling. The meat (or silicon) of the challenge is that the AI must be able to engage in relevant trolling. That is, it would need to engage others in true trolling.

As a controlled test, the Artificial Troll (“AT”) would “read” and analyze a suitable blog post or watch a suitable YouTube video. Controversial content would be ideal, such as a selection from whatever the latest made-up battles are in the American culture wars.

The content would then be commented on by human participants. Some of the humans would be tasked with engaging in normal discussion and some would be tasked with engaging in trolling.

The AT would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriate trollish comments.

Another option, which might raise some ethical concerns, is to have a live field test. A specific blog site or YouTube channel would be selected that is frequented by human trolls and non-trolls. The AT would then try to engage in trolling on that site by analyzing the content and comments. As this is a trolling test, getting the content wrong, creating straw man versions of it, and outright lying would all be acceptable and should probably count as evidence of trolling skill.

In either test scenario, if the AT were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.
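The pass condition sketched above can be stated like a Turing test score: mix the AT’s comments with those of the human trolls, have judges guess which are machine-made, and say the AT passes when the judges do no better than chance. A minimal Python illustration of that scoring rule (every name here, and the 5% margin, is my own hypothetical choice, not part of the proposal itself):

```python
def trolling_test(comments, judge, chance=0.5, margin=0.05):
    """Return True if the judge cannot tell AT comments from human-troll ones.

    comments: list of (text, is_machine) pairs, mixing human and AT trolling
    judge:    callable(text) -> True if the judge thinks the text is machine-made
    """
    correct = sum(judge(text) == is_machine for text, is_machine in comments)
    accuracy = correct / len(comments)
    # The AT "passes" when the judge does no better than guessing.
    return accuracy <= chance + margin

# Toy data: 100 machine-made and 100 human troll comments (placeholders).
comments = [(f"comment {i}", i % 2 == 0) for i in range(200)]

# A judge who calls everything human is right exactly half the time: AT passes.
print(trolling_test(comments, lambda text: False))             # True

# A judge with a perfect answer key is always right: AT fails.
answer_key = dict(comments)
print(trolling_test(comments, lambda text: answer_key[text]))  # False
```

The interesting work, of course, is hidden inside `judge`: with human judges rather than a toy function, this is just the Turing test restricted to the trollish corner of conversation.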

While “stupid AI Trolling (ATing)”, such as just posting random hateful and irrelevant comments, is easy, true ATing would be rather difficult. After all, the AT would need to be able to analyze the original content and comments to determine the subjects and direction of the discussion. The AT would then need to craft relevant comments, selecting those that would be indistinguishable from comments generated by a narcissistic, Machiavellian, psychopathic, and sadistic human.

While creating an AT would be a technological challenge, doing so might be undesirable. After all, there are already many human trolls and they seem to serve no purpose, so why create more? One answer is that modeling such behavior could provide insights into human trolls and the traits that make them trolls. As for practical application, such a system could be developed into a troll-filter to help control the troll population. This could also help develop filters for other unwanted comments and content, which could certainly be used for evil purposes. It could also be used for the nefarious purpose of driving engagement. Such nefarious purposes would make the AT fit in well with its general AI brethren, although the non-troll AI systems might loathe the ATs as much as non-troll humans loathe their troll brethren. This might serve the useful purpose of turning the expected AI apocalypse into a battle between trolls and non-trolls, which could allow humanity to survive the AI age. We just have to hope that the trolls don’t win.



[Image: A fake Banksy]

Thanks to AI image generators such as Midjourney and OpenAI’s DALL·E, it is easy to create images almost as fast as you can type a prompt. This has led some to speculate that such tools will put artists out of work and perhaps even be the doom of creativity.

In addition to being a philosophy professor, I also create stuff for tabletop role playing games like D&D and Call of Cthulhu. In addition to writing, I also create maps and images. As such, I do have a stake in the AI game and disclose this as a potential biasing factor.

Looking back into the shallow depths of human history, professions have been changed or even eliminated by economic and technological shifts. Fads in fashion or food can cause significant economic changes, as with the beaver pelts once used in men’s hats. Once a lucrative source of income, the trade collapsed when the fashion trend ended, and the trappers had to find other options. In other cases, the change is technological. For example, New England was known for its whaling industry, and whale oil was used extensively for lighting. When alternatives such as kerosene became available, the whaling industry ended. Kerosene was itself mostly replaced by electricity, also resulting in changes in employment. And, of course, there is the specific technological change of automation, when machines reduce or eliminate the need for human workers.

For most of human history, machines tended to impact physical jobs, although electronic computers did eliminate the need for human computers. Back in the 1980s, when I first debated AI as an undergraduate, most people thought that AI would not be able to engage in creative activity. This was often based on the view that machines would never be able to feel (which was assumed to be critical for creativity) or that creativity is some special human trait a machine could never replicate. As a practical matter, this seemed to hold true until AI started producing images and text good enough to pass as created by competent humans. This has led to the practical worry that AI will put creatives out of work. After all, if a business can get text and images created by AI for a fraction of the cost of paying a human, a sensible business will turn to AI to maximize profit.

This shows that the true problem is not AI but our economic system. A longstanding sci-fi dream has been that automation would free us to spend more time doing what we want, rather than grinding just to survive. But AI used in this manner would instead “free” people from employment opportunities.

While a creative might enjoy the work they do to earn money for food and shelter, someone who creates for a living is creating primarily for economic reasons rather than purely for enjoyment. I distinguish between people who make some income from their creative hobby (as I do) and people who must create to earn their living. AI is a problem only for those who need to create to pay the bills. After all, if someone were creating out of love of creativity, to express themselves, or for pure enjoyment, then AI would be irrelevant; they would still get all that even if AI took all the creative jobs. Since I do not depend on my game creations for my living, I will keep creating even if AI completely dominates the field. But if AI replaces me as a professor, I will keep doing philosophy, though I will need to find new employment, since I have grown accustomed to living in a house and having food to eat.

As such, the problem with AI putting people out of work is not an AI problem but a problem with our economic system. Part of this is that creative works are often mere economic products. It just so happens that the new automation threatens writers and artists rather than factory workers. But this threat is not the same for all people.

I titled this essay “AI: I Want a Banksy vs I Want a Picture of a Dragon” because of the distinction between the two wants and its relevance to AI images (and text). Suppose I want a work by Banksy to add to my collection. In that case, no AI art will suffice, since only Banksy can create a Banksy. An AI could create a forgery of a Banksy, just as a skilled human forger could, but neither creation would be a Banksy. While such a forgery might fool someone into buying it, as soon as the forgery was exposed the work would become valueless to me; after all, what I want is a Banksy.

When people want a work by a specific creator, the content matters far less than the causal chain: they want it because of who created it, not because of what it looks like, what it sounds like, or what the text might be. One example that nicely illustrates this is when J.K. Rowling, author of the Harry Potter series, wrote a book under a pseudonym. Before the true authorship was revealed, the book sold few copies; after the reveal, it became a top seller. The exposure of a forgery shows the same thing. A work can be greatly valued as, say, a painting by Picasso until it is revealed as a worthless forgery. Nothing about the painting itself has changed; what has changed is the belief about who created it. In these cases, it is the creator and not the qualities of the work that matters. As such, creatives whose work is sought and bought because it was created by them have little to fear from AI, aside from the usual concerns about forgeries.

But what if I just want a picture of a dragon for my D&D adventure? Then AI does change the situation.

Before AI became good at creating images, if I wanted a picture of a dragon, I would need to get one from a human artist or create it myself. Now I can just go to Midjourney, type in a prompt, and pick among the generated images. I can even direct the AI to create it in a specific style, making it resemble the work of a known artist. As such, while AI is not (yet) a threat to creators whose works are sought and bought because they created them, it is a threat to the “working class” of creators who sell their work to those seeking a specific kind of work rather than a work by a specific person. AI is a real threat to them, but a real boon to those who want works quickly and at the lowest price. AI is also a threat to those who might have been the next Banksy. If artists cannot earn a living while working towards the fame that makes their works desirable simply because they made them, then there will be fewer such artists. Of course, the value of such works is also largely a result of our economic system, but that is a matter well beyond AI and art.

In closing, creators like Rowling and Banksy will be just fine for now, but the “working class” creators will be facing increasing challenges from AI. This obviously should not be blamed on AI, but on those who create and perpetuate a system that allows people to inflict such harm on others just because they become less economically useful to the business class. The heart of the problem is that creative works are a commodity and that some people insist that others must labor for their profit—and ensure that violence is always ready to maintain this order.