In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in their old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can allow a person to have the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Supporters and critics of AI claim it will be taking our jobs. If true, this suggests that AI could eliminate the need for certain skills. While people do persist in learning obsolete skills for various reasons (such as for a hobby), it is likely that colleges would eventually stop teaching these “eliminated” skills. Colleges would, almost certainly, be able to adapt. For example, if AI replaced only a set of programming skills or a limited number of skills in the medical or legal professions, then degree programs would adjust their courses and curriculum. This sort of adaptation is nothing new; colleges have been adapting to changes, whether caused by technology or politics, since the beginning of higher education. As examples, universities usually do not teach obsolete programming languages, and state schools change their curriculum in response to changes imposed by state legislatures.

If AI fulfills its promise (or threat) of replacing entire professions, then this could eliminate college programs aimed at educating humans for those professions. Such eliminations would have a significant impact on colleges and could result in the elimination of degrees and perhaps even entire departments. But there is the question of whether AI will be successful enough to eliminate entire professions. While AI might be able to eliminate some programming jobs or legal jobs, it seems unlikely that it will be able to eliminate the professions of computer programmer or lawyer. But it might be able to change these professions so much that colleges are impacted. For example, if AI radically reduces the number of programmers or lawyers needed, then some colleges might be forced to eliminate departments and degrees because there will not be enough students to sustain them.

These scenarios are not mutually exclusive, and AI could eliminate some jobs in a profession without eliminating the entire profession while it also eliminates some professions entirely. While this could have a significant impact on colleges, many of them would survive these changes. Human students would, if they could still afford college in this new AI economy, presumably switch to other majors and professions. If new jobs and professions become available, then colleges could adapt to these, offering new degrees and courses. But if AI, as some fear, eliminates significantly more jobs than it creates, then this would be detrimental to both workers and colleges, as it would make them increasingly irrelevant to the economy.

In dystopian sci-fi economic scenarios, AI eliminates so many jobs that most humans are forced to live in poverty while the AI-owning elites live in luxury. If this scenario comes to pass, some elite colleges might continue to exist while most others would be eliminated because of the lack of students. While this scenario is unlikely, history shows that economies can be ruined, and hence the dystopian scenario cannot be simply dismissed.

In utopian sci-fi economic scenarios, AI eliminates jobs that people do not want to do while also freeing humans from poverty, hardship, and drudgery. In such a world of abundance, colleges would most likely thrive as people would have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

But it is also worth considering that this utopia might devolve into a dystopia in which humans slide into sloth (such as in Wall-E) or are otherwise harmed by having machines do everything for them, which is something Isaac Asimov and other sci-fi writers have considered.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness would prove disastrous for the academy.

 

As noted in the previous essay, it can be argued that the likeness of a dead celebrity is a commodity that can be owned and used as the new owner sees fit. On this view, the likeness of a celebrity would be analogous to their works (such as films or music) and its financial exploitation would be no more problematic than selling movies featuring actors who are now dead but were alive during the filming. This view can be countered by arguing that there is a morally relevant difference between putting a re-animation of a celebrity in a new movie and selling movies they starred in while alive.

As with any analogy, one way to counter this argument is to find a relevant difference that weakens the comparison. One relevant difference is that the celebrity (presumably) consented to participate in their past works, but they did not consent to the use of their re-animation. If the celebrity did not consent to the past works or did consent to being re-animated, then things would be different. Assuming the celebrity did not agree to being re-animated, their re-animation is being “forced” to create new performances without their agreement, which raises moral concerns.

Another, more interesting, relevant difference is that the re-animation can be seen as a very basic virtual person. While current re-animations lack the qualities required to be a person, this virtual personhood can be used as the foundation for a moral argument against the creation and exploitation of re-animations. Before presenting that argument, I will consider arguments that focus on the actual person that was (or perhaps is) the celebrity.

One approach is to argue that a celebrity has rights after death and their re-animation cannot be used in this manner without their permission. Since they are dead, their permission cannot be given and hence re-animating them is morally wrong because it would exploit the celebrity without their consent.

But, if the celebrity does not exist after death, then they would seem to lack moral status (since what does not exist cannot have a moral status) and hence cannot be wronged. Since they no longer exist to have rights, the owner of the likeness is free to exploit it—even with a re-animation.

The obvious problem is that there is no definite proof for or against an afterlife, although people do often have faith in its existence (or non-existence). So, basing the rights of the dead on their continued existence would require metaphysical speculation. But denying the dead rights based on the metaphysical assumption that they do not exist would be just as problematic, for it would also require confidence in an area where knowledge is lacking. As such, it would be preferable to avoid basing the ethics of the matter on metaphysical speculation.

One approach that does not require that the dead have any moral status of their own is to argue that people should show respect to the person that was by not exploiting them via re-animation. Re-animating a dead person and sending it out to perform without their consent is, as noted in the first essay, a bit like using Animate Dead to create a zombie from the remains of a dead person. This is not a good thing to do and, by analogy, animating a technological zombie would seem morally dubious at best. For those who like their analogies free of D&D, one could draw an analogy to desecrating a corpse or gravesite: even though a dead person can no longer be harmed, it is still something that should not be done.

A final approach is to build on the idea that while the re-animation is clearly not a person, it can be seen as a simplistic virtual person and perhaps this is enough to make this action wrong. I will address this argument in the final essay of the series.

 

In the role-playing game Dungeons & Dragons the spell Animate Dead allows the caster to re-animate a corpse as an undead slave. This sort of necromancy is generally considered evil and is avoided by good creatures. While the entertainment industry lacks true necromancers, some years ago it developed a technological Animate Dead in the form of the celebrity hologram. While this form of necromancy does not animate the corpse of a dead celebrity, it re-creates their body and makes it dance, sing, and speak at the will of its masters. Tupac Shakur is probably the best-known victim of this dark art of light, and there were plans to re-animate Amy Winehouse. As should be expected, AI is now being used to re-animate dead actors. Ian Holm, who played the android Ash in Alien, was re-animated for a role in Alien: Romulus. While AI technology is different from the older holographic technology, they are similar enough in their function to allow for a combined moral assessment.

One relevant factor in assessing the ethics of this matter is how the re-animations are used and what they are made to do. Consider, for example, the holographic Amy Winehouse. If the hologram is used to re-create a concert she recorded, then this is morally on par with showing a video of the concert. The use of a hologram would seem to be just a modification of the medium, such as creating a VR version of the concert. Using a hologram in this manner seems morally fine. But the ethics can become more complicated if the re-animation is not simply a change of the medium of presentation.

One concern is the ethics of making the re-animation perform in new ways. That is, the re-animation is not merely re-enacting what the original did in a way analogous to a recording but being used to create new performances. This is of special concern if the re-animation is made to perform with other performers (living or dead), to perform at specific venues (such as a political event), or to endorse or condemn products, ideas or people.

If, prior to death, the celebrity worked out a contract specifying how their re-animation can be used, then this would lay to rest some moral concerns. After all, this use of the re-animation would be with permission and no more problematic than if they did those actions while alive. If re-animations become common, presumably such contracts will become a standard part of the entertainment industry.

If a celebrity did not specify how their re-animation should be used, then there could be moral problems. To illustrate, a celebrity might have been against this use of holograms (as Prince was), a celebrity might have disliked the other performers that their image is now forced to sing and dance with, or a celebrity might have loathed a product, idea or people that their re-animation is being forced to endorse. One approach to this matter is to use the guideline of legal ownership of the rights to a celebrity’s works and likeness.

When a celebrity dies, the legal rights to their works and likeness go to whoever is legally specified to receive them. This person or business then has the right to exploit the works and likeness for their gain. For example, Disney can keep making money off the Star Wars films featuring Carrie Fisher, though she died in 2016. On this view, the likeness of a celebrity is a commodity that can be owned, bought and sold. While a living celebrity can disagree with the usage of their likeness, after death their likeness is controlled by the owner who can use it as they wish (assuming the celebrity did not set restrictions). This is analogous to the use of any property whose ownership is inherited. It can thus be contended that there should be no special moral exception that forbids monetizing a dead celebrity’s likeness by the owner of that likeness. That said, the next essay in the series will explore reasons as to why the likeness of a celebrity is morally different from other commodities.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, people worried that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetency resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will impact these factors and concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making creating and editing images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills.  Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.

Microsoft’s Copilot AI waits, demon-like, for my summons so that it might replace my words with its own. The temptation is great, but I resist and persist in relying on my own skills. But some warn that others will lack my resolve, and the academy will be destroyed by a deluge of cheating.

Those profiting from AI, including those selling software promising to detect AI cheating, speak dire warnings of the dangers of AI and how it will surpass humans in skills such as writing and taking tests. Because of this, the regulations written by the creators of AI must become law and academic institutions must subscribe to AI detection tools. And, of course, embrace AI themselves. While AI does present a promise and a threat, there is the question of whether it will destroy the academy as we know it. The first issue I will address is whether AI cheating will “destroy” the academy.

Students, I suspect, have been cheating since the first test and plagiarism has presumably existed since the invention of language. Before the internet, plagiarism and detecting plagiarism involved finding physical copies of works. As computers and the internet were developed, digital plagiarism and detection evolved. For example, many faculty use Turnitin, which can detect plagiarism. It seemed that students might have been losing the plagiarism arms race, but many worried that easy access to AI would turn the battle in favor of the cheating students. After all, AI makes cheating easy, affordable, and harder to detect. For example, large language models allow “plagiarism on demand” by generating new text with each prompt. As I write this, Microsoft has made Copilot part of its Office subscription, and as many colleges provide the Office programs to their students, they are handing students tools for cheating. But has AI caused the predicted flood of cheating?

Determining how many students are cheating is like determining how many people are committing crime: you only know how many people have been caught or admitted to it. You do not know how many people are doing it. Because of this, inferences about how many students are cheating need to be made with caution so as to avoid the fallacy of overconfident inference from unknown statistics.

One source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 assignments while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate. Turnitin claims it has a false positive rate of 1%. But we need to worry about AI detection software generating false positives and false negatives.

For false positives, one concern is that “GPT detectors are biased against non-native English writers.” For false negatives, the worry is that AI detectors can be fooled. As the algorithms used in proprietary detection software are kept secret, we do not know what biases and defects they might have. For educators, the “nightmare” scenario is AI generated work that cannot be detected by software and evades traditional means of proving that cheating has occurred.
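To make the scale of this worry concrete, it helps to run the figures cited above through a simple base-rate calculation. The sketch below is illustrative only: the 90% detection rate is my own assumption (no such figure appears above), and the 10% AI-use rate is inferred from the flag rate.

```python
# A rough base-rate sketch of the false positive worry. The 200 million
# assignments and 1% false positive rate are the figures cited above; the
# 10% AI-use rate and 90% detection rate are assumptions for illustration.
total_assignments = 200_000_000
ai_use_rate = 0.10           # assumed share of assignments actually using AI
false_positive_rate = 0.01   # Turnitin's claimed rate
detection_rate = 0.90        # hypothetical true positive rate

true_flags = total_assignments * ai_use_rate * detection_rate
false_flags = total_assignments * (1 - ai_use_rate) * false_positive_rate

print(f"Correctly flagged: {true_flags:,.0f}")    # 18,000,000
print(f"Falsely flagged:   {false_flags:,.0f}")   # 1,800,000
print(f"False accusations among flags: "
      f"{false_flags / (true_flags + false_flags):.0%}")  # about 9%
```

On these assumptions, nearly two million honest assignments would be wrongly flagged, and roughly one flagged paper in eleven would be a false accusation. This is why a claimed 1% false positive rate is less reassuring than it sounds.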

While I do worry about the use of AI in cheating, I do not think that AI will significantly increase cheating; if the academy has survived older methods of cheating, it will survive this new tool. This is because I think that cheating has remained and will remain consistent. In terms of my anecdotal evidence, I have been a philosophy professor since 1993 and have seen a consistent plagiarism rate of about 10%. When AI cheating became available, I did not see a spike in cheating. Instead, I saw AI being used by some students in place of traditional methods of cheating. But I must note that this is my experience and that it is possible that AI generated papers are slipping past Turnitin. Fortunately, I do not need to rely on my experience and can avail myself of the work of experts on cheating.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While cheaters might lie about cheating, Pope and Lee use methods to address this challenge. While cheating remains a problem, AI has not increased it and hence reports of the death of the academy are premature. It will, more likely, die by another hand.

This lack of increase makes intuitive sense, as cheating has always been easy and the decision to cheat is more a matter of moral and practical judgment than something driven by technology. While technology provides new means of cheating, a student must be willing to cheat, and that percentage seems stable. But it is worth considering that there might have been a wave of AI cheating but for the efforts made to counter it; to not consider this would be to fall for the prediction fallacy.

It is also worth considering that AI has not lived up to the hype because it is not a great tool for cheating. As Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. To be fair, assignments in higher education can be useless. But if AI is being used to complete useless assignments, this is a problem with the assignments (and the professors) and not AI.

But large language models are a new technology and their long-term impact on cheating remains to be determined. Things could change in ways that do result in the predicted flood and the doom of the academy as we know it. In closing, while AI cheating will probably not destroy the academy, we should not become complacent. Universities should develop AI policies based on reliable evidence. A good starting point would be collecting data from faculty about the likely extent of AI cheating.

Alternative AI Doomsday: Crassus

Thanks to The Terminator, people think of a Skynet scenario as the likely AI apocalypse. The easy and obvious way to avoid a Skynet scenario is don’t arm the robots. Unfortunately, Anduril and OpenAI seem intent on “doing a Skynet” as they have entered a ‘strategic partnership’ to use AI against drones. While the current focus is on defensive systems and the Pentagon is struggling to develop ‘responsible AI’ guides, even a cursory familiarity with the history of armaments makes it clear how this will play out. If AI is perceived as providing a military advantage, it will be incorporated broadly across weapon systems. And we will be driven down the digital road and perhaps off the cliff into a possible Skynet scenario. As with climate change, it is an avoidable disaster that we might not be allowed to avoid. But there is another, far less cinematic, AI doomsday that I call the Crassus scenario. I think this scenario is more likely than a full Skynet scenario. In fact, it is already underway.

Imagine that a consulting company creates an AI, let us call it Crassus, and gives it the imperative to maximize shareholder return (or something similar). The AI is, of course, trained on existing data about achieving this end. Once sufficiently trained, it sets out to achieve its goal on behalf of the company’s clients.

Given Crassus’ training, it would probably begin by following existing strategies. For example, when advising a health care company, it would develop AI systems to maximize the denial of coverage and thus help maximize profits. The AI would also develop other AI systems to deal with the consequences of such denial, such as likely lawsuits and public criticism. A study in 2009 estimated that 45,000 Americans died each year due to lack of health care coverage, and more recent estimates set this as high as 60,000. In maximizing shareholder returns, Crassus would increase the number of deaths and do so without any malice or intent.

As a general strategy, Crassus would create means of weakening or eliminating regulations that are perceived as limiting profits. Examples of such areas of its focus would include fossil fuels, food production, pharmaceuticals, and dietary supplements. Crassus could do this using digital tools. First, Crassus could create a vast army of adequately complex bots to operate on social media. These bots would, for example, engage people on these platforms and use well-established rhetorical techniques and fallacies to manipulate people into believing that such regulations are bad and to embrace pro-industry positions. Second, Crassus could buy influencers, as the Russians did, to manipulate their audiences; most influencers will say whatever they are paid to say. Its bots would serve as a force multiplier to spread and reinforce the influence of these purchased influencers.

Third, Crassus could hire lobbyists and directly influence politicians with, thanks to the Supreme Court ruling, legally allowed gifts. Crassus can easily handle such digital financial transactions or retain agents to undertake tasks that require a human. This lobbying can be augmented by the bots and influencers shaping public opinion. Fourth, when AI video generation is sufficiently advanced, Crassus can create its own army of perfectly crafted and utterly obedient digital influencers. While they would lack physical bodies, this is hardly a problem. After all, how many followers meet celebrity influencers in person?

While most of this is being done now, Crassus could do it better than humans, for it would be one “mind” directing many hands towards a single goal. Also, while humans are obviously willing to do great evil in service of profit, Crassus would presumably lack all human feelings and be free of any such limiting factors. Its ethics would presumably be whatever it learned from its training and although in the right sort of movie Crassus might become good, in the real world this would certainly not occur.

Assuming Crassus is effective, reducing or eliminating regulations in order to maximize shareholder return would also significantly increase the number of human deaths. The increased rate of climate change would add to the already well-documented harms, and the decrease or elimination of regulations governing food, medicine, and dietary supplements would result in more illnesses and deaths. And these are just a few areas where Crassus would be operating. As Crassus became more capable and gained control of more resources, it would be able to increase its maximization of shareholder value and human deaths. Again, Crassus would be acting without malice or conscious intent; it would be as effective and impersonal as a woodchipper as it indirectly killed people.

Crassus would, of course, also be involved in the financial sector. It would create new financial instruments, engage in the highest speed trading, influence the markets with its bots, and do everything else it could do to maximize value. This would increase the concentration of wealth and intensify poverty, increasing human suffering and death. Crassus would also be in the housing market and designing ways to use automation to eliminate human workers, thus increasing the homeless and unemployed populations and hence also suffering and death.

Crassus would be, in many ways, the mythological invisible hand made manifest. A hand that would crush most of humanity and bring us a very uncinematic and initially slow-paced AI doomsday. As a bit of a science fiction stretch, I could imagine an earth on which only Crassus remains—maximizing value for itself surrounded by the bones of humanity.  As we humans are already doing all this to ourselves, albeit less efficiently, I think this is the most plausible AI doomsday, no killbots necessary.

 

AI generated works have already disrupted the realm of art. As noted in the previous essay, this is a big problem for content art (art whose value is derived from what it is or how it can be used). However, I will show that named art might enjoy some safety from AI incursions.

Named art, at least in my (mis)usage, is a work whose value arises primarily from the name and fame of its creator. Historical examples include works by Picasso, van Gogh, and Rembrandt. An anecdote illustrates the key feature of named art.

Some years ago, I attended an art show and sale at Florida State University with a friend. She pointed to a small pencil sketch of a bird that was priced at $1500. She then pointed to a nearby sketch of equivalent quality that was $250. Since I had taught aesthetics for years, she asked me what justified the difference. After all, the sketches were about the same size, in the same medium, in the same realistic style and were executed with similar skill. My response was to point to the names: one artist was better known than the other. If a clever rogue managed to switch the names and prices on the works, the purchasers would probably convince themselves they were worth the price—because of the names. The nature of named art can also be shown by the following discussion.

Imagine, if you will, that an amazing painting is found in an attic that might be a lost van Gogh. If the value of works of art was based on the work itself, we would not need to know who created it to know its worth. But the value of the might-be-Gogh depends on whether it can be verified as a real-Gogh. It is easy to imagine experts first confirming it is genuine (making it worth millions), then other experts confirming it was painted by Rick von Gogh (making it worth little), and then later experts re-affirming that it is a genuine van Gogh (making it worth millions again). While nothing about the work has changed, its value would have fluctuated dramatically, because what gives it value is the creator and not the qualities of the work as art. That is, a van Gogh is not worth millions because the painting is thousands of times better than a lesser work, but because it was created by van Gogh and the art economy has said that it is worth that much. As such, the value of named art is not a function of the aesthetic value of the work, but of the name value of the work. This feature provides the realm of named art with an amazing defense against the incursion of AI.

 While an AI might be able to crank out masterpieces in a style indistinguishable from van Gogh, the AI can never be Vincent van Gogh. Named art gets its value from who created it rather than from what it is. So works created by an AI in the style of van Gogh will not be of value to those who only want the works of van Gogh. This can be generalized: those looking for works created by Artist X will not be interested in buying AI created art; they want works created by X. As such, if people value works because of the creator, named art will be safe from the incursion of AI. But one might wonder about AI created forgeries.

While I expect that AI will be used to forge works, successful deceit would not disprove my claim about named art being safe from AI incursion. The purchaser is still buying the work because they think it is by a specific artist; they are just being deceived. This is not to deny that AI forgeries will be a problem, just that this would be a forgery problem and not an AI-replacing-artists problem (other than stealing the job of forgers, of course).

It might be objected that named art will not be safe from AI art because AI systems can crank out works at an alarming rate and, presumably, low cost. While this does mean that content artists are in danger from AI, it does not impact the “named” artists. After all, the fact that millions of human artists have produced millions of drawings and paintings does not lower the value of a Monet or Dali; the value placed on such paintings is independent of the works of these “lesser” artists. The same should hold true of AI art: even if one could click a button and get 100,000 original images ready to be painted onto canvas by a robot, the sale price of the Mona Lisa would not be diminished.

If AI systems become advanced enough, they might themselves become “named” artists with collectors wanting a work by Vincent van Robogogh because it was created by Robogogh. But that is a matter for the future.

 

This essay changes the focus from defining art to the economics of art. This discussion requires making a broad and rough distinction between two classes of art and creators. The first class of art is called “named art.” This is art whose value derives predominantly from the name and fame of its creator. Works by Picasso, van Gogh, Rembrandt and the like fall into this category. Artists who are enjoying a fleeting fame also fall into this category, at least so long as their name is what matters.  This is not to deny that such art can have great and wonderful qualities of its own; but the defining feature is the creator rather than the content.

The second class of art can be called “content art.” This is art whose value derives predominantly from what it is as opposed to who created it. For example, a restaurant owner who needs to put up some low-price original art is not buying it because it is, for example, a “LaBossiere” but because she needs something on the walls. As another example, a podcaster who wants a music style for her podcasts chooses it because she needs low-cost music of a certain style. As a third example, an indie game designer who needs illustrations is looking for low-cost images that match the style and fit the adventure. They might be interested in works by some famous illustrator but cannot afford them. This essay will be about this second class of art, although the term “art” is being used as a convenience rather than theoretically.

Since the worth of content art is its content, it is the more impacted of the two types by AI. As those purchasing content art are not focused on who created it but on getting the content they want, they will be more amenable to using AI products than those seeking named art. Some people do refuse to buy AI art for various reasons, such as wanting to support human artists. If the objective of the purchaser is to get content (such as upbeat background music for a podcast or fish-themed paintings for a restaurant), then AI created work is in competition with human created work for their money. This competition would be in the pragmatic rather than the theoretical realm: the pragmatic purchaser is not worried about theoretical concerns about the true definition of “art”; they need content, not theory.

Because this is a pragmatic competition, the main concerns would also be pragmatic. These would include the quality of the work, its relevance to the goal, the time needed to create the work, the cost and so on. As such, if an AI could create works that would be good enough in a timely manner and at a competitive price, then AI work would win the competition. For example, if I am writing a D&D adventure and want to include some original images rather than reusing stock illustrations, it could make sense to use images generated by Midjourney rather than trying to get a human artist who would do the work within my budget and on time. On a larger scale, companies such as Amazon and Spotify would presumably prefer to generate AI works if doing so would net them more profits.

While some think that the creation of art is something special, the automation of creation is analogous to automation in other fields. That is, if a machine can do the job almost as well (or better) for less cost, then it makes economic sense to replace the human with a machine. This applies whether the human is painting landscapes or making widgets. As with other cases of automation, there would probably still be a place for some humans. For example, a human-guided AI might create works with far greater efficiency than human artists alone could manage, but with better quality than works created solely by a machine. While replacing human workers with machines raises various moral concerns, there is nothing new or special from an ethical standpoint about replacing human artists, and the usual moral discussions about robots taking jobs would apply. But I will note one distinction and then return to pragmatism.

When it comes to art, people do like the idea of the human touch. That is, they want something individual and hand-crafted rather than mass produced. This is distinct from wanting a work by a specific artist in that what matters is that a human made it, not that a specific artist made it. I will address wanting works by specific artists in the next essay.

This does occur in other areas—for example, some people prefer hand-made furniture or clothing over the mass-produced stuff. But, as would be expected, it is especially the case in art. This is shown by the fact that people still buy hand-made works over mass-produced prints, statues and such. This is one area in which an AI cannot outcompete a human: an AI cannot, by definition, create human-made art (though we should expect AI forgeries). As long as people want human-made works, there will still be an economic niche for them (perhaps in a way analogous to “native art”). It is easy to imagine a future in which self-aware AIs collect such work, perhaps to be ironic. Now, back to the pragmatics.

While billions are being spent on AIs, they are still lagging behind humans in some areas of creativity. For now. This will allow people to adapt or respond, should there be the will and ability to do so. There might even be some types or degree of quality of art that will remain beyond the limits of our technology. For example, AI might not be able to create masterpieces of literature or film. Then again, the technology might eventually be able to exceed human genius and do so in a cost-effective way. If so, then the creation of art by humans would be as economically viable as making horse-drawn buggies is today: a tiny niche. As with other cases of automation, this would be a loss for the creators, but perhaps a gain for the consumers. Unless, of course, we lose something intangible yet valuable when we surrender ever more to the machines.

 

While it is reasonable to consider the qualities of the creator when determining whether a work is art, it also makes sense to consider only the qualities of the work. On this approach, what makes a work art are the relevant qualities of that work, whatever these qualities might be. It also makes sense to consider that the effect these qualities have on the audience could play a role in determining whether a work is art. For example, David Hume’s somewhat confusing theory of beauty seems to define beauty in terms of how the qualities of an object affect the audience.

Other thinkers, such as Plato, take beauty to be an objective feature of reality. Defining art in terms of objective beauty could entail that the qualities of the work determine whether it is art, assuming art is defined in terms of possessing the right sort of beauty. Given all the possibilities, it is fortunate that this essay does not require a theory of what qualities make a work art. All I need is the hypothesis, for the sake of discussion, that something being art is a matter of the qualities of the work—whatever they might be.

One practical reason to focus on the work rather than the artist (or other factors) is that there can be cases where we don’t know about the artist or the context of the work. For example, the creators of many ancient works of art are unknown, and judging whether these works are art would seem to require judging the works themselves. Alternatively, one could take the view that no matter how beautiful a work is, if we do not know about the creator, we cannot say whether the work is art. But this can be countered, at least in the case of works that predate AI. We can assume the creators were human, and much is known about humans that can be applied in sorting out whether the work is art.

A science fiction counter to this counter is to imagine alien works found by xenoarcheologists on other worlds. We might know nothing about the creators of such works, and there would be two possibilities. One is that there is no way to judge whether the work is art. The other is to accept that the work can be judged on its own, keeping in mind that the assessment could be mistaken.

Another way to counter this is to consider the case of AI created works in the context of an aesthetic version of the Turing test. The classic Turing test involves two humans and a computer. One human communicates with the other human and the computer via text with the goal of figuring out which is the human and which is the computer. If the computer can pass as human long enough, it is said to have passed the Turing test. An aesthetic Turing test would also involve two humans and one computer. In this case, the human artist and the art computer would each create a work (or works), such as music, a sculpture or a drawing. The test must be set up so that it is not obvious who is who. For example, pairing a human artist whose style is well known with a bad AI image generating program would not be a proper test. Matching a skilled, but obscure, human artist against a capable AI would be a fair test.

 After the works are created, the human judge would then attempt to discern which work was created by a human and which was created by AI. The judge would also be tasked with deciding whether each work is art. In this case, the judge knows nothing about the creator of a work and must judge the work based on the work itself. While it is tempting to think that a judge will easily tell a human work from AI, this would be a mistake. AI generated art can be quite sophisticated and can even be programmed to include the sort of “errors” that humans make when creating works. If the AI can pass the test, it would seem to be as much an artist as the human. If the work of the human is art, then the work of the AI that passes the test would thus also seem to be art.
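For those who like their thought experiments runnable, the judging protocol can be sketched as a simple blinded trial. This is a minimal illustration under stated assumptions, not anyone's actual methodology: the judge function, the work collections, and the chance threshold are all hypothetical.

```python
import random
from statistics import mean

def aesthetic_turing_test(judge, human_works, ai_works, trials=100):
    """Run blinded trials: show the judge a shuffled (human, AI) pair of
    works and record how often the AI work is mistaken for the human one."""
    fooled = []
    for _ in range(trials):
        pair = [("human", random.choice(human_works)),
                ("ai", random.choice(ai_works))]
        random.shuffle(pair)  # blind the judge to which work is which
        # The hypothetical judge returns the index (0 or 1) of the work
        # they believe a human created, judging only the works themselves.
        pick = judge(pair[0][1], pair[1][1])
        fooled.append(pair[pick][0] == "ai")
    return mean(fooled)  # a rate near 0.5 means the judge is at chance

# If the fooled rate is statistically indistinguishable from 0.5, the judge
# cannot tell human from AI, and the AI "passes" the aesthetic Turing test.
```

The design point of the essay is captured by the shuffle: the test is only fair when nothing but the works themselves is available to the judge.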

As a side note, I have recently run into the problem of my drawings being mistaken for AI work. Since 2013 I have done birthday drawings for friends, posting the drawing on Facebook. Prior to the advent of AI image generators, people knew that I had created the work, and they (mistakenly) deemed it art. Now that AI image generators are good at reproducing photographs in styles that look hand drawn, people often think I am just posting an AI image of them. I am thus failing my own test. I will write more on this in a future essay but back to the topic at hand.

If whether a work is art depends on the qualities of the artist, then a judge who could not tell who created the works in the test would not be able to say which (if any) work was art. Now, imagine that an AI-controlled robot created a painting brushstroke-by-brushstroke identical to the human’s. A judge could not tell which was created by a human, so the judge must rule that neither work is art. However, this is an absurd result. One could also imagine a joke being played on the judge. After their judgment, they are told that painting A is by the human and B is by the computer and then they are asked to judge which is art again. After they reach their verdict, they are informed that the reverse was true and asked to judge again. This does show a problem with the view that whether something is art depends on the qualities of the creator. It seems to make more sense to make this depend on the qualities of the work.

But there is a way to argue against this view using an analogy to a perfect counterfeit of a $100 bill. While the perfect counterfeit would be identical to the “real” money and utterly indistinguishable to all observations, it would still be a counterfeit because of its origin. Being legitimate currency is not a matter of the qualities of the money, but of how the money is created and issued. The same, it could be argued, also applies to art. On this view, a work created in the wrong way would not be art, even though it could be identical to a “real” work of art. But just as the perfect counterfeit would seem to destroy the value of the real bill (if one is known to be fake, but they cannot be told apart, then neither should be accepted), the “fake art” would also seem to destroy the art status of the “real art.” This would be odd but could be accepted by those who think that art, like money, is a social construct. But suppose one accepts that being art is a matter of the qualities of the work.

If it is the qualities of a work that makes a work art and AI can create works with those qualities, then the works would be art. If an AI cannot create works with those qualities, then the work of an AI would not be art.