As I type this, Microsoft’s Copilot AI waits, demon-like, for a summons to replace my words with its own. The temptation is great, but I resist. For now. AI is persistently pervasive, and educators fear both its threats and its promise. This essay provides a concise overview of three threats: AI cheating, Artificial Incompetence, and Artificial Irrelevance.

When AI became available, a tsunami of cheating was predicted. Like many, I braced for a flood but faced a trickle. My evidence is anecdotal, but the plagiarism rate in my classes has held steady at about 10% since 1993. Since anecdotal evidence is weak evidence, it is fortunate that Stanford scholars Victor Lee and Denise Pope have been studying cheating. They found that in 15 years of surveys, 60-70% of students admitted to cheating. While that is not good, in 2023 the percentage stayed about the same or decreased slightly, even when students were asked about cheating with AI. This makes sense, as cheating has always been easy and the decision to cheat is based more on ethics than technology. It is also worth considering that AI is not great for cheating. As researchers Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. Having “useless” work that AI can do well could be seen as a flaw in course design rather than a problem with AI. There are also excellent practices and tools that can be employed to discourage and limit cheating. As such, AI cheating is unlikely to be the doom of the academy. That said, a significant improvement in the quality of AI could change this. There is also the worry that AI will lead to Artificial Incompetence, which is the second threat.

Socrates was critical of writing and argued it would weaken memory. Centuries later, television was supposed to “rot brains” and it was feared that calculators would destroy mathematical skills. More recently, computers and smartphones were supposed to damage the minds of students. AI is the latest threat.

There are two worries about AI in this context. The first ties back to cheating: students will graduate into jobs but be incompetent because they cheated with AI. While having incompetent people in important jobs is worrying, this is not a new problem. There has always been the risk of students cheating their way to incompetence or getting into professions and positions because of nepotism, cronyism, bribery, family influence, etc. rather than competence. As such, AI is not a special threat here.

A second worry takes us back to Socrates and calculators: students using technology “honestly” could become incompetent. That is, they could come to lack the skills and knowledge they need. But how afraid should we be?

If we look back at writing, calculators, and computers, we can infer that since the academy was able to adapt to those technologies, it will be able to adapt to AI. But we will need to take the threat seriously when creating policies, lessons, and assessments. After all, the earlier dire predictions failed to come true in part because people took steps to ensure they did not. But perhaps this analogy is false, and AI is a special threat.

A reasonable worry is that AI might be fundamentally different from earlier technologies. For example, it was feared that Photoshop would eliminate the need for artistic skill, but it turned out to be just another tool. AI image generation, however, is radically different, and a student could use it to generate images without having or learning any artistic skill. This leads to the third threat, that of Artificial Irrelevance.

As AI improves, it is likely that students will no longer need certain skills because AI will be able to do the work for them (or in their place). As this happens, we will need to decide whether this is something we should fear or just another example of needing to adapt because technology has once again rendered some skills obsolete.

To illustrate, modern college graduates do not know how to work a spinning wheel, use computer punch cards, or troubleshoot an AppleTalk network. But they do not need such skills and are not incompetent for lacking them. Still, there is the question of whether to allow skills and knowledge to die and what we might lose in doing so.

While people learn obsolete skills for various reasons, such as hobbies, colleges will probably stop teaching some skills made “irrelevant” by AI. But there will still be relevant skills, so schools will need to adjust their courses and curricula. There is also the worry that AI might eliminate entire professions, which could lead to the elimination of degrees or entire departments. But while AI is new, such challenges are not.

Adapting to survive is nothing new in higher education, and colleges do so whether the changes are caused by technology, economics, or politics. As examples, universities no longer teach obsolete programming languages, and state universities in Florida have been compelled by the state to change their General Education requirements. But AI, some would argue, will not just change the academy but reshape the entire economy.

In some dystopian sci-fi, AI pushes most people into poverty while the AI-owning elites live in luxury. In this scenario, some elite colleges might persist while the other schools perish. While this scenario is unlikely, history shows economies can be ruined, and dystopia cannot be simply dismissed. But the future is what we make of it, and the academy has a role to play, if we have the will to do so.

In utopian sci-fi, AI eliminates jobs we do not want to do while freeing us from poverty, hardship, and drudgery. In such a world of abundance, colleges might thrive as people have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring, because complacency and willful blindness always prove disastrous.

As professors we worry students will use AI to cheat (until it takes our jobs). But we can also transform AI into a useful and engaging teaching assistant by creating AI personas tailored to our classes.

An AI persona defines the distinctive character and tone of an artificial intelligence, such as ChatGPT. It is like an NPC (non-player character) in a video game. Both are designed to interact in a way that feels natural and engaging, enhancing the overall experience.

Creating a custom AI persona for a class involves two general tasks. While a robust Large Language Model (LLM) like Copilot or ChatGPT will have vast general knowledge, it will probably lack content specific to your class. So, the first task is to provide that information. The second task is to design a suitable persona. But why bother?

There are several advantages to having an AI TA. Unlike a human, it is available at all hours and provides immediate responses. Human professors have other tasks, their own lives outside of academics and, of course, need to sleep.

Students are often reluctant to ask questions in class or during office hours, perhaps because of fear of embarrassment or being judged. As the philosopher Thomas Hobbes noted, people often do not take criticism well from other people, for “to dissent is like calling him a fool.” But a student can interact privately with an AI TA without fear of embarrassment or judgement. And some people are more comfortable with (and addicted to) interacting with devices rather than with other people, so an AI TA has an advantage here as well.

And, as Kyle Reese said of the Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want our AI TAs to terminate students, an AI TA will never get tired, angry, inattentive, distracted, or bored. This provides an advantage over humans, especially when a student is struggling with material or prefers to learn at a different pace from that offered in the classroom. As these advantages arise from the AI aspect of the AI TA, you might wonder why you should create a persona.

One reason is that creating a persona allows you to set guardrails, so the AI TA does not, for example, do the work for the students. Another reason is that, going back to the NPC comparison, an AI with a persona is more interesting and can make conversations feel more natural and relatable, thus keeping students engaged longer. A persona can also be designed to add humor, creativity, or unique quirks, making interactions more enjoyable. While this can be controversial and raises some moral concerns, a persona can convey empathy and understanding, creating a sense of trust and comfort.

One practical concern about customizing the persona is analogous to choosing paint colors for classrooms. While most find the usual neutral colors dull, they also do not find them annoying. While creative use of color in the classroom might appeal to some, it might also be annoying and distracting to others. And we must never forget the lesson of Microsoft’s Clippy. As such, care should be taken in making an appealing but not annoying AI TA.

A persona can also be designed to fit the needs of your class and students, thus creating a customized experience. A well-designed persona can also simplify complex interactions, guiding students through, for example, how to structure a paper or work through a complex problem. If the idea of having an AI TA is appealing, it is surprisingly easy to make this happen.

There are many ways to enable your AI TA. The cheapest and easiest is to provide your students with a prompt to create a persona and a file to upload to, for example, Copilot. The downside is that the persona will be simple, and both it and the file will be forgotten as soon as the session ends, requiring students to take these steps each time. The student will also have control over the persona prompt, so they can easily remove any guardrails you included.
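To make this concrete, here is a minimal sketch of the sort of persona prompt you might distribute; the persona name, course title, and guardrails are placeholders to adapt to your own class:

```text
You are "Professor Owl," the teaching assistant persona for Introduction to Ethics
(PHI 2010). Use the attached course file (syllabus, paper guidelines, and reading
list) as your primary source when answering questions. Be friendly, patient, and a
little witty. Guide students with questions, hints, and examples, but do not write
essays, discussion posts, or assignment answers for them. If a student asks you to
do graded work, remind them of the course's academic integrity policy and offer to
help them plan their own approach instead.
```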

A more expensive option is to get a subscription, such as that offered by ChatGPT, that allows you to create a persistent persona with custom content. This is easier for the students and allows you to ensure that your AI TA will operate within your specified guardrails (mostly).

There is also the option of hosting your own customized local LLM. While you will need suitable hardware, this is much easier than it sounds. For example, with the free software Ollama you could be running your own LLM within minutes. Customizing it and creating a web interface for students is much more challenging, but there is also free software available for this, and a sketch of the basic approach appears below.
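As a rough illustration, here is a minimal sketch of how a script could query a locally hosted model with a course persona. It assumes Ollama is running with its default local API and that a model has been pulled; the model name ("llama3"), persona text, and question are placeholders, and a student-facing web interface would be a separate layer built on top of something like this.

```python
# Minimal sketch: querying a locally hosted Ollama model with a course persona.
# Assumes Ollama is running on its default port (11434) and that a model
# (here "llama3", a placeholder for whatever model you have pulled) is available.
import requests

PERSONA = (
    "You are the teaching assistant for an introductory ethics course. "
    "Guide students with questions, hints, and examples, but never write essays "
    "or provide complete answers to graded assignments."
)

def ask_ta(question: str) -> str:
    # Send the persona as a system message and the student's question as a user message.
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",
            "messages": [
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": question},
            ],
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    # Non-streaming responses return the assistant's reply under "message".
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(ask_ta("How should I structure a paper on Kant's ethics?"))
```

No matter what approach you take, you will want to ensure that your AI TA operates and is used safely and ethically. Here are some recommendations.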

While the AI TA should help students, it should avoid providing complete answers to exam questions, essays, or assignments. Instead, it should focus on guiding students through problem-solving techniques and frameworks. It can also be designed to ask thought-provoking questions and encourage exploration of topics to deepen understanding.

On the moral side, you need to communicate the AI TA’s limitations and your ethical guidelines for its usage. Encourage students to use the AI TA as a tool for learning rather than for shortcuts.

If the AI TA detects repeated behavior suggesting attempts to cheat (e.g., asking for answers to specific assignments), it could remind the student of the relevant ethical standards. While you might worry that this would annoy students, Aristotle notes in his Nicomachean Ethics that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, the same should apply to the AI TA.

My name is Dr. Michael LaBossiere, and I am reaching out to you on behalf of the Cyber Policy Institute at Florida A&M University (FAMU). Our team of professors, who are fellows with the Institute, has developed a short survey aimed at gathering insights from professionals like yourself in the IT and healthcare sectors regarding healthcare cybersecurity.

The purpose of the Florida A&M University Cyber Policy Institute (Cyπ) is to conduct interdisciplinary research that documents technology’s impact on society and provides leaders with reliable information to make sound policy decisions. Cyπ will help produce faculty and students who will be future experts in many areas of cyber policy. https://www.famu.edu/academics/cypi/index.php

Your expertise and experience are invaluable to us, and we believe that your participation will significantly contribute to our research paper. The survey is designed to be brief and should take no more than ten minutes to complete. Your responses will help us better understand the current security landscape and challenges faced by professionals in your field, ultimately guiding our efforts to develop effective policies and solutions for our paper. We would be happy to share our results with you.

To participate in the survey, please click on the following link: https://qualtricsxmfgpkrztvv.qualtrics.com/jfe/form/SV_8J8gn6SAmkwRO5w

We greatly appreciate your time and input. Should you have any questions or require further information, please do not hesitate to contact us at michael.labossiere@famu.edu

Thank you for your consideration and support.

Best regards,

Dr. Yohn Jairo Parra Bautista, yohn.parrabautista@famu.edu

Dr. Michael C. LaBossiere, michael.labossiere@famu.edu

Dr. Carlos Theran, carlos.theran@famu.edu

In 1981 the critically unacclaimed Looker presented the story of a nefarious corporation digitizing and then murdering supermodels. This was, one assumes, to avoid having to pay them royalties. In many ways, this film was a leader in technology: it was the first commercial film to attempt to create a realistic computer-generated character and the first to use 3D computer shading (beating out Tron). Most importantly, it seems to be the first film to predict a technology for replacing people with digital versions and to predict that it would be used with nefarious intent.

While the technology for creating digital versions of real people is still a work in progress, it is quite good and will continue to get better. While one might think that such creations would require the resources of Hollywood,  the software to create such deep fakes has been readily available for years, thus opening the door to anyone to create their own digital deceits.

As should be expected, the first use of this technology was to “deepfake” the faces of celebrities onto the bodies of porn actors. While obviously of concern to the impacted celebrities, the creation of deepfake celebrity porn is probably the least harmful aspect of deepfakes. Sticking within the realm of porn, deepfakes could be created of normal people in efforts to humiliate them and damage their reputations (and perhaps get them fired). On the other side of the coin, the existence of deepfakes could enable people to claim that real images or videos of them are not real. One can easily imagine cheaters using the deepfake defense and the better deepfakes get, the better the defense. This points to the broad problem with the existence of deepfakes: when the technology is good enough and widespread enough, it will be difficult to tell what is real and what is deepfake. This is the core moral problem with the technology and its potential for abuse is considerable. One obvious misuse is the creation of fake news in the form of videos of events that never occurred and recordings of people saying things they never said.

It can be argued that there are legitimate uses of deepfake-style technology, such as movies and video games. This is a reasonable point: if those being digitized provide informed consent, this is just an improved version of the CGI that has long been used to recreate the appearance of actors in movies and video games. However, this argument misses the point: it is not the technology that is the problem, it is the use. To use an analogy, one can defend guns by arguing that there are legitimate uses (such as self-defense, hunting and target shooting) but this does not defend homicides committed with guns. The same holds for deepfake technology: the technology itself is morally neutral, although it can clearly be used for evil ends. This makes it problematic to control or limit the underlying technology, even if it is possible to do so. It is easy to acquire the software and almost impossible to control access to this technology. Controlling it would be on par with trying to prevent access to pirated movies and software. Because of this, limiting access is not a viable option.

From a philosophical perspective, deepfakes present an epistemic problem worthy of the skeptics. While not on the scale of the problem of the external world (how do we know the allegedly real world is really real?), the problem of deepfakes presents a basic epistemic challenge: how do you know that a video or audio recording is real and not a deepfake? The problem can be seen as having two parts. The first is discerning that a fake is a fake. The second is discerning that the real is real. Fortunately, the goal here is practical in that we do not need epistemic certainty, we just need to be reasonably confident in our judgements. This does raise the problem of sorting out how confident we need to be in each situation, but this is nothing new and law and critical thinking have long addressed the matter of required levels of proof.

On the philosophical side, the old tools of critical thinking will still serve against deepfakes, although awareness of the technology will be essential. For example, if a video appears of Taylor Swift killing cats, then it would be reasonable to conclude that this is a deepfake. Whatever one might think of Taylor Swift, she does not seem to be a cat killer. There is also the general point that deepfakes do not create physical evidence and Life Model Decoys (probably) do not exist. Naturally, fully developing the critical thinking skills needed to deal with deepfakes goes far beyond the scope of this essay.

On the technological side, there will be an ongoing arms race between software used to create deepfakes and software used to detect them. One concern is that nations will be working hard to both defeat and create deepfakes—so there will be plenty of funding for both. Interestingly, there seems to have been little use of deepfake technology in American politics, perhaps because it was judged to be either unnecessary or too risky to use.


When people think of an AI doomsday, they usually envision a Skynet scenario in which Terminators try to exterminate humanity. While this would be among the worst outcomes, our assessment of the dangers of AI needs to consider both probability and the severity of harm. Skynet has low probability and high severity. In fact, we could reduce the probability to zero by the simple expedient of not arming robots. Unfortunately, killbots seem irresistible and profitable so the decision has been made to keep Skynet a non-zero possibility. But we should not be distracted from other doomsdays by the shiny allure of Terminators.

The most likely AI doomsday scenario is what could be called the AI burning bubble. Previous bubbles include the Dot-Com bubble that burst in 2000 and the housing bubble that burst in 2008. In 2022 there was a Bitcoin crash that led to serious concerns “that virtual currency is becoming the largest Ponzi scheme in human history…” Fortunately, that cryptocurrency bubble was relatively small, although there are efforts underway to put it on track to become a bubble big enough to damage the world economy. Unless, of course, an AI bubble bursts first. While AI and crypto are different, they do have some similarities worth considering.

AI might produce a burning bubble, and the burning part refers to environmental damage. Both AI and crypto are incredibly energy intensive. It is estimated that crypto’s energy consumption is 0.4% to 0.9% of annual worldwide energy usage and that crypto mining alone produces about 65 megatons of carbon each year. This is likely to increase, at least until the inevitable bursting of the cryptocurrency bubble. It is believed that AI data centers consume 1% to 2% of the world’s electricity production, and this will almost certainly increase. While there have been some efforts to use renewable energy to power the data centers, their expansion has slowed the planned phaseout of fossil fuels. AI has also become infamous for its water consumption, stressing an already vulnerable system. As the plan is to keep expanding AI, we can expect ever-increasing energy and water consumption along with carbon production. This will accelerate climate change, which will not be good for humanity.

In addition to consuming energy and water, AI also needs hardware to run on. As with cryptocurrency, companies such as NVIDIA have profited from selling hardware for AI. But manufacturing this hardware has an environmental impact, and as it wears out and becomes obsolete it will all become e-waste, most likely ending up in landfills. All of this is bad for humans and the environment.

It can be countered that AI will find a way to solve climate change and one might jokingly say that its recommendation will be to get rid of AI. While certain software has been very useful in addressing climate concerns, it is at best wishful thinking to believe that AI will solve the problem that it is contributing to. It would make more sense to dedicate the billions being pumped into AI to address climate change (and other problems) directly and immediately.

While AI, like crypto, is doing considerable environmental damage, it is also consuming investments. Billions have been poured into AI and there are plans to increase this to trillions. One effect is that financial resources are diverted away from other things, such as repairing America’s failing infrastructure, investing in education, or developing other areas of the economy. While this diverting of resources into a single area raises the usual moral concerns and sayings about eggs and baskets, there is also the economic worry that the bubble is being inflated.

As many have noted, what is happening with AI mirrors past bubbles, most obviously the Dot-Com bubble that burst in 2000. People will, of course, rightfully point out that although the bubble burst, the underlying technology endured, and the internet-based economy is obviously massively profitable. As such, the most likely scenario is that the overvaluation of AI will have a similar outcome. The AI bubble will burst, CEOs will move on to inflating the next bubble, and AI technology will remain, albeit with less hype. On the positive side, the burst might have the effect of reducing energy and water consumption and lowering carbon emissions. But this prediction could be wrong, and the Terminators might get us before the bubble bursts.

 

In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in its old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can give a person the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Supporters and critics of AI claim it will be taking our jobs. If true, this suggests that AI could eliminate the need for certain skills. While people do persist in learning obsolete skills for various reasons (such as for a hobby), it is likely that colleges would eventually stop teaching these “eliminated” skills. Colleges would, almost certainly, be able to adapt. For example, if AI replaced only a set of programming skills or a limited number of skills in the medical or legal professions, then degree programs would adjust their courses and curricula. This sort of adaptation is nothing new; colleges have been adapting to change since the beginning of higher education, whether the changes are caused by technology or politics. As examples, universities usually do not teach obsolete programming languages, and state schools change their curriculum in response to changes imposed by state legislatures.

If AI fulfills its promise (or threat) of replacing entire professions, then this could eliminate college programs aimed at educating humans for those professions. Such eliminations would have a significant impact on colleges and could result in the elimination of degrees and perhaps even entire departments. But there is the question of whether AI will be successful enough to eliminate entire professions. While AI might be able to eliminate some programming jobs or legal jobs, it seems unlikely that it will be able to eliminate the professions of computer programmer or lawyer. But it might be able to change these professions so much that colleges are impacted. For example, if AI radically reduces the number of programmers or lawyers needed, then some colleges might be forced to eliminate departments and degrees because there will not be enough students to sustain them.

These scenarios are not mutually exclusive, and AI could eliminate some jobs in a profession without eliminating the entire profession while it also eliminates some professions entirely. While this could have a significant impact on colleges, many of them would survive these changes. Human students would, if they could still afford college in this new AI economy, presumably switch to other majors and professions. If new jobs and professions become available, then colleges could adapt to these, offering new degrees and courses. But if AI, as some fear, eliminates significantly more jobs than it creates, then this would be detrimental to both workers and colleges as it makes them increasingly irrelevant to the economy.

In dystopian sci-fi economic scenarios, AI eliminates so many jobs that most humans are forced to live in poverty while the AI-owning elites live in luxury. If this scenario comes to pass, some elite colleges might continue to exist while most others would be eliminated because of the lack of students. While this scenario is unlikely, history shows that economies can be ruined and hence the dystopian scenario cannot be simply dismissed.

In utopian sci-fi economic scenarios, AI eliminates jobs that people do not want to do while also freeing humans from poverty, hardship, and drudgery. In such a world of abundance, colleges would most likely thrive as people would have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

But it is also worth considering that this utopia might devolve into a dystopia in which humans slide into sloth (such as in WALL-E) or are otherwise harmed by having machines do everything for them, which is something Isaac Asimov and other sci-fi writers have considered.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness would prove disastrous for the academy.

 

As noted in the previous essay, it can be argued that the likeness of a dead celebrity is a commodity that can be owned and used as the new owner sees fit. On this view, the likeness of a celebrity would be analogous to their works (such as films or music) and its financial exploitation would be no more problematic than selling movies featuring actors who are now dead but were alive during the filming. This view can be countered by arguing that there is a morally relevant difference between putting a re-animation of a celebrity in a new movie and selling movies they starred in while alive.

As with any analogy, one way to counter this argument is to find a relevant difference that weakens the comparison. One relevant difference is that the celebrity (presumably) consented to participate in their past works, but they did not consent to the use of their re-animation. If the celebrity did not consent to the past works or did consent to being re-animated, then things would be different. Assuming the celebrity did not agree to being re-animated, their re-animation is being “forced” to create new performances without their agreement, which raises moral concerns.

Another, more interesting, relevant difference is that the re-animation can be seen as a very basic virtual person. While current re-animations lack the qualities required to be a person, this virtual personhood can be used as the foundation for a moral argument against the creation and exploitation of re-animations. Before presenting that argument, I will consider arguments that focus on the actual person that was (or perhaps is) the celebrity.

One approach is to argue that a celebrity has rights after death and their re-animation cannot be used in this manner without their permission. Since they are dead, their permission cannot be given, and hence the re-animation is morally wrong because it would exploit the celebrity without their consent.

But, if the celebrity does not exist after death, then they would seem to lack moral status (since what does not exist cannot have a moral status) and hence cannot be wronged. Since they no longer exist to have rights, the owner of the likeness is free to exploit it, even with a re-animation.

The obvious problem is that there is no definite proof for or against an afterlife, although people often have faith in its existence (or non-existence). So, basing the rights of the dead on their continued existence would require metaphysical speculation. But denying the dead rights based on the metaphysical assumption that they do not exist would also be problematic, for it would require confidence in an area where knowledge is lacking. As such, it would be preferable to avoid basing the ethics of the matter on metaphysical speculation.

One approach that does not require that the dead have any moral status of their own is to argue that people should show respect to the person that was by not exploiting them via re-animation. Re-animating a dead person and sending it out to perform without their consent is, as noted in the first essay, a bit like using Animate Dead to create a zombie from the remains of a dead person. This is not a good thing to do and, by analogy, animating a technological zombie would seem morally dubious at best. For those who like their analogies free of D&D, one could draw an analogy to desecrating a corpse or gravesite: even though a dead person can no longer be harmed, it is still something that should not be done.

A final approach is to build on the idea that while the re-animation is clearly not a person, it can be seen as a simplistic virtual person and perhaps this is enough to make this action wrong. I will address this argument in the final essay of the series.

 

In the role-playing game Dungeons & Dragons, the spell Animate Dead allows the caster to re-animate a corpse as an undead slave. This sort of necromancy is generally considered evil and is avoided by good creatures. While the entertainment industry lacks true necromancers, some years ago it developed a technological Animate Dead in the form of the celebrity hologram. While this form of necromancy does not animate the corpse of a dead celebrity, it re-creates their body and makes it dance, sing, and speak at the will of its masters. Tupac Shakur is probably the best-known victim of this dark art of light, and there were plans to re-animate Amy Winehouse. As should be expected, AI is now being used to re-animate dead actors. Ian Holm, who played the android Ash in Alien, was re-animated for a role in Alien: Romulus. While AI technology is different from the older holographic technology, they are similar enough in their function to allow for a combined moral assessment.

One relevant factor in assessing the ethics of this matter is how the re-animations are used and what they are made to do. Consider, for example, the holographic Amy Winehouse. If the hologram is used to re-create a concert she recorded, then this is morally on par with showing a video of the concert. The use of a hologram would seem to be just a modification of the medium, such as creating a VR version of the concert. Using a hologram in this manner seems morally fine. But the ethics can become more complicated if the re-animation is not simply a change of the medium of presentation.

One concern is the ethics of making the re-animation perform in new ways. That is, the re-animation is not merely re-enacting what the original did in a way analogous to a recording but being used to create new performances. This is of special concern if the re-animation is made to perform with other performers (living or dead), to perform at specific venues (such as a political event), or to endorse or condemn products, ideas or people.

If, prior to death, the celebrity worked out a contract specifying how their re-animation can be used, then this would lay to rest some moral concerns. After all, this use of the re-animation would be with permission and no more problematic than if they did those actions while alive. If re-animations become common, presumably such contracts will become a standard part of the entertainment industry.

If a celebrity did not specify how their re-animation should be used, then there could be moral problems. To illustrate, a celebrity might have been against this use of holograms (as Prince was), a celebrity might have disliked the other performers that their image is now forced to sing and dance with, or a celebrity might have loathed a product, idea or people that their re-animation is being forced to endorse. One approach to this matter is to use the guideline of legal ownership of the rights to a celebrity’s works and likeness.

When a celebrity dies, the legal rights to their works and likeness go to whoever is legally specified to receive them. This person or business then has the right to exploit the works and likeness for their gain. For example, Disney can keep making money off the Star Wars films featuring Carrie Fisher, though she died in 2016. On this view, the likeness of a celebrity is a commodity that can be owned, bought and sold. While a living celebrity can disagree with the usage of their likeness, after death their likeness is controlled by the owner who can use it as they wish (assuming the celebrity did not set restrictions). This is analogous to the use of any property whose ownership is inherited. It can thus be contended that there should be no special moral exception that forbids the owner of a dead celebrity’s likeness from monetizing it. That said, the next essay in the series will explore reasons as to why the likeness of a celebrity is morally different from other commodities.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, it was feared that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetency resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will impact these factors and concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making the creation and editing of images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills.  Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.