Some years ago, researchers created “minibrains,” more formally known as cerebral organoids. As the name implies, a minibrain is a pea-sized collection of a few million human neural cells; for comparison, a human brain contains about 86 billion neurons. These minibrains are usually created by transforming human skin cells. As one might imagine, minibrains raise ethical concerns.

One concern is that as minibrains are human neural masses, they could develop consciousness. Since moral status often rests on mental attributes, this means they might someday possess a moral status, perhaps even near-person status. The epistemic challenge is determining if they achieve that status. This is a version of the philosophical problem of other minds: how do I know that other entities feel or think as I do? This problem also applies to one of my favorite foods from my home state of Maine, the lobster.

Thanks to thinkers like Descartes, animals are often regarded as biological machines that lack minds. While scientists now see higher animals as capable of feeling and even thinking, lobsters are often seen as biological automatons that neither feel nor suffer. However, some people do think lobsters can suffer and Switzerland banned the boiling of live lobsters. The moral justification is that boiling lobsters alive is unnecessary suffering. Oddly enough, few take the next obvious moral step: if boiling them is wrong, then killing and eating them would also seem to be wrong.

I think that while lobsters are not mentally complex, they do feel pain. The reason for this is the same reason I think you feel pain: an argument by analogy. I know that I, as a living thing, feel pain and dislike it. I note that you, as another human, are like me. So, I infer that you also feel pain and probably dislike it. While lobsters are different from me, they do have some similarities: they are alive, they interact with their environment, they have nerves and so on. As such, they probably feel pain. It must be noted that there are those who deny that humans think or feel—so denying this of lobsters is not odd. Naturally, the relative simplicity of lobsters does suggest that they do not have a depth of feeling; but pain is probably one of the simplest feelings.

Moral concerns about minibrains and lobsters arise from their alleged ability (or potential) to feel. The epistemic concern is how to know this. As should come as no surprise, the same concerns arise about fetuses in the context of the abortion debate: the epistemic and moral problem is knowing when the zygote gains moral status. Obviously, if lobsters can have moral status, then fetuses would also get it rather early in the development cycle—at least at the point when they have a nervous system at least as complex as a lobster’s. As one would expect, people are often inconsistent in their moral views, and some who might balk at eating a lobster might accept abortion, while someone horrified by abortion might happily devour the flesh of a slain baby cow.

In the case of minibrains, scientists want to use them for research. This can be morally justified on utilitarian grounds: the possible suffering of the minibrains can be outweighed by the gains to science. In the case of lobsters, those who eat them would argue that their enjoyment in eating lobster outweighs the lobster’s suffering.

In both these cases, it is a matter of competing interests: the minibrains and lobsters would prefer to avoid suffering and death, while the humans want to experiment on them or eat them. The same reasoning also applies to abortion: there are competing interests between the woman who wishes to have an abortion and the possible interest of the fetus in not dying. While it can be contended that the fetus has no idea of interests, the same can be said of minibrains and lobsters. As such, the same moral reasoning can be applied in all three cases: it is competition between the interests of a fully developed person and an entity that is significantly inferior in capabilities. As such, the ethics of the minibrains seems to have already been addressed in terms of the ethics of how we treat animals and the ethics of abortion. This, of course, means that there is no resolution—but this is as expected.

 

 

While exoskeletons are being developed primarily for military, medical and commercial applications, they have obvious potential for use in play. For example, new sports might be created in which athletes wear exoskeletons to enable greater performance.

From a moral standpoint, the use of exoskeletons in sports designed for them raises no special issues. After all, the creation of motorized sports is as old as the motor and this territory is well known. As such, exoskeletons in sports designed for them are no different from the use of racing cars or motorcycles. In fact, exoskeleton racing is likely to be one of the first exoskeleton sports.

It is worth noting that exoskeletons could be added to existing sports such as cross-country running, track or football. But the idea of using mechanized technology in such sports doesn’t really break new ground. To illustrate, having runners compete while wearing exoskeletons would be like having bicyclists replace their pedaled bikes with electric bikes. This would simply create a new, mechanized sport.

Adding exoskeletons to existing sports could create safety problems. For example, American football with exoskeletons could be lethal. As another example, collisions between athletes running around a track in exoskeletons could result in serious injuries. However, these matters do not create new ethical territory. Issues of equipment and safety are old concerns and can be resolved for exoskeletons, most likely after some terrible accidents, using established moral principles about safe competition. For example, there are already principles governing the frequency and severity of tolerable injuries in sports that would also apply to exosports. Naturally, each sport does tend to have different levels of what is considered tolerable (football versus basketball, for example), so the specific details for these new sports will need to be sorted out. Another area of moral concern is the use of exoskeletons in cheating.

While current exoskeleton technology would be impossible to hide during athletic competitions like running and biking, future exoskeletons could be hidden under clothing and used to cheat. While this would create a new way to cheat, it would not require the creation of any new ethical theory about cheating. After all, what matters most morally in cheating is the cheating, not the specific means used. As such, whether an athlete is getting an unfair edge with an exoskeleton, blood doping, performance-enhancing drugs, or cutting the course, they are cheating and hence doing something wrong.

While exoskeletons have yet to be used to cheat, there is already an established concept of “technological fraud” in competition. The first apparent case appeared a few years ago, when a cyclist was accused of using a bike with a motor concealed in its frame. Since people had speculated about this possibility, there were already terms for it: “mechanical doping” and “bike doping.” Using a hidden exoskeleton would be analogous to using a hidden motor on a bike. The only difference is that the hidden motor directly enhances the bike while an exoskeleton would enhance the rider. But there is no moral difference between enhancing the bike directly and enhancing the athlete. As such, the ethics of cheating with an exoskeleton are already settled, even before exo-cheating has occurred.

One final, somewhat sci-fi, concern is that the use of exoskeletons will weaken people. While a person must move to use an exoskeleton, the ones used for play will enhance a person’s abilities and do much of the work for them. Researchers are already talking about running at 20 MPH through the woods for hours without getting tired. While I admit that this sounds fun (aside from colliding with trees), a worry is that this would be more like riding a motorcycle (which does all the work) than riding a bike (which augments the effort).

An obvious reply is to point out that I myself made the obvious comparison to riding a motorcycle. The use of an exoskeleton would not be fundamentally different from riding a motorcycle through the woods and there is nothing wrong with that (on designated trails). This is a reasonable point and I have no more objection to people exorunning (in designated areas) for entertainment than I do to people riding motorcycles (in designated areas). However, I do worry that exoskeletons could make things too easy for people.

While things like mobility scooters do exist, an exoskeleton would go beyond them. After all, a full body exoskeleton would not only provide easy mobility, but also do the work for the person’s arms. While this would be a blessing for a person with a serious medical condition, it would enable otherwise healthy people to avoid even the small amount of exercise most people cannot avoid today (like walking from their car to work or a store).

The sensible reply to my concern is to point out that most people do not use mobility scooters to get around when they do not actually need them, so the same would hold true of exoskeletons (assuming they become as cheap as mobility scooters). However, given the impact of automobiles and other technology on fitness levels, it is worth having some concern about the harmful effects of exoskeletons making things too easy. Unlike a car, a person could wear their exoskeleton into their workplace or the store, avoiding all the need to walk on their own. While the movie WALL-E did not have exoskeletons, it did show the perils of technology that makes things far too easy for humans and it is worth keeping that in mind as a (fictional) cautionary tale.

 

Most players agree that playing tabletop role playing games (TTRPGs) should be fun. But at the table, and away from the table, people are subject to the false consensus effect. This is a cognitive bias that causes people to “see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances.” In the case of D&D, a player will tend to think that everyone agrees with their view of fun. For example, a player who enjoys spending hours in deep role-playing encounters will assume everyone else is enjoying these encounters. But fun, at the table and away, is subjective. For example, I ran a 15k race yesterday and it was fun. Or, more accurately, it was fun for me. Football games are being televised today, but being forced to endure watching football is not fun. Or, as I should say, not fun for me.

Over the years, people have told me I am wrong about running being fun and watching football being awful. They are right and wrong: running is fun for me, not for them. Watching football is fun for them, awful for me. This is because fun, like food preferences, is subjective. What is fun is fun to you, but probably not everyone. What is fun for other people might not be fun for you. As with whether you like plantains or not, there is no right or wrong. There is only like or dislike.

While this might seem too obvious to be worth mentioning, the false consensus effect means that while we know that people enjoy different things, we still tend to assume that everyone at our table will share our view of what is fun in the game. For example, if you like minimal role-playing and maximum combat, you will tend to assume that is what everyone enjoys, and you probably wonder why other players sometimes won’t stop talking to the non-player characters (NPCs) when they should be killing them. This is why we need to remind ourselves that the other people at the table might have a different idea of what is fun, and we should make some effort to find out what that is. It is also worth mentioning that while there are selfish players who are at the table solely for their own fun, a person can appear selfish because they honestly think that everyone else is also enjoying their way of playing. For example, if a player loves going off on lengthy solo role-playing encounters in the game, they might think everyone else is enjoying this as an audience to their role playing. But the other players are probably bored and spend that time on their phones looking at memes. So, it is good to check in with the other people at the table and see if they are also having fun.

It seems reasonable to accept that players have a right to have fun at the table, and this is the approach I try to take as a Dungeon Master (DM) and as a player. One should also remember that, just as with other rights, other people have the right to have fun as well. So, it is not that only I have the right to have fun; we all have that right. And, as with other rights, such as the right to free expression, this means that I should and must accept limits on my right so that other people can enjoy their right. For example, if I really enjoy combat, but I know that other players enjoy role-playing encounters in which they talk to the monsters, I should curb my murderous inclinations from time to time, so they won’t have to rely on speak with dead to talk to the monsters.

While, as noted above, fun is subjective, there are some things that are not fun for anyone. One example of this is being unable to act on your turn in D&D. At most tables, a player can spend minutes of time trying to sort out what they will do in seconds of combat. This might be because the situation is dire and complex, and the wrong choice could be the doom of the party. Or it could be because they do not know their character very well, they haven’t read up on their spells, or they were watching football when it wasn’t their turn. This means that if a player is unable to act on their turn, they might have to wait fifteen minutes or longer before they can do anything. While it can be fun to observe the combat, many players tend to check out when they cannot act. Phones, of course, have made this easy to do.

Since I know this, in my role as DM I try to ensure that a player is always able to take their turn. One key part of this involves being very restrained when deciding whether the monsters use spells or abilities that can “steal” a player’s turn (or turns). Spells like Sleep, Tasha’s Hideous Laughter, Hold Person, Hypnotic Pattern, and Banishment can take a player character (PC) out of the action, leaving the player with nothing to do. Abilities that stun, paralyze, petrify or control a PC can also steal a player’s turn. And, obviously enough, knocking a PC to zero (or killing them) can steal a turn.

When judging whether to use such abilities, I make a quick estimate of how likely it is that it will steal a player’s turn. This involves some obvious things, like checking how likely it is the PC will be affected, but also less obvious things, like how likely it is that another player will do something to save the PC if they are affected. If you, as a DM, know your party works together well, then you can use such things more often. For example, if you know that if the fighter gets held, the party will try to rescue them, then you can have the enemy cleric hit the fighter with Hold Person, since they’ll probably not miss their turn, and the other players will have fun rescuing them. As with running the game in general, it is a balancing act in which the DM tries to present a meaningful, but fun, challenge. If you know the party can deal with “lose a turn” abilities and spells, then they can be safely used without risking killing a player’s fun. But if the party cannot or will not counter them, then it is best to avoid them. After all, the point of playing is to have fun and someone who is not playing is probably not having fun.
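For the curious, the quick estimate described above can be sketched in code. This is an illustrative back-of-the-envelope model, not anything from a rulebook: it assumes the familiar d20-plus-bonus-versus-DC saving throw, and the rescue chance is simply a number the DM supplies based on their knowledge of the party.

```python
# Illustrative sketch of the DM's "will this steal a turn?" estimate.
# Assumes d20 + save bonus vs. DC saves; all inputs are the DM's guesses.

def fail_chance(save_bonus: int, dc: int) -> float:
    """Chance a d20 + save_bonus roll falls short of the DC."""
    failures = sum(1 for roll in range(1, 21) if roll + save_bonus < dc)
    return failures / 20

def expected_lost_turns(save_bonus: int, dc: int, rescue_chance: float) -> float:
    """Rough expected turns a PC loses to a 'save or sit out' effect.

    Each round the PC escapes by making the repeated save or by being
    rescued by an ally, so lost turns are roughly geometric in the
    per-round escape chance.
    """
    p_fail = fail_chance(save_bonus, dc)
    # Escape each round unless the save fails AND no ally rescues.
    p_escape = 1 - p_fail * (1 - rescue_chance)
    return p_fail * (1 / p_escape)

# A fighter with a +3 save against a DC 14 Hold Person fails half the
# time, but a watchful party keeps the expected lost turns well below one.
print(fail_chance(3, 14))               # 0.5
print(expected_lost_turns(3, 14, 0.5))  # about 0.67
```

Nothing here is a rule; it just makes the balancing act explicit: the same spell that looks brutal against an isolated PC costs only a fraction of a turn on average when the party reliably rescues each other.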

 

An exoskeleton is a powered frame that attaches to the body to provide support and strength. The movie Live, Die Repeat: Edge of Tomorrow featured combat exoskeletons. These fictional devices allow soldiers to run faster and longer while carrying heavier loads, giving them an advantage in combat. There are also peaceful applications of the technology, such as allowing people with injuries to walk and augmenting human abilities for the workplace. For those concerned with the fine details of nerdiness, exoskeletons should not be confused with cybernetic parts (which fully replace body parts, such as limbs or eyes) or powered armor (like that used in the novel Starship Troopers and by Iron Man).

As with any new technology, the development of exoskeletons raises ethical questions. Fortunately, humans have been using technological enhancements since we started being human, so this is familiar territory. Noel Sharkey raises one moral concern, namely that “You could have exoskeletons on building sites that would help people not get so physically tired, but working longer would make you mentally tired and we don’t have a means of stopping that.” His proposed solution is an exoskeleton that switches off after six hours.

A similar problem arose with earlier technology that reduced the physical fatigue of working. For example, the development of early factory and farming equipment allowed people to work longer hours and more efficiently. Modern technology has made such work even easier. For example, a worker can drive a high-tech farm combine as easily as driving a car. Closer analogies to exoskeletons include forklifts and cranes: a person can operate those to easily lift heavy loads that would be exhausting or impossible to lift with mere muscles. So, Sharkey’s concern would also apply to the forklift: a person could drive one around for six hours and not be very tired physically yet become mentally tired. As such, whatever moral solutions are applicable to the problem of forklifts also apply to exoskeletons.

Mental overwork is not a problem limited to exoskeletons or technology in general. After all, many jobs are not very physically tiring and people can keep writing legal briefs, teaching classes and managing workers to the point of mental exhaustion without being physically exhausted.

 For those who consider such overwork to be undesirable, the solution lies in workplace regulation or the (always vain) hope that employers will do the right thing. Without regulations protecting workers from being overworked, in the future employers would presumably either buy exoskeletons without timers or develop work-arounds, such as resetting timers.

Also, exoskeletons themselves do not get tired, so putting a timer on an exoskeleton would be like putting a use timer on a forklift. Doing so would reduce the value of the equipment, since it could not be used for multiple shifts. As such, that sort of timer system would be unfair to the employers in that they would be paying for equipment that should be usable round the clock but would instead be limited. An easy fix would be a system linking the timer to the worker: the exoskeleton timer would reset when equipped by a new worker. But this raises questions about building work limits into hardware rather than setting them through regulation and policy. In any case, while exoskeletons would be new in the workplace, they add nothing new to the moral landscape. Technology that allows workers to be mentally overworked while not being physically overworked is nothing new, and existing solutions can be applied if exoskeletons become part of the workplace, just as was done when forklifts were introduced.

In a tragic aircraft accident, sixty-seven people died. In response to past tragedies, presidents ranging from Reagan to Obama have endeavored to unite and comfort the American people. Trump intentionally decided to take a different approach and used the tragedy as an opportunity to advance his anti-DEI agenda.

While Trump acknowledged that the cause of the crash was unknown, he quickly blamed DEI. When a reporter asked him how he knew this, he asserted it was because he has common sense. He also claimed that the crash was the fault of Biden and Obama and that it might have been caused by hiring people with disabilities.

In one sense, Trump is right to blame past administrations. The federal government has allowed the quality of air traffic safety to decline, and one might trace this back to at least Reagan, who famously fired the striking air traffic controllers. As with many areas concerned with the safety of the American people, there is a shortage of staff, chronic underfunding and a problem with obsolete technology. Past administrations (including Trump’s) and Congress bear responsibility for this. So, I agree with Trump that past leaders bear some of the blame for the tragedy. But I disagree with his false DEI claim.

As is always the case, rational people spend time and energy trying to debunk and refute Trump’s false claims. While this should be done, there is the question of whether this has any practical effect in terms of changing minds. At this point, it seems certain that America is firmly divided between those who reject Trump’s lies and those who accept them or do not care that he is lying. But I’m all about the desperate fight against impossible odds, so here we go.

Trump’s claim that the crash was caused by diversity hires of people with disabilities is easy to debunk. The FAA has strict requirements for air traffic controllers and someone who was incapable of doing the job would not be hired. After all, being an air traffic controller is not like being a member of Trump’s cabinet. As others will point out, this baseless attack on people with disabilities echoes the Nazis.  Trump supporters will presumably respond to this criticism by saying that “liberals” always compare Trump to the Nazis. While some comparisons are overblown, there is a reason why this occurs so often. And that is because Trump and his henchmen are often at least Nazi adjacent. Proud American Nazis know this is true and wish that their fellows had more courage. So, the questions “why do the libs always compare Trump and his henchmen to Nazis?” and “why do Nazis like Trump and his henchmen?” have the same answer. Meanwhile, the “normies” are baffled and the mainstream media generates think pieces debating the obvious. But what about Trump’s DEI claims?

One problem with engaging with these DEI claims is that the engagement provides them with a degree of legitimacy they do not deserve. Doing so can create the impression that there is a meaningful debate with two equally plausible sides. As many others have pointed out, when Trump and his ilk talk about DEI, this is just a dog whistle to the racists and sexists. These bigots know exactly what he means as do the anti-racists; but they disagree about whether bigotry is good. As to why Trump and his ilk bother with dog whistles, there seem to be two reasons.

One is that being openly racist or sexist is seen as crude and impolite. Polite bigots use dog whistles in public, reserving their open racism and sexism for private conversations. People can also convince themselves that they are good because they are not openly using racist or sexist terms.

The other is that there are non-bigots who cannot hear the dog whistle and believe, in good faith ignorance, that DEI might be the cause of these problems. If pressed, they will deny being racist or sexist and will claim that DEI might arise from good intentions but is bad because it puts incompetent people into jobs they are not qualified for. And hence things go wrong. If they are asked why these people are assumed to be incompetent and whether women, minorities, old people, and people with disabilities can be competent, they will usually grow uncomfortable and want to change the topic. These people are still in play. While the bigots want to recruit them using dog whistles to onboard them into bigotry, they will settle for them remaining cooperatively neutral. If a “normie” expresses doubt about charges of racism or sexism or defends attacks on DEI, this provides cover and support for the bigots, and they are happy to exploit this cover. But “normies” are potential recruits to the side of good, since they have a mild dislike of racism and sexism that can be appealed to. One challenge is convincing them to hear the dog whistles for what they are. This is difficult, since it requires acknowledging their own past complicity in racism and sexism while also facing uncomfortable truths about politicians and pundits they might like and support.

The danger in trying to win over the “normies” is that one must engage with the DEI claims made by Trump and his fellows, which (as mentioned above) runs the risk of lending them legitimacy by creating the appearance that there is something to debate. But it seems that the only way to reveal the truth is to engage with the lies, as risky as that might be.

As a philosopher, my preference is to use good logic and plausible claims when arguing. After all, the goal is truth, and this is the correct approach. However, logic is awful as a means of persuasion and engaging people with facts is challenging because for every fact there seems to be a thousand appealing lies. But there might be some people who can be persuaded by the fact that DEI is not to blame for the crash nor is it to blame for the other things, such as wildfires, that the right likes to blame on it. That said, the core of the fight is one of values.

For someone to believe that DEI results in the hiring of incompetent people, they must believe that white, straight men have a monopoly on competence and that everyone else is inferior to a degree that they are unsuitable for many jobs. So, one way to engage with a possible “normie” about DEI is to ask them what they have in their hearts: do they feel that only straight, white men are truly competent and that everyone else is inferior and suitable only for race and gender “appropriate” roles? If they do not find this bigotry in their hearts, there is hope for them.

While I sometimes get incredulous stares when I say this, hunters are usually advocates of conservation. Cynical folks might think this is so they can keep killing animals. This is obviously at least part of their motivation: hunters enjoy hunting and without animals, there is no hunting. However, it would be unfair to claim that hunters are driven only by a selfish desire to hunt.  I grew up hunting and have met many hunters who are concerned about conservation in general and not just for their own interest in hunting animals. While the true motives of hunters are relevant to assessing their character, the ethics of hunting for conservation is another issue. This issue is perhaps best addressed on utilitarian grounds: does allowing the hunting of animals and charging for such things as hunting licenses create more good or evil consequences?

In the United States, this sort of hunting is morally acceptable. After all, hunters of all political views support preserving public lands and willingly pay fees that they know help fund conservation efforts. Human hunters help check game populations, especially deer, that would suffer from the harms of overpopulation (such as starvation). That said, there are counterarguments against this view, such as pointing out that human hunters wiped out many predators that kept deer populations in check and that it would be preferable to restore these animals than rely on humans.

More controversial than game hunting is trophy hunting. While all hunters can take trophies, trophy hunting is aimed at acquiring a trophy, such as a head, tusks, or hide. The goal in a trophy hunt is the prestige of the kill, rather than meat or the challenge of the hunt. Of special concern is trophy hunting in Africa.

A key concern about such hunts is that the animals tend to be at risk or even endangered, such as big cats, elephants and rhinos. Trophy hunting in Africa is mostly the domain of the wealthy, since foreigners must pay to hunt their desired animal and be able to afford the cost of travel and the hunt. This money, so the argument in favor of trophy hunting goes, is used to support conservation efforts and to give locals an incentive to participate in them.

From a moral standpoint, this argument can be cast in utilitarian terms: while the killing of rare or endangered animals is a negative consequence, this is offset by the money used for conservation and the economic gain to the country. The moral balancing act involves weighing the dead animals against the good that is supposed to arise from their deaths. This takes us to the factual matter of money.

One point of practical concern is corruption: does the money go to conservation and to the locals, or does it get directed elsewhere, such as the bank accounts of corrupt officials? If the money does not actually go to conservation, then the conservation argument fails.

Another point of practical concern is whether the money is enough to support the conservation efforts. If the money gained does not conserve more animals than the hunters kill, then the conservation argument would also fail. This raises the question of whether there are enough animals to kill and enough left over to conserve. In the case of abundant species, the answer could easily be yes. In the case of endangered species, killing them to save them has less plausibility.

In addition to the utilitarian calculation that weighs the dead animals against the alleged benefits, there is also the worry about the ethics of trophy hunting itself, perhaps in the context of a different ethical theory. For example, a deontologist like Kant might contend that killing animals for trophies would be wrong regardless of the allegedly good consequences. Virtue theorists might, as another example, take issue with the impact of such trophy hunting on the person’s character. After all, the way many trophy hunts are conducted involves people other than the “hunter” doing the actual hunting. The hunter just pulls the trigger once their shot is lined up for them. As such, it is not really trophy hunting for the “hunter” and is better described as trophy shooting.

To use an analogy, imagine a rich person hires a team to play basketball for him. When the players get a free throw, he marches out onto the court to take the shot. This is playing basketball in the same sense that trophy hunting is hunting. That is to say, just barely.  

 

In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in their old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can allow a person to have the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Supporters and critics of AI claim it will be taking our jobs. If true, this suggests that AI could eliminate the need for certain skills. While people do persist in learning obsolete skills for various reasons (such as for a hobby), it is likely that colleges would eventually stop teaching these “eliminated” skills. Colleges would, almost certainly, be able to adapt. For example, if AI replaced only a set of programming skills or a limited number of skills in the medical or legal professions, then degree programs would adjust their courses and curricula. This sort of adaptation is nothing new: colleges have been adapting to changes, whether caused by technology or politics, since the beginning of higher education. As examples, universities usually do not teach obsolete programming languages, and state schools change their curricula in response to changes imposed by state legislatures.

If AI fulfils its promise (or threat) of replacing entire professions, then this could eliminate college programs aimed at educating humans for those professions. Such eliminations would have a significant impact on colleges and could result in the elimination of degrees and perhaps even entire departments. But there is the question of whether AI will be successful enough to eliminate entire professions. While AI might be able to eliminate some programming jobs or legal jobs, it seems unlikely that it will be able to eliminate the professions of computer programmer or lawyer. But it might be able to change these professions so much that colleges are impacted. For example, if AI radically reduces the number of programmers or lawyers needed, then some colleges might be forced to eliminate departments and degrees because there will not be enough students to sustain them.

These scenarios are not mutually exclusive: AI could eliminate some jobs in a profession without eliminating the entire profession while also eliminating some professions entirely. While this could have a significant impact on colleges, many of them would survive these changes. Human students would, if they could still afford college in this new AI economy, presumably switch to other majors and professions. If new jobs and professions become available, then colleges could adapt to these, offering new degrees and courses. But if AI, as some fear, eliminates significantly more jobs than it creates, then this would be detrimental to both workers and colleges, as it would make them increasingly irrelevant to the economy.

In dystopian sci-fi economic scenarios, AI eliminates so many jobs that most humans are forced to live in poverty while the AI-owning elites live in luxury. If this scenario comes to pass, some elite colleges might continue to exist while most others would be eliminated because of the lack of students. While this scenario is unlikely, history shows that economies can be ruined, and hence the dystopian scenario cannot be simply dismissed.

In utopian sci-fi economic scenarios, AI eliminates jobs that people do not want to do while also freeing humans from poverty, hardship, and drudgery. In such a world of abundance, colleges would most likely thrive as people would have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

But it is also worth considering that this utopia might devolve into a dystopia in which humans slide into sloth (as in Wall-E) or are otherwise harmed by having machines do everything for them, which is something Isaac Asimov and other sci-fi writers have considered.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness would prove disastrous for the academy.

 

As noted in the previous essay, it can be argued that the likeness of a dead celebrity is a commodity that can be used as the new owner sees fit. On this view, the likeness of a celebrity would be analogous to their works (such as films or music), and its financial exploitation would be no more problematic than selling movies featuring actors who are now dead but were alive during the filming. This view can be countered by arguing that there is a morally relevant difference between putting a re-animation of a celebrity in a new movie and selling movies they starred in while alive.

As with any analogy, one way to counter this argument is to find a relevant difference that weakens the comparison. One relevant difference is that the celebrity (presumably) consented to participate in their past works, but they did not consent to the use of their re-animation. If the celebrity did not consent to the past works or did consent to being re-animated, then things would be different. Assuming the celebrity did not agree to being re-animated, their re-animation is being “forced” to create new performances without the agreement of the person, which raises moral concerns.

Another, more interesting, relevant difference is that the re-animation can be seen as a very basic virtual person. While current re-animations lack the qualities required to be a person, this virtual personhood can be used as the foundation for a moral argument against the creation and exploitation of re-animations. Before presenting that argument, I will consider arguments that focus on the actual person that was (or perhaps is) the celebrity.

One approach is to argue that a celebrity has rights after death and that their re-animation cannot be used in this manner without their permission. Since they are dead, their permission cannot be given, and hence the re-animation is morally wrong because it would exploit the celebrity without their consent.

But, if the celebrity does not exist after death, then they would seem to lack moral status (since what does not exist cannot have a moral status) and hence cannot be wronged. Since they no longer exist to have rights, the owner of the likeness is free to exploit it, even with a re-animation.

The obvious problem is that there is no definite proof for or against an afterlife, although people do often have faith in its existence (or non-existence). So, basing the rights of the dead on their continued existence would require metaphysical speculation. But denying the dead rights based on the metaphysical assumption they do not exist would also be problematic for it would also require confidence in an area where knowledge is lacking. As such, it would be preferable to avoid basing the ethics of the matter on metaphysical speculation.

One approach that does not require that the dead have any moral status of their own is to argue that people should show respect to the person that was by not exploiting them via re-animation. Re-animating a dead person and sending it out to perform without their consent is, as noted in the first essay, a bit like using Animate Dead to create a zombie from the remains of a dead person. This is not a good thing to do and, by analogy, animating a technological zombie would seem morally dubious at best. For those who like their analogies free of D&D, one could draw an analogy to desecrating a corpse or gravesite: even though a dead person can no longer be harmed, it is still something that should not be done.

A final approach is to build on the idea that while the re-animation is clearly not a person, it can be seen as a simplistic virtual person and perhaps this is enough to make this action wrong. I will address this argument in the final essay of the series.

 

In the role-playing game Dungeons & Dragons, the spell Animate Dead allows the caster to re-animate a corpse as an undead slave. This sort of necromancy is generally considered evil and is avoided by good creatures. While the entertainment industry lacks true necromancers, some years ago it developed a technological Animate Dead in the form of the celebrity hologram. While this form of necromancy does not animate the corpse of a dead celebrity, it re-creates their body and makes it dance, sing, and speak at the will of its masters. Tupac Shakur is probably the best-known victim of this dark art of light, and there were plans to re-animate Amy Winehouse. As should be expected, AI is now being used to re-animate dead actors. Ian Holm, who played the android Ash in Alien, was re-animated for a role in Alien: Romulus. While AI technology is different from the older holographic technology, they are similar enough in their function to allow for a combined moral assessment.

One relevant factor in assessing the ethics of this matter is how the re-animations are used and what they are made to do. Consider, for example, the holographic Amy Winehouse. If the hologram is used to re-create a concert she recorded, then this is morally on par with showing a video of the concert. The use of a hologram would seem to be just a modification of the medium, such as creating a VR version of the concert. Using a hologram in this manner seems morally fine. But the ethics can become more complicated if the re-animation is not simply a change of the medium of presentation.

One concern is the ethics of making the re-animation perform in new ways. That is, the re-animation is not merely re-enacting what the original did in a way analogous to a recording but being used to create new performances. This is of special concern if the re-animation is made to perform with other performers (living or dead), to perform at specific venues (such as a political event), or to endorse or condemn products, ideas or people.

If, prior to death, the celebrity worked out a contract specifying how their re-animation can be used, then this would lay to rest some moral concerns. After all, this use of the re-animation would be with permission and no more problematic than if they did those actions while alive. If re-animations become common, presumably such contracts will become a standard part of the entertainment industry.

If a celebrity did not specify how their re-animation should be used, then there could be moral problems. To illustrate, a celebrity might have been against this use of holograms (as Prince was), a celebrity might have disliked the other performers that their image is now forced to sing and dance with, or a celebrity might have loathed a product, idea or people that their re-animation is being forced to endorse. One approach to this matter is to use the guideline of legal ownership of the rights to a celebrity’s works and likeness.

When a celebrity dies, the legal rights to their works and likeness go to whoever is legally specified to receive them. This person or business then has the right to exploit the works and likeness for their gain. For example, Disney can keep making money off the Star Wars films featuring Carrie Fisher, though she died in 2016. On this view, the likeness of a celebrity is a commodity that can be owned, bought and sold. While a living celebrity can disagree with the usage of their likeness, after death their likeness is controlled by the owner, who can use it as they wish (assuming the celebrity did not set restrictions). This is analogous to the use of any property whose ownership is inherited. It can thus be contended that there should be no special moral exception that forbids the owner of a dead celebrity’s likeness from monetizing it. That said, the next essay in the series will explore reasons as to why the likeness of a celebrity is morally different from other commodities.