In a tragic aircraft accident, sixty-seven people died. In response to past tragedies, presidents ranging from Reagan to Obama have endeavored to unite and comfort the American people. Trump intentionally decided to take a different approach and used the tragedy as an opportunity to advance his anti-DEI agenda.

While Trump acknowledged that the cause of the crash was unknown, he quickly blamed DEI. When a reporter asked him how he knew this, he asserted it was because he has common sense. He also claimed that the crash was the fault of Biden and Obama and that it might have been caused by hiring people with disabilities.

In one sense, Trump is right to blame past administrations. The federal government has allowed the quality of air traffic safety to decline, and one might trace this back at least to Reagan, who famously fired the striking air traffic controllers. As with many areas concerned with the safety of the American people, there is a shortage of staff, chronic underfunding and a problem with obsolete technology. Past administrations (including Trump’s) and Congress bear responsibility for this. So, I agree with Trump that past leaders bear some of the blame for the tragedy. But I disagree with his false DEI claim.

As is always the case, rational people spend time and energy trying to debunk and refute Trump’s false claims. While this should be done, there is the question of whether this has any practical effect in terms of changing minds. At this point, it seems certain that America is firmly divided between those who reject Trump’s lies and those who accept them or do not care that he is lying. But I’m all about the desperate fight against impossible odds, so here we go.

Trump’s claim that the crash was caused by diversity hires of people with disabilities is easy to debunk. The FAA has strict requirements for air traffic controllers and someone who was incapable of doing the job would not be hired. After all, being an air traffic controller is not like being a member of Trump’s cabinet. As others will point out, this baseless attack on people with disabilities echoes the Nazis.  Trump supporters will presumably respond to this criticism by saying that “liberals” always compare Trump to the Nazis. While some comparisons are overblown, there is a reason why this occurs so often. And that is because Trump and his henchmen are often at least Nazi adjacent. Proud American Nazis know this is true and wish that their fellows had more courage. So, the questions “why do the libs always compare Trump and his henchmen to Nazis?” and “why do Nazis like Trump and his henchmen?” have the same answer. Meanwhile, the “normies” are baffled and the mainstream media generates think pieces debating the obvious. But what about Trump’s DEI claims?

One problem with engaging with these DEI claims is that the engagement provides them with a degree of legitimacy they do not deserve. Doing so can create the impression that there is a meaningful debate with two equally plausible sides. As many others have pointed out, when Trump and his ilk talk about DEI, this is just a dog whistle to the racists and sexists. These bigots know exactly what he means as do the anti-racists; but they disagree about whether bigotry is good. As to why Trump and his ilk bother with dog whistles, there seem to be two reasons.

One is that being openly racist or sexist is seen as crude and impolite. Polite bigots use dog whistles in public, reserving their open racism and sexism for private conversations. People can also convince themselves that they are good because they are not openly using racist or sexist terms.

The other is that there are non-bigots who cannot hear the dog whistle and believe, in good-faith ignorance, that DEI might be the cause of these problems. If pressed, they will deny being racist or sexist and will claim that DEI might arise from good intentions but is bad because it puts incompetent people into jobs they are not qualified for. And hence things go wrong. If they are asked why these people are assumed to be incompetent and whether women, minorities, old people, and people with disabilities can be competent, they will usually grow uncomfortable and want to change the topic. These people are still in play. While the bigots want to recruit them, using dog whistles to onboard them into bigotry, they will settle for them remaining cooperatively neutral. If a “normie” expresses doubt about charges of racism or sexism or defends attacks on DEI, this provides cover and support for the bigots, and they are happy to exploit this cover. But “normies” are potential recruits to the side of good, since they have a mild dislike of racism and sexism that can be appealed to. One challenge is convincing them to hear the dog whistles for what they are. This is difficult, since it requires acknowledging their own past complicity in racism and sexism while also facing uncomfortable truths about politicians and pundits they might like and support.

The danger in trying to win over the “normies” is that one must engage with the DEI claims made by Trump and his fellows, which (as mentioned above) runs the risk of lending them legitimacy by creating the appearance that there is something to debate. But it seems that the only way to reveal the truth is to engage with the lies, as risky as that might be.

As a philosopher, my preference is to use good logic and plausible claims when arguing. After all, the goal is truth, and this is the correct approach. However, logic is awful as a means of persuasion and engaging people with facts is challenging because for every fact there seems to be a thousand appealing lies. But there might be some people who can be persuaded by the fact that DEI is not to blame for the crash nor is it to blame for the other things, such as wildfires, that the right likes to blame on it. That said, the core of the fight is one of values.

For someone to believe that DEI results in the hiring of incompetent people, they must believe that white, straight men have a monopoly on competence and that everyone else is inferior to a degree that they are unsuitable for many jobs. So, one way to engage with a possible “normie” about DEI is to ask them what they have in their hearts: do they feel that only straight, white men are truly competent and that everyone else is inferior and suitable only for race and gender “appropriate” roles? If they do not find this bigotry in their hearts, there is hope for them.

While I sometimes get incredulous stares when I say this, hunters are usually advocates of conservation. Cynical folks might think this is so they can keep killing animals. This is obviously at least part of their motivation: hunters enjoy hunting and without animals, there is no hunting. However, it would be unfair to claim that hunters are driven only by a selfish desire to hunt.  I grew up hunting and have met many hunters who are concerned about conservation in general and not just for their own interest in hunting animals. While the true motives of hunters are relevant to assessing their character, the ethics of hunting for conservation is another issue. This issue is perhaps best addressed on utilitarian grounds: does allowing the hunting of animals and charging for such things as hunting licenses create more good or evil consequences?

In the United States, this sort of hunting is morally acceptable. After all, hunters of all political views support preserving public lands and willingly pay fees that they know help fund conservation efforts. Human hunters help check game populations, especially deer, that would suffer from the harms of overpopulation (such as starvation). That said, there are counterarguments against this view, such as pointing out that human hunters wiped out many predators that kept deer populations in check and that it would be preferable to restore these animals than to rely on humans.

More controversial than game hunting is trophy hunting. While all hunters can take trophies, trophy hunting is aimed at acquiring a trophy, such as a head, tusks, or hide. The goal in a trophy hunt is the prestige of the kill, rather than getting meat or meeting the challenge of hunting. Of special concern is trophy hunting in Africa.

A key concern about such hunts is that the animals tend to be at risk or even endangered, such as big cats, elephants and rhinos. Trophy hunting in Africa is mostly the domain of the wealthy, since foreigners must pay to hunt their desired animal and be able to afford the cost of travel and hunting. This money, so the argument in favor of trophy hunting goes, is used to support conservation efforts and to give locals an incentive to participate in them.

From a moral standpoint, this argument can be cast in utilitarian terms: while the killing of rare or endangered animals is a negative consequence, this is offset by the money used for conservation and the economic gain to the country. The moral balancing act involves weighing the dead animals against the good that is supposed to arise from their deaths. This takes us to the factual matter of money.

One point of practical concern is corruption: does the money go to conservation and to the locals, or does it get directed elsewhere, such as the bank accounts of corrupt officials? If the money does not actually go to conservation, then the conservation argument fails.

Another point of practical concern is whether the money is enough to support the conservation efforts. If  the money gained does not conserve more animals than are killed by the hunters, then the conservation argument would also fail. This raises the question of whether there are enough animals to kill and enough left over to conserve. In the case of abundant species, the answer could easily be yes. In the case of endangered species, killing them to save them has less plausibility.

In addition to the utilitarian calculation that weighs the dead animals against the alleged benefits, there is also the worry about the ethics of trophy hunting itself, perhaps in the context of a different ethical theory. For example, a deontologist like Kant might contend that killing animals for trophies would be wrong regardless of the allegedly good consequences. Virtue theorists might, as another example, take issue with the impact of such trophy hunting on the person’s character. After all, the way many trophy hunts are conducted involves people other than the “hunter” doing the actual hunting. The hunter just pulls the trigger once their shot has been lined up for them. As such, it is not really trophy hunting for the “hunter” and is better described as trophy shooting.

To use an analogy, imagine a rich person hires a team to play basketball for him. When the players get a free throw, he marches out onto the court to take the shot. This is playing basketball in the same sense that trophy hunting is hunting. That is to say, just barely.  

 

In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in their old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can allow a person to have the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Supporters and critics of AI claim it will be taking our jobs. If true, this suggests that AI could eliminate the need for certain skills. While people do persist in learning obsolete skills for various reasons (such as for a hobby), it is likely that colleges would eventually stop teaching these “eliminated” skills. Colleges would, almost certainly, be able to adapt. For example, if AI replaced only a set of programming skills or a limited number of skills in the medical or legal professions, then degree programs would adjust their courses and curricula. This sort of adaptation is nothing new; colleges have been adapting to changes, whether caused by technology or politics, since the beginning of higher education. As examples, universities usually do not teach obsolete programming languages, and state schools change their curricula in response to changes imposed by state legislatures.

If AI fulfils its promise (or threat) of replacing entire professions, then this could eliminate college programs aimed at educating humans for those professions. Such eliminations would have a significant impact on colleges and could result in the elimination of degrees and perhaps even entire departments. But there is the question of whether AI will be successful enough to eliminate entire professions. While AI might be able to eliminate some programming jobs or legal jobs, it seems unlikely that it will be able to eliminate the professions of computer programmer or lawyer. But it might be able to change these professions so much that colleges are impacted. For example, if AI radically reduces the number of programmers or lawyers needed, then some colleges might be forced to eliminate departments and degrees because there will not be enough students to sustain them.

These scenarios are not mutually exclusive, and AI could eliminate some jobs in a profession without eliminating the entire profession while it also eliminates some professions entirely. While this could have a significant impact on colleges, many of them would survive these changes. Human students would, if they could still afford college in this new AI economy, presumably switch to other majors and professions. If new jobs and professions become available, then colleges could adapt to these, offering new degrees and courses. But if AI, as some fear, eliminates significantly more jobs than it creates, then this would be detrimental to both workers and colleges as it makes them increasingly irrelevant to the economy.

In dystopian sci-fi economic scenarios, AI eliminates so many jobs that most humans are forced to live in poverty while the AI owning elites live in luxury. If this scenario comes to pass, some elite colleges might continue to exist while most others would be eliminated because of the lack of students. While this scenario is unlikely, history shows that economies can be ruined and hence the dystopian scenario cannot be simply dismissed.

In utopian sci-fi economic scenarios, AI eliminates jobs that people do not want to do while also freeing humans from poverty, hardship, and drudgery. In such a world of abundance, colleges would most likely thrive as people would have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

But it is also worth considering that this utopia might devolve into a dystopia in which humans slide into sloth (as in Wall-E) or are otherwise harmed by having machines do everything for them, which is something Isaac Asimov and other sci-fi writers have considered.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness would prove disastrous for the academy.

 

As noted in the previous essay, it can be argued that the likeness of a dead celebrity is a commodity that can be owned and used as the new owner sees fit. On this view, the likeness of a celebrity would be analogous to their works (such as films or music) and its financial exploitation would be no more problematic than selling movies featuring actors who are now dead but were alive during the filming. This view can be countered by arguing that there is a morally relevant difference between putting a re-animation of a celebrity in a new movie and selling movies they starred in while alive.

As with any analogy, one way to counter this argument is to find a relevant difference that weakens the comparison. One relevant difference is that the celebrity (presumably) consented to participate in their past works, but they did not consent to the use of their re-animation. If the celebrity did not consent to the past works or did consent to being re-animated, then things would be different. Assuming the celebrity did not agree to being re-animated, their re-animation is being “forced” to create new performances without the agreement of the person, which raises moral concerns.

Another, more interesting, relevant difference is that the re-animation can be seen as a very basic virtual person. While current re-animations lack the qualities required to be a person, this can  be used as the foundation for a moral argument against the creation and exploitation of re-animations. Before presenting that argument, I will consider arguments that focus on the actual person that was (or perhaps is) the celebrity.

One approach is to argue that a celebrity has rights after death and their re-animation cannot be used in this manner without their permission. Since they are dead, their permission cannot be given and hence the re-animation is morally wrong because it exploits the celebrity without their consent.

But, if the celebrity does not exist after death, then they would seem to lack moral status (since nothing cannot have a moral status) and hence cannot be wronged. Since they no longer exist to have rights, the owner of the likeness is free to exploit it, even with a re-animation.

The obvious problem is that there is no definite proof for or against an afterlife, although people do often have faith in its existence (or non-existence). So, basing the rights of the dead on their continued existence would require metaphysical speculation. But denying the dead rights based on the metaphysical assumption they do not exist would also be problematic for it would also require confidence in an area where knowledge is lacking. As such, it would be preferable to avoid basing the ethics of the matter on metaphysical speculation.

One approach that does not require that the dead have any moral status of their own is to argue that people should show respect to the person that was by not exploiting them via re-animation. Re-animating a dead person and sending it out to perform without their consent is, as noted in the first essay, a bit like using Animate Dead to create a zombie from the remains of a dead person. This is not a good thing to do and, by analogy, animating a technological zombie would seem morally dubious at best. For those who like their analogies free of D&D, one could draw an analogy to desecrating a corpse or gravesite: even though a dead person can no longer be harmed, it is still something that should not be done.

A final approach is to build on the idea that while the re-animation is clearly not a person, it can be seen as a simplistic virtual person and perhaps this is enough to make this action wrong. I will address this argument in the final essay of the series.

 

In the role-playing game Dungeons & Dragons the spell Animate Dead allows the caster to re-animate a corpse as an undead slave. This sort of necromancy is generally considered evil and is avoided by good creatures. While the entertainment industry lacks true necromancers, some years ago they developed a technological Animate Dead in the form of the celebrity hologram. While this form of necromancy does not animate the corpse of a dead celebrity, it re-creates their body and makes it dance, sing, and speak at the will of its masters. Tupac Shakur is probably the best known victim of this dark art of light and there were plans to re-animate Amy Winehouse. As should be expected, AI is now being used to re-animate dead actors. Ian Holm, who played the android Ash in Alien, was re-animated for a role in Alien: Romulus. While AI technology is different from the older holographic technology, they are similar enough in their function to allow for a combined moral assessment.

One relevant factor in assessing the ethics of this matter is how the re-animations are used and what they are made to do. Consider, for example, the holographic Amy Winehouse. If the hologram is used to re-create a concert she recorded, then this is morally on par with showing a video of the concert. The use of a hologram would seem to be just a modification of the medium, such as creating a VR version of the concert. Using a hologram in this manner seems morally fine. But the ethics can become more complicated if the re-animation is not simply a change of the medium of presentation.

One concern is the ethics of making the re-animation perform in new ways. That is, the re-animation is not merely re-enacting what the original did in a way analogous to a recording but being used to create new performances. This is of special concern if the re-animation is made to perform with other performers (living or dead), to perform at specific venues (such as a political event), or to endorse or condemn products, ideas or people.

If, prior to death, the celebrity worked out a contract specifying how their re-animation can be used, then this would lay to rest some moral concerns. After all, this use of the re-animation would be with permission and no more problematic than if they did those actions while alive. If re-animations become common, presumably such contracts will become a standard part of the entertainment industry.

If a celebrity did not specify how their re-animation should be used, then there could be moral problems. To illustrate, a celebrity might have been against this use of holograms (as Prince was), a celebrity might have disliked the other performers that their image is now forced to sing and dance with, or a celebrity might have loathed a product, idea or people that their re-animation is being forced to endorse. One approach to this matter is to use the guideline of legal ownership of the rights to a celebrity’s works and likeness.

When a celebrity dies, the legal rights to their works and likeness go to whoever is legally specified to receive them. This person or business then has the right to exploit the works and likeness for their gain. For example, Disney can keep making money off the Star Wars films featuring Carrie Fisher, though she died in 2016. On this view, the likeness of a celebrity is a commodity that can be owned, bought and sold. While a living celebrity can disagree with the usage of their likeness, after death their likeness is controlled by the owner, who can use it as they wish (assuming the celebrity did not set restrictions). This is analogous to the use of any property whose ownership is inherited. It can thus be contended that there should be no special moral exception that forbids the owner of a dead celebrity’s likeness from monetizing it. That said, the next essay in the series will explore reasons why the likeness of a celebrity is morally different from other commodities.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, people worried that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetency resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will impact these factors and concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making creating and editing images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills.  Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.

Microsoft’s Copilot AI waits, demon-like, for my summons so that it might replace my words with its own. The temptation is great, but I resist and persist in relying on my own skills. But some warn that others will lack my resolve and that the academy will be destroyed by a deluge of cheating.

Those profiting from AI, including those selling software promising to detect AI cheating, issue dire warnings about the dangers of AI and how it will surpass humans in skills such as writing and taking tests. Because of this, they say, the regulations written by the creators of AI must become law and academic institutions must subscribe to AI detection tools. And, of course, embrace AI themselves. While AI does present both a promise and a threat, there is the question of whether it will destroy the academy as we know it. The first issue I will address is whether AI cheating will “destroy” the academy.

Students, I suspect, have been cheating since the first test, and plagiarism has presumably existed since the invention of language. Before the internet, plagiarism and detecting plagiarism involved finding physical copies of works. As computers and the internet developed, digital plagiarism and detection evolved with them. For example, many faculty use Turnitin, which can detect plagiarism. It seemed that students might be losing the plagiarism arms race, but some worried that easy access to AI would turn the battle in favor of the cheating students. After all, AI makes cheating easy, affordable and harder to detect. For example, large language models allow “plagiarism on demand” by generating new text with each prompt. As I write this, Microsoft has made Copilot part of its Office subscription, and as many colleges provide the Office programs to their students, they are handing students tools for cheating. But has AI caused the predicted flood of cheating?

Determining how many students are cheating is like determining how many people are committing crimes: you only know how many have been caught or have admitted to it; you do not know how many are doing it undetected. Because of this, inferences about how many students are cheating need to be made with caution to avoid the fallacy of overconfident inference from unknown statistics.

One source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 assignments while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate. Turnitin claims it has a false positive rate of 1%. But we need to worry about AI detection software generating false positives and false negatives.
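To put Turnitin's claimed 1% false positive rate in perspective, a rough back-of-the-envelope calculation is useful. The 200 million assignment figure comes from the text above; assuming, purely for illustration, that the 1% rate applies uniformly across all checked assignments, the absolute number of papers that could be wrongly flagged is striking:

```python
# Back-of-the-envelope check on Turnitin's claimed 1% false positive rate.
# The 200 million assignment figure is from the text; treating every
# checked assignment as subject to a possible false flag is a simplifying
# assumption, so this is only a rough upper-bound sketch.
total_assignments = 200_000_000
false_positive_rate = 0.01

expected_false_flags = int(total_assignments * false_positive_rate)
print(f"{expected_false_flags:,}")  # prints 2,000,000
```

Even a seemingly low error rate yields a large absolute number of potential false accusations, which is one reason worries about detector bias and secrecy matter.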

For false positives, one concern is that  “GPT detectors are biased against non-native English writers.” For false negatives, the worry is that AI detectors can be fooled. As the algorithms used in proprietary detection software are kept secret, we do not know what biases and defects they might have. For educators, the “nightmare” scenario is AI generated work that cannot be detected by software and evades traditional means of proving that cheating has occurred.

While I do worry about the use of AI in cheating, I do not think that AI will significantly increase cheating; if the academy has survived older methods of cheating, it will survive this new tool. This is because I think that cheating rates have been, and will remain, consistent. In terms of my anecdotal evidence, I have been a philosophy professor since 1993 and have seen a consistent plagiarism rate of about 10%. When AI cheating became available, I did not see a spike in cheating. Instead, I saw AI being used by some students in place of traditional methods of cheating. But I must note that this is only my experience, and it is possible that AI-generated papers are slipping past Turnitin. Fortunately, I do not need to rely on my experience alone and can avail myself of the work of experts on cheating.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While cheaters might lie about cheating, Pope and Lee use methods to address this challenge. While cheating remains a problem, AI has not increased it and hence reports of the death of the academy are premature. It will, more likely, die by another hand.

This lack of increase makes intuitive sense, as cheating has always been easy and the decision to cheat is more a matter of moral and practical judgment than something driven by technology. While technology provides new means of cheating, a student must be willing to cheat, and that percentage seems stable. But it is worth considering that there might have been a wave of AI cheating but for the efforts to counter it; to fail to consider this possibility would be to fall for the prediction fallacy.

It is also worth considering that AI has not lived up to the hype because it is not a great tool for cheating. As Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. To be fair, assignments in higher education can be useless. But if AI is being used to complete useless assignments, this is a problem with the assignments (and the professors) and not AI.

But large language models are a new technology, and their long-term impact on cheating remains to be determined. Things could change in ways that do result in the predicted flood and the doom of the academy as we know it. In closing, while AI cheating will probably not destroy the academy, we should not become complacent. Universities should develop AI policies based on reliable evidence. A good starting point would be collecting data from faculty about the likely extent of AI cheating.

While the ideals of higher education are often presented as being above the concerns of mere money, there is nothing inherently wrong with for-profit colleges. Unless, of course, there is something inherently wrong with for-profit businesses in general. So, it should not be assumed that a for-profit college must be bad, ripping students off or providing useless degrees. That said, the poor reputation of for-profit colleges is well earned.

One tempting argument against for-profit colleges is that by being for-profit they must always charge students more or offer less than comparable non-profit colleges.  After all, as the argument could go, a for-profit college would need to do all that a non-profit does and still make a profit on top of that. This would need to be done by charging more or offering less for the same money. However, this need not be the case.

Non-profit and public colleges are now often top-heavy in terms of administrators and administrative salaries. They also spend lavishly on amenities, sports teams and such. These “extras” are all things that a well-run for-profit college could cut while still offering the core service of a college, namely education. For students who do not want the extras or who would rather not help fund the administrators, this can be a win-win scenario: the student gets the education they want for less than they would pay elsewhere, and the college’s owners profit by being efficient. This is the dreamworld ideal of capitalism.

Sadly, the actual world is usually a nightmare: for-profit schools often turn out as one would expect, namely predatory and terrible. One reason for this is that they are focused on making as much profit as possible, and this consistently leads to the usual bad behavior endemic to the for-profit approach. While regulation is supposed to keep the bad behavior in check, in the last Trump administration Betsy DeVos curtailed oversight of these colleges. As a specific example, her department stopped cooperating with New Jersey on the fraudulent activities of for-profit colleges. Trump’s second administration is likely to be even more permissive. If the state neglects to check bad behavior, then people are limited only by their own values, and it is generally a bad idea to leave important matters up to individual conscience. For example, it would be foolish for the state to hand out welfare by trusting everyone and never verifying their claims. Likewise, it would be foolish to allow for-profit colleges to do as they wish without proper oversight.

As should be expected, I have been against the terrible for-profit colleges. I also extend my opposition to terrible non-profits and terrible public colleges: what I am against is the terrible part, not the profit part. As with much bad behavior that harms others, the most plausible solution is to have and enforce laws against that bad behavior. Conservatives who are concerned about welfare fraud are not content to rely on the conscience of the recipients nor are they willing to simply allow an invisible hand to ensure that things work out properly. They, obviously enough, favor the creation and enforcement of laws to prevent people from committing this fraud. By parity of reasoning, for-profit colleges cannot be expected to operate virtuously with only the conscience of their owners as their guide. The invisible hand cannot be counted on to ensure that they do not engage in fraud and other misdeeds. What is needed, obviously enough, is the enforcement of the laws designed to protect taxpayers and students from being defrauded by the unscrupulous.

It could be argued that while the invisible hand and conscience cannot work in the case of, for example, welfare cheats, they work in the context of business. In the case of for-profit schools, one might argue they will fail if they do not behave, and the free market will sort things out. The easy and obvious reply is to agree that the bad colleges do fail, the problem is that they do a lot of damage to the students and taxpayers in the process. This is a bit like arguing that society does not need laws, since eventually vigilantes might take care of thieves and murderers. As Hobbes noted, the state of nature does not work terribly well.

This is not to say that I believe for-profits should be strangled by bureaucracy. Rather, the laws and enforcement need to focus on preventing harm like fraud. If a business model cannot succeed without including fraud and other misdeeds, then there is clearly a problem with that model.

One common approach to restricting abortion is to push the time limit ever closer to conception.  One moral argument for this is based on the claim that at a specific time the fetus has qualities that grant it a moral status such that killing it is immoral. While some claim this moment is conception, most find this implausible since a single cell lacks qualities that could confer a suitable moral status.

From a moral and pragmatic standpoint, it makes more sense to select a time when the fetus has qualities that are intuitively relevant to its moral status. These include such qualities as being able to feel pain, the ability to respond to stimuli and presenting other evidence of mental activity. While political opponents of abortion rarely advance philosophically rigorous arguments, the case is easy enough to make. One obvious approach is utilitarian: the fetus’ capacity to suffer means it counts in the moral calculation of abortion. The obvious problem with this approach is that the fetus has far less capacity than the mother, and thus her interests would always seem to outweigh those of the fetus. This is analogous to similar arguments about the treatment of animals: if the woman’s interests outweigh those of a fetus that can feel pain, then the same would hold true when the interest of a human conflicts with that of an animal. If the utilitarian approach is adopted to argue in defense of fetuses, then moral consistency would require that the same argument be applied to animals with qualities equal to (or greater than) those possessed by a fetus. This would include many animals that are used for food, such as chickens, cows and pigs. As such, if abortion should be restricted based on the qualities of the fetus, then the killing of such animals should also be restricted. If abortion should be legally restricted on these moral grounds, then vegetarianism should also be imposed by law. That said, consistency seems to be of no concern in politics. While utilitarianism does have considerable appeal, there are alternatives.

A common alternative to utilitarianism is deontology, the view that actions are inherently right or wrong regardless of the consequences. On this view, killing a being that has the right sort of moral status would always be wrong and utilitarian calculations are irrelevant. So, if the fetus has the right sort of status at a specific stage of development, then it would be wrong to kill the fetus. As with the utilitarian argument, this would seem to open the moral door for animals as well. For example, if the capacity to feel pain grants the fetus a moral status that forbids killing it and justifies legally restricting abortion, then animals that feel pain must also be granted the same status and legal protection. Put roughly, if abortion should be restricted because fetuses feel pain, then meat consumption should be restricted because animals feel pain. Any arguments advanced involving painless killing of animals to morally justify meat consumption would also justify painless abortion. Obviously enough, moral arguments that contend that it is acceptable to kill animals because they are inferior to us would also apply to developing fetuses.

It must be noted that the above discussion is focused on moral arguments involving the claim that the fetus has moral status because it has certain empirically testable qualities, such as the capacity to feel pain. While arguments based on non-empirical qualities (such as arguments based on the soul) can be advanced, they would take the discussion beyond ethics and into metaphysics. That said, there does seem to be an obvious way to restrict abortion based on empirical qualities while consistently avoiding extending the same legal protection to animals.

Fetuses are, obviously enough, biologically human. Thus, it could be argued that this is the key moral difference between humans and animals that would justify restricting abortion while still allowing animals to be used as food. One problem with this approach is that if being human is what matters, then appeals to other qualities are irrelevant. So, the argument should simply be that killing humans is wrong, fetuses are human, so killing them is wrong. While this is an option, it does abandon the appeal to qualities argument. This is an approach that some anti-abortion people use; although they also use the qualities argument as a persuasive device (even when they would not apply it to animals).

It could be argued that it is a combination of being human and the other relevant qualities that give the fetus its special status—this would allow for abortion before the fetus has those qualities while also denying animals with those qualities an analogous moral status. The challenge is showing what it is about being human that makes the moral difference. While humans are humans, it can also be said that cows are cows and the question remains as to what it is about being human that makes the moral difference. If it is a quality, then that quality can be pointed to. If it is mere species membership, then that seems utterly arbitrary and unprincipled—mere speciesism. As such, it would seem that any arguments designed to restrict abortion based on the empirical qualities of fetuses would also apply to animals possessing equal or greater qualities. If the legal restriction of abortion based on the appeal to qualities is justified, the same justification would require legal restrictions on killing animals. Roughly put, legally restricting abortion in a consistent way would effectively require legally mandating vegetarianism.