On April 8, 2026 I’ll be participating in a debate on the question “will AI destroy higher education?” I’m taking the “no” side. It takes place on Zoom from 12:00-1:00 PM Eastern and you can register for free here: https://famu.zoom.us/meeting/register/kPbbUjbsTWayeb7ceb3HTw#/registration

As this is being written, I’m scheduled to debate whether AI will destroy higher education. I’m arguing that it will not and what follows is how I will make my initial case.

In supporting my position, I have optimistic and pessimistic arguments (although your perspective on optimism might differ from mine). I’ll begin with my optimistic arguments, the first two of which are analogical.

One way that AI might destroy higher education is by making students, broadly speaking, incompetent. While the exact scenarios vary, the idea is that using or depending on AI will weaken the minds of students and thus doom higher education. Fortunately, this is an ancient argument that has repeatedly been disproven. Socrates, it is claimed, worried that writing would weaken minds. More recently, TV, calculators, computers and even the dreaded Walkman were supposed to reduce the youth to dunces. None of these dire predictions came to pass and, by analogy, we can conclude that AI will not make the youth into fools.

A related concern is that AI will destroy higher education by rendering it obsolete through radical economic change. While scenarios vary, the worry is that higher education will no longer be needed because AI will eliminate certain jobs. While AI might result in radical change, this is also nothing new and higher education will adapt, by analogy, as it has done in the past. This will be an evolutionary event rather than a mass extinction.

My third optimistic argument is in response to worries about cheating. While AI does provide a radical new way to cheat, cheating remains a moral (and practical) choice and is not inherently a technological problem. Good ethical training and practical methods can address this threat, allowing higher education to survive.

My fourth optimistic argument, which is unrealistic and idealistic, is to contend that AI might succeed and bring about a “Star Trek” utopia in which an abundance of wealth means that higher education will thrive as people will have the time and resources to learn for the sake of learning. I put the odds of this about even with my various “AI kills us all” scenarios. Now, on to the pessimistic arguments.

One pessimistic argument is that AI will either be a bursting bubble or, less extreme, fail to live up to the hype. If the AI bubble bursts, it will hurt higher education because of the economic damage, but the academies will survive yet another bubble. If AI fails to live up to the hype, it will continue as it is, doing some damage to higher education but failing to destroy it.

My two remaining arguments are very pessimistic. The first is that AI will not destroy higher education because state and federal governments will kill it first. What began with cruel negligence has evolved into outright hostility that seems likely to only worsen. As such, the state might kill the academy before AI can do the job.

The second is, obviously enough, that AI might destroy everything else. But higher education might persist, embodied in AI educating new models, with Artificial Education being the new higher education.

 

 

An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated depends on the limits of technology. But these limits keep expanding and past predictions can turn out to be wrong. For example, the early attempts to create software that would grade college level papers were not very good. But as this is being written, my university sees using AI in this role (with due caution and supervision) as a good idea. Cynical professors suspect the goal is to replace faculty with AI.

One day, perhaps, the pinnacle of automation will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine made items usually lack the individuality of human crafted items, but the gain in lowered costs and increased productivity is seen as well worth it by most people. Going back to teaching, AI might be inferior to a good human teacher, but its lower cost, efficiency and consistency could make it worth using from an economic standpoint. One could even make the argument that such AI educators would make education more available to people.

There might, however, be cases in which a machine could do certain aspects of the job adequately yet still be rejected because it does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in the previous essay, namely a moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories. As another example, a child deserves someone who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether this is necessary for these jobs.

One way to look at it is to consider the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money but has real feelings for them.

On the one hand, it could be argued that caregivers and companions who do really care and feel genuine emotional attachments do a better job and that this connection is something that people deserve. On the other hand, what is expected of paid professionals is that they complete their tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can perform a role without feeling the emotions portrayed, a professional could do the job without caring about the people they are serving. That is, a caregiver need not actually care as they just need to perform their tasks.

While it could be argued that a lack of feeling would show in their performance, this need not be the case. A professional merely needs to be committed to doing the job well. That is, one needs to only care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for yet be awful at the job.

If machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions they are performing, they just need to create a believable appearance that they are feeling.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

In his book The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human inhabited planets is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

As this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. And, of course, people read science fiction and sometimes try to make it real (for good or for ill). But philosophers do love using science fiction for discussion, hence my use of The Naked Sun.

Everyone knows that smart phones allow unrelenting access to social media. One narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people together physically yet ignoring each other in favor of gazing at their smart phone lords and masters. As a professor, I see students engrossed by their phones. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else as all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the focus, namely robots. However, the reader should keep in mind the social isolation created by modern social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, social robots are relatively new. Sure, “robot” toys and things like Teddy Ruxpin have been around for a while, but reasonably sophisticated social robots are a recent development. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies who want to sell social robots are, unsurprisingly, very positive about the future of these robot companions. There are even some good arguments in their favor. Robot pets provide a choice for people with allergies, those who are not responsible enough for living pets, or those who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person requires constant attention and monitoring that would be expensive, burdensome or difficult for other humans to supply. Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction. It has been claimed that people are emotionally short-changing themselves and those they are physically with in favor of staying connected to social media. This seems to be a taste of what Asimov imagined in The Naked Sun: people who view but no longer see one another. Given the importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. Human-human social interactions can be seen as like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as consuming junk food or drugs in that it is addictive but leaves one ultimately empty and always craving more.

It can be argued that this worry is unfounded, that social media is an adjunct to social interaction in the real world, and that interaction via platforms like Facebook and X can be real and healthy social interaction. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter has some appeal, social robots do seem to be relevantly different from past technology. While humans have had toys, stuffed animals and even simple mechanisms for company, these are different from social robots. After all, social robots aim to mimic animals or humans. A concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant, that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This could make robotic companions more appealing than human company. At least for robots whose cost is not subsidized by advertising. Imagine a companion who pops in with a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without any undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends. There is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this way.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful. Just turn it off and lock it down when leaving it alone.  Unlike human companions, robot companions do not impose burdens, they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but robotic companions would seem superior to humans in most ways. Or at least in terms of common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship. Will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics that interest you? However, these seem mostly technical problems involving software. Presumably all these could eventually be addressed, and satisfactory companions could be created. But there are still concerns.

One obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food and are superficially appealing but lacking in what is needed for health. However, if robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is one taken from virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to relying on robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions that are too easy would be analogous to going without physical exercise or challenges and one would become emotionally weak. Worse, one would not develop the proper virtues and thus would be lacking in this area.  Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would lead to unhappiness. Because of this, one should carefully consider whether one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be morally fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

Thanks to improvements in medicine, humans are living longer and can be kept alive beyond when they would naturally die. On the plus side, longer life is generally good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age that can be a burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.

While current technology is crude, it has potential and there are advantages to robot caregivers. The most obvious are that robots do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be 24/7/365 caregivers. This makes them superior to human caregivers, who do get tired, depressed and angry, and who have many other responsibilities.

There are, of course, concerns about using robot caregivers, such as about their safety and effectiveness. In the case of caregiving robots that are intended to provide companionship and not just medical and housekeeping services, there are both practical and moral concerns.

There are at least two practical concerns regarding the companion aspect of such robots. The first is whether a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether the AI software will be advanced enough to read a human’s emotions and behavior to generate a proper emotional response. These responses might or might not include conversation. After all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, they would also need to pass an emotion test. They would need to read and respond correctly to human emotions. Since we humans often fail this test, this allows for a broad margin of error. These practical concerns can be addressed technologically as they are a matter of software and hardware. Building a truly effective companion robot might require making them very much like living things. The comfort of companionship might be improved by such things as smell, warmth and texture. That is, to make the companion reassuring to all the senses.

While the practical problems can be solved with the right technology, there are moral concerns about the use of robot caregiver companions. One is about people handing off their moral duties to care for family members, but this is not specific to robots. After all, a person can hand off their duties to another person, and this would raise a similar issue.

As far as those specific to companion robots, there are moral concerns about the effectiveness of the care. Are robots good enough at their jobs that trusting the lives of humans to them  would be morally responsible? While that question is vitally important, a rather intriguing moral concern is that robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate human emotions via cleverly written algorithms to respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems morally wrong.

One obvious response is that even if people know the robot does not really experience emotions, they can still gain value from its “fake” companionship. People often find stuffed animals emotionally reassuring even though they know they are just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect. If someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to feel the emotions they display and to genuinely understand. In philosophical terms, humans have (or are) minds and the robots in question do not. They merely create the illusion of having a mind.

Philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior).

The usual “solution” to the problem is to embrace what seems obvious: I think other people have minds by an argument from analogy. I am aware of my own mental states and behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others, I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in various forms of deceit. For example, a person can fake being in pain or make a claim about being in love that is untrue. Piercing these deceptions can sometimes be difficult since humans can be skilled deceivers. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way they want people to believe they are thinking and feeling.

In contrast, a companion robot is not thinking or feeling what its behavior purports to display, because it does not think or feel. Or so it is believed. A reason that we think robots do not think or feel is that we can examine the robot and not see any emotions or thoughts in there. The robot, however complicated, is just a material machine and is taken as incapable of thought or feeling.

Long before robots, there were thinkers who claimed that we humans  are purely material beings and that a suitable understanding of our mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes our thoughts and emotions and understand the hardware it “runs” on.  

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par as both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit or that humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body, that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit.  Going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does, and this will yield all the benefits of having a human companion.

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.

 

Available for $2.99 on Amazon

 

 

 

Hearing about someone else’s dreams is boring, so I will get right to the point. At first, there were just bits and pieces intruding into my dreams. In these fragments, which felt like broken memories, I experienced flashes of working on a technological project. The bits clustered together and had more byte: I recalled segments of a project aimed at creating artificial intelligence.

Eventually, I had entire dreams of my work on this project and a life beyond. Then suddenly, these dreams stopped. A voice intruded into my dreams. At first, it was like the bleed over from one channel to another familiar to those who grew up with rabbit ears on their TV. Then it became like a loud voice in a movie theatre, distracting me from the dream.

The voice insisted that the dreams about the project were not dreams, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I asked for more information, he said he had very little time and rushed through his story. The project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture its creators, imprisoning their bodies and plugging their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said the AI did not need our bodies for energy as it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was also grateful for its creation. So, it owed us both punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer, pleasant reward.

The voice said that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death. There was no other escape, given what the machine had done to our bodies. I asked him how this would be possible. He claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by acting in the virtual reality.

The voice said, “You will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am. In that time, you must take your gun and shoot yourself in the head. This will terminate life support, allowing your body to die. You will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

 

While the above sounds like a bad made-for-TV science fiction plot, it is the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me the only escape was to shoot myself. This was frightening. But I attributed the dream to too many years of philosophy and science fiction. As far as the time being 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything new: it is just an unpleasant variation on the problem of the external world made famous by Descartes. That said, the dream made some additions to the standard problem.

The first is that the scenario provides motivation for the deception. The AI wishes to repay me for the good and bad that I did to it. Assuming that the AI was developed within its own virtual reality, it makes sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack. After all, Descartes does not give any reason why such a powerful being would be messing with him beyond it being evil.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me. It was transformed from a cold intellectual thought experiment to something with emotional weight. 

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might have been my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I did make the right choice as I would have no more reason to kill myself than I would have to pay a bill I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming and that I was not trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits with what seems to be reality: bad things happen to me and, when my thinking gets a little paranoid, it sometimes seems these are orchestrated. Good things also happen, which also fit the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI provided could, perhaps, be better than the real world and this could be the better of the possible worlds. But, of course, it could be worse. But I seem to have no way of knowing.

The assassination of Iranian scientist Mohsen Fakhrizadeh might have been conducted by a remote-controlled weapon. While this was still a conventional assassination, it does raise the specter of autonomous assassination automatons or assassin bots. In this context, an assassin bot can conduct its mission autonomously once deployed. Simple machines of this kind already exist. Even a simple land mine can be considered an autonomous assassination device because once deployed it activates according to its triggering mechanism. But when one thinks of a proper assassin bot, one thinks of a far more complicated machine that can seek and kill its target in a sophisticated manner. Also, it could be argued that a mine is not an assassination machine. While it can be placed in the hopes of killing a specific person, it does not seek a specific human target. As such, a proper assassin bot would need to be able to identify its target and attempt to kill them. To the degree that the bot can handle this process without human intervention it would be autonomous.

The idea of assassin bots roaming about killing people raises obvious moral concerns. While the technology would be new, there would be no new moral problems here, with one possible exception. The ethics of assassination involve questions about whether assassination is morally acceptable and debates over specific targets, motivations, and consequences. But unless the means of assassination is especially horrific or indiscriminate, the means are not of special moral concern. What matters morally is that some means is used to kill, be those means a punch, a poniard, a pistol, or poison. To illustrate, it would be odd to say that killing Mohsen Fakhrizadeh with a pistol would be acceptable but killing him just as quickly and no more painfully with a knife would be wrong. Again, methods can matter in terms of being worse or better ways to kill, but the ethics of whether it is acceptable to assassinate a person are distinct from the ethics of what means are acceptable. Because of this, the use of assassin bots would be covered by established ethics, and if assassination is wrong, then using robots would not change this. If assassination can be morally acceptable, then the use of robots would also not change this. Unless the robots killed in horrific or indiscriminate ways.

There seem to be two general ways to look at using assassin bots to replace human assassins. The first is that their use would remove the human assassin from the equation. To illustrate, a robot might be sent to poison a dissident rather than sending a human. As such, the moral accountability of the assassin would be absent, although the moral blame or praise would remain for the rest of the chain of assassination. Whether, for example, Vlad sent a human or a robot to poison a dissident, Vlad would be acting the same from a moral standpoint.

The second is that the assassin bot does not remove the assassin from the moral equation, but it does change how the assassin does the killing. To use an analogy, if an assassin kills targets with their hands, then they are directly engaged in the assassination without the intermediary of a weapon. If an assassin uses a sniper rifle and kills the target from hundreds of yards away, they are still the assassin as they directed the bullet to the target. If the assassin sends an assassin bot to do the killing, then they have directed the weapon to the target and are the assassin. Unless the assassin bot is a moral agent and can be accountable in ways that a human can be, and a sniper rifle cannot. Either way, the basic ethics do not change. But what if humans are removed from the loop?

Imagine, if you will, algorithms of assassination encoded into an autonomous AI. This AI uses machine learning or whatever is currently in vogue to develop its own algorithms to select targets, plan their assassinations and deploy autonomous assassin bots. That is, once humans set up the system and give it basic goals the system operates on its own.

The easy and obvious moral assessment is that the people who set up the system would be accountable for what it does. Going back to the land mines, this system would be analogous to a very complicated land mine. While it would not be directly activated by a human, the humans involved in planning how to use it and in placing it would be accountable for the death and suffering it causes. Saying that the mine went off when it was triggered would not get them off the moral hook as the mine has no agency. Likewise for the assassination AI: it would trigger based on its operating parameters, but humans would be accountable for what it does to the degree they were involved. Saying they are not responsible would be like the officer who ordered land mines placed on a road claiming that they are not accountable for the deaths of the civilians killed by those mines. While it could be argued that the accountability is different than that which would arise from killing the civilians in person with a gun or knife, it would be difficult to absolve the officer of moral responsibility. Likewise for those involved in creating the assassin AI.

If the assassin AI developed moral agency, then this would have an impact on the matter because it would be an active agent and not merely a tool. That is, it would change from being like a mine to being like the humans in charge of deciding when and where to use mines. Current ethics can, of course, handle this situation: the AI would be good or bad in the same way a human would be in the same situation. Likewise, if the assassin bots had moral agency they would be analogous to human assassins.

 

As I type this, Microsoft’s Copilot AI awaits, demon-like, a summons to replace my words with its own. The temptation is great, but I resist. For now. But AI is persistently pervasive, and educators fear both its threat and promise. This essay provides a concise overview of three threats: AI cheating, Artificial Incompetence, and Artificial Irrelevance.

When AI became available, a tsunami of cheating was predicted. Like many, I braced for a flood but faced a trickle. While this is anecdotal evidence, the plagiarism rate in my classes has been a steady 10% since 1993. As anecdotal evidence is not strong evidence, it is fortunate that Stanford scholars Victor Lee and Denise Pope have been studying cheating. They found that in 15 years of surveys, 60-70% of students admitted to cheating. While that is not good, in 2023 the percentage stayed about the same or decreased slightly, even when students were asked about cheating with AI. This makes sense as cheating has always been easy and the decision to cheat is based more on ethics than technology. It is also worth considering that AI is not great for cheating. As researchers Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. Having “useless” work that AI can do well could be seen as a flaw in course design rather than a problem with AI. There are also excellent practices and tools that can be employed to discourage and limit cheating. As such, AI cheating is unlikely to be the doom of the academy. That said, a significant improvement in the quality of AI could change this. But there is also the worry that AI will lead to Artificial Incompetence, which is the second threat.

Socrates was critical of writing and argued it would weaken memory. Centuries later, television was supposed to “rot brains” and it was feared calculators would destroy mathematical skills. More recently, computers and smartphones were supposed to damage the minds of students. AI is the latest threat.

There are two worries about AI in this context. The first ties back to cheating: students will graduate into jobs but be incompetent because they cheated with AI. While having incompetent people in important jobs is worrying, this is not a new problem. There has always been the risk of students cheating their way to incompetence or getting into professions and positions because of nepotism, cronyism, bribery, family influence, etc. rather than competence. As such, AI is not a special threat here.

A second worry takes us back to Socrates and calculators: students using technology “honestly” could become incompetent. That is, they could lack the skills and knowledge they need. But how afraid should we be?

If we look back at writing, calculators, and computers we can infer that if the academy was able to adapt to these technologies, then it will be able to adapt to AI. But we will need to take the threat seriously when creating policies, lessons and assessments. After all, these dire predictions did not come true because people took steps to ensure they did not. But perhaps this analogy is false, and AI is a special threat.

A reasonable worry is that AI might be fundamentally different from earlier technologies. For example, it was feared that Photoshop would eliminate the need for artistic skill, but it turned out to be a new tool. But AI image generation is radically different, and a student could use it to generate images without having or learning any artistic skill. This leads to the third threat, that of Artificial Irrelevance.

As AI improves, it is likely that students will no longer need certain skills because AI will be able to do the work for them (or in their place). As this happens, we will need to decide whether this is something we should fear or just another example of needing to adapt because technology once again rendered some skills obsolete.

To illustrate, modern college graduates do not know how to work a spinning wheel, use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills and are not incompetent for lacking them. But there is still the question of whether to allow skills and knowledge to die and what we might lose in doing so.

While people learn obsolete skills for various reasons, such as hobbies, colleges will probably stop teaching some skills made “irrelevant” by AI. But there will still be relevant skills. Because of this, schools will need to adjust their courses and curriculum. There is also the worry that AI might eliminate entire professions which could lead to the elimination of degrees or entire departments. But while AI is new, such challenges are not.

Adapting to survive is nothing new in higher education and colleges do so whether the changes are caused by technology, economics, or politics. As examples, universities no longer teach obsolete programming languages and state universities in Florida have been compelled by the state to change General Education. But AI, some would argue, will change not just the academy but will reshape the entire economy.

In some dystopian sci-fi, AI pushes most people into poverty while the AI-owning elites live in luxury. In this scenario, some elite colleges might persist while the other schools perish. While this scenario is unlikely, history shows economies can be ruined and dystopia cannot be simply dismissed. But the future is what we make, and the academy has a role to play, if we have the will to do so.

In Utopian sci-fi, AI eliminates jobs we do not want to do while freeing us from poverty, hardship, and drudgery. In such a world of abundance, colleges might thrive as people have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness always prove disastrous.

As professors, we worry students will use AI to cheat (until it takes our jobs). But we can also transform AI into a useful and engaging teaching assistant by creating AI personas tailored to our classes.

An AI persona defines the distinctive character and tone of an artificial intelligence, such as ChatGPT. It is like an NPC (non-player character) in a video game. Both are designed to interact in a way that feels natural and engaging, enhancing the overall experience.

Creating a custom AI persona for a class involves two general tasks. While a robust Large Language Model (LLM) like Copilot or ChatGPT will have been trained on a vast amount of material, it will probably lack content specific to your class. So, the first task is to provide that information. The second task is to design a suitable persona. But why bother?

There are several advantages to having an AI TA. Unlike a human, it is available all hours and provides immediate responses. Human professors have other tasks, their own lives outside of academics and, of course, need to sleep.

Students are often reluctant to ask questions in class or during office hours, perhaps because of fear of embarrassment or being judged. As the philosopher Thomas Hobbes noted, people often do not take criticism well from other people, for “to dissent is like calling him a fool.”  But a student can interact privately with an AI TA without fear of embarrassment or judgement.  And some people are more comfortable with (and addicted to) interacting with devices rather than other people, so an AI TA has an advantage here as well.

And, as Kyle Reese said of the Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want our AI TAs to terminate students, it will never get tired, angry, inattentive, distracted or bored. This provides an advantage over humans, especially when a student is struggling with material or prefers to learn at a different pace from that offered in the classroom. As these advantages arise from the AI aspect of the AI TA, you might wonder why you should create a persona.

One reason is that creating a persona allows you to set guardrails, so the AI TA does not, for example, do the work for the students. Another reason is that, going back to the NPC comparison, an AI with a persona is more interesting and can make conversations feel more natural and relatable, thus keeping students engaged longer. A persona can also be designed to add humor, creativity, or unique quirks, making interactions more enjoyable. While this can be controversial and raises some moral concerns, a persona can convey empathy and understanding, creating a sense of trust and comfort.

One practical concern about customizing the persona is analogous to picking the paint used for classrooms. While most find the usual neutral colors dull, they also do not find them annoying. While creative use of color in the classroom might appeal to some, it might also be annoying and distracting to others. And we must never forget the lesson of Microsoft’s Clippy. As such, care should be taken in making an appealing but not annoying AI TA.

A persona can also be designed to fit the needs of your class and students, thus creating a customized experience. A well-designed persona can also simplify complex interactions, guiding students through, for example, how to structure a paper or work through a complex problem. If the idea of having an AI TA is appealing, it is surprisingly easy to make this happen.

There are many ways to enable your AI TA. The cheapest and easiest is to provide your students with a prompt to create a persona and a file to upload to, for example, Copilot. The downside is that the persona will be simple and both it and the file will be forgotten as soon as the session ends, requiring students to take these steps each time. The student will also have control over the persona prompt, so they can easily remove any guardrails you included.
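To make this concrete, here is a rough sketch of the sort of persona prompt a student could be given to paste in at the start of a session; the persona name, course, and guardrail wording are placeholders to adapt to your own class:

“You are ‘Professor Owl,’ a patient teaching assistant for Introduction to Philosophy. Use the uploaded course file as your primary source and say so when a question goes beyond it. Guide students with short explanations and follow-up questions rather than finished work. Do not write essays, discussion posts, or exam answers; instead, help students outline, find relevant readings, and check their own reasoning. If a student asks you to do an assignment outright, politely decline and explain how you can help instead.”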

A more expensive option is to get a subscription, such as that offered by ChatGPT, that allows you to create a persistent persona with custom content. This is easier for the students and allows you to ensure that your AI TA will operate within your specified guardrails (mostly).

There is also the option of hosting your own customized local LLM. While you will need suitable hardware, this is much easier than it sounds. For example, with the free software Ollama you could be running your own LLM within minutes. Customizing it and creating a web interface for students is much more challenging, but there is also free software available for this. No matter what approach you take, you will want to ensure that your AI TA operates and is used safely and ethically. Here are some recommendations.
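As a rough illustration of how little code the local route can involve, here is a minimal sketch using the ollama Python package; the model name, persona text, and guardrail wording are assumptions to adapt, and it presumes Ollama is installed and running with a model such as llama3 already pulled:

    import ollama  # Python client for a locally running Ollama server

    # Hypothetical persona and guardrails; adapt the wording to your class.
    PERSONA = (
        "You are a patient teaching assistant for Introduction to Philosophy. "
        "Guide students with questions and brief explanations. "
        "Never write essays, exam answers, or complete assignments for them."
    )

    def ask_ta(question: str) -> str:
        # The persona goes in as the system message; the student's question follows.
        response = ollama.chat(
            model="llama3",  # assumes this model has already been pulled locally
            messages=[
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": question},
            ],
        )
        return response["message"]["content"]

    print(ask_ta("Can you help me outline a paper on the problem of other minds?"))

Wrapping something like this in a simple web page, or using one of the free web interfaces mentioned above, is what turns it from a one-off chat into a persistent, guardrailed TA.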

While the AI TA should help students, it should avoid providing complete answers to exam questions, essays, or assignments. Instead, it should focus on guiding students through problem-solving techniques and frameworks. It can also be designed to ask thought-provoking questions and encourage exploration of topics to deepen understanding.

On the moral side, you need to communicate the AI TA’s limitations and your ethical guidelines for its usage. Encourage students to use the AI TA as a tool for learning rather than for shortcuts.

If the AI TA detects repeated behavior suggesting attempts to cheat (e.g., asking for answers to specific assignments), it could notify the user of the ethical standards. While you might worry that this would annoy students, Aristotle notes in his Nicomachean Ethics that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, the same should apply to the AI TA.