As I type this, Microsoft’s Copilot AI awaits, demon-like, a summons to replace my words with its own. The temptation is great, but I resist. For now. But AI is persistently pervasive, and educators fear both its threat and its promise. This essay provides a concise overview of three threats: AI cheating, Artificial Incompetence, and Artificial Irrelevance.
When AI became available, a tsunami of cheating was predicted. Like many, I braced for a flood but faced a trickle. My own evidence is anecdotal: the plagiarism rate in my classes has held steady at about 10% since 1993. As anecdotal evidence is not strong evidence, it is fortunate that Stanford scholars Victor Lee and Denise Pope have been studying cheating systematically. They found that across 15 years of surveys, 60-70% of students admitted to cheating. While that is not good, in 2023 the percentage stayed about the same or decreased slightly, even when students were asked about cheating with AI. This makes sense: cheating has always been easy, and the decision to cheat is based more on ethics than technology.

It is also worth considering that AI is not great for cheating. As researchers Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. Having “useless” work that AI can do well could be seen as a flaw in course design rather than a problem with AI. There are also excellent practices and tools that can be employed to discourage and limit cheating. As such, AI cheating is unlikely to be the doom of the academy. That said, a significant improvement in the quality of AI could change this. There is also the worry that AI will lead to Artificial Incompetence, which is the second threat.
Socrates was critical of writing and argued it would weaken memory. Centuries later, television was supposed to “rot brains,” and it was feared that calculators would destroy mathematical skills. More recently, computers and smartphones were supposed to damage the minds of students. AI is the latest threat.
There are two worries about AI in this context. The first ties back to cheating: students will graduate into jobs but be incompetent because they cheated with AI. While having incompetent people in important jobs is worrying, this is not a new problem. There has always been the risk of students cheating their way to incompetence or getting into professions and positions because of nepotism, cronyism, bribery, family influence, etc. rather than competence. As such, AI is not a special threat here.
A second worry takes us back to Socrates and calculators: students using technology “honestly” could still become incompetent. That is, they could lack the skills and knowledge they need. But how afraid should we be?
If we look back at writing, calculators, and computers, we can infer that if the academy was able to adapt to those technologies, then it will be able to adapt to AI. But we will need to take the threat seriously when creating policies, lessons, and assessments. After all, those dire predictions did not come true precisely because people took steps to ensure they did not. But perhaps this analogy is false, and AI is a special threat.
A reasonable worry is that AI might be fundamentally different from earlier technologies. For example, some worried that Photoshop would eliminate the need for artistic skill, but it turned out to be just another tool. AI image generation, however, is radically different: a student could use it to generate images without having or learning any artistic skill. This leads to the third threat, that of Artificial Irrelevance.
As AI improves, it is likely that students will no longer need certain skills because AI will be able to perform them for students (or in their place). As this happens, we will need to decide whether this is something we should fear or just another example of needing to adapt because technology has once again rendered some skills obsolete.
To illustrate, modern college graduates do not know how to work a spinning wheel, use computer punch cards, or troubleshoot an AppleTalk network. But they do not need such skills and are not incompetent for lacking them. Still, there remains the question of whether to allow skills and knowledge to die and what we might lose in doing so.
While people learn obsolete skills for various reasons, such as hobbies, colleges will probably stop teaching some skills made “irrelevant” by AI. But there will still be relevant skills, and because of this, schools will need to adjust their courses and curricula. There is also the worry that AI might eliminate entire professions, which could lead to the elimination of degrees or even entire departments. But while AI is new, such challenges are not.
Adapting to survive is nothing new in higher education, and colleges do so whether the changes are driven by technology, economics, or politics. For example, universities no longer teach obsolete programming languages, and state universities in Florida have been compelled by the state to revise their General Education requirements. But AI, some would argue, will reshape not just the academy but the entire economy.
In some dystopian sci-fi, AI pushes most people into poverty while the AI-owning elites live in luxury. In this scenario, some elite colleges might persist while the other schools perish. While this scenario is unlikely, history shows that economies can be ruined, and dystopia cannot simply be dismissed. But the future is what we make it, and the academy has a role to play, if we have the will to play it.
In utopian sci-fi, AI eliminates the jobs we do not want to do while freeing us from poverty, hardship, and drudgery. In such a world of abundance, colleges might thrive as people have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.
In closing, the most plausible scenario is that AI has been overhyped: while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring, because complacency and willful blindness always prove disastrous.