Microsoft’s Copilot AI waits, demon-like, for my summons so that it might replace my words with its own. The temptation is great, but I resist and persist in relying on my own skills. Some warn, however, that others will lack my resolve and that the academy will be destroyed by a deluge of cheating.
Those profiting from AI, including those selling software that promises to detect AI cheating, deliver dire warnings about the dangers of AI and how it will surpass humans in skills such as writing and test-taking. Because of this, they insist, the regulations written by the creators of AI must become law, and academic institutions must subscribe to AI detection tools. And, of course, embrace AI themselves. While AI presents both a promise and a threat, there is the question of whether it will destroy the academy as we know it. The first issue I will address is whether AI cheating will “destroy” the academy.
Students, I suspect, have been cheating since the first test, and plagiarism has presumably existed since the invention of language. Before the internet, committing and detecting plagiarism both involved finding physical copies of works. As computers and the internet developed, digital plagiarism and digital detection evolved together. For example, many faculty use Turnitin, which can detect plagiarism. It seemed that students might be losing the plagiarism arms race, but many worried that easy access to AI would turn the battle in favor of the cheating students. After all, AI makes cheating easy, affordable, and harder to detect. For example, large language models allow “plagiarism on demand” by generating new text with each prompt. As I write this, Microsoft has made Copilot part of its Office subscription, and as many colleges provide the Office programs to their students, they are handing students tools for cheating. But has AI caused the predicted flood of cheating?
Determining how many students are cheating is like determining how many people are committing crimes: you only know how many have been caught or have admitted to it, not how many are actually doing it. Because of this, inferences about how many students are cheating must be made with caution to avoid the fallacy of overconfident inference from unknown statistics.
One source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 of them, while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate. Turnitin claims a false positive rate of 1%, yet we need to worry about AI detection software generating both false positives and false negatives.
For false positives, one concern is that “GPT detectors are biased against non-native English writers.” For false negatives, the worry is that AI detectors can be fooled. As the algorithms used in proprietary detection software are kept secret, we do not know what biases and defects they might have. For educators, the “nightmare” scenario is AI-generated work that cannot be detected by software and evades traditional means of proving that cheating has occurred.
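To get a sense of what a 1% false positive rate means at Turnitin’s reported scale, consider a minimal back-of-the-envelope sketch. The 200 million figure is Turnitin’s reported volume; the prevalence of AI use and the detector’s catch rate below are illustrative assumptions I am supplying, since the true values are precisely what we do not know:

```python
# Back-of-the-envelope: what does a 1% false positive rate imply
# at the reported scale? The prevalence and catch-rate figures
# below are illustrative assumptions, not measured values.

ASSIGNMENTS = 200_000_000   # assignments checked (Turnitin's reported volume)
FALSE_POSITIVE_RATE = 0.01  # Turnitin's claimed false positive rate

assumed_prevalence = 0.10         # assumed: fraction of assignments using AI
assumed_true_positive_rate = 0.90 # assumed: how often real AI use is caught

ai_assignments = ASSIGNMENTS * assumed_prevalence
honest_assignments = ASSIGNMENTS - ai_assignments

true_flags = ai_assignments * assumed_true_positive_rate
false_flags = honest_assignments * FALSE_POSITIVE_RATE

# Positive predictive value: of all flagged assignments,
# what fraction actually involved AI?
ppv = true_flags / (true_flags + false_flags)

print(f"Honest work falsely flagged: {false_flags:,.0f}")
print(f"Share of flags that are correct: {ppv:.1%}")
```

Under these assumed numbers, roughly 1.8 million honest assignments would be falsely flagged, even though about nine out of ten flags would be correct. Change the assumptions and the picture shifts, which is the point: even a low false positive rate means a large absolute number of falsely accused students, and we cannot compute the real figures without knowing the true prevalence of AI use.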
While I do worry about the use of AI in cheating, I do not think AI will significantly increase cheating, and if the academy has survived older methods of cheating, it will survive this new tool. This is because I think the rate of cheating has been, and will remain, roughly constant. In terms of anecdotal evidence, I have been a philosophy professor since 1993 and have seen a consistent plagiarism rate of about 10%. When AI cheating became available, I did not see a spike in cheating. Instead, I saw AI being used by some students in place of traditional methods of cheating. But I must note that this is only my experience and that it is possible AI-generated papers are slipping past Turnitin. Fortunately, I do not need to rely on my experience alone and can avail myself of the work of experts on cheating.
Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023, the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While cheaters might lie about cheating, Pope and Lee use survey methods designed to address this challenge. So while cheating remains a problem, AI has not increased it; reports of the death of the academy are premature. It will, more likely, die by another hand.
This lack of an increase makes intuitive sense: cheating has always been easy, and the decision to cheat is more a matter of moral and practical judgment than something driven by technology. While technology provides new means of cheating, a student must first be willing to cheat, and that percentage seems stable. But it is worth considering that there might have been a wave of AI cheating but for the efforts made to counter it; to ignore this possibility would be to fall for the prediction fallacy.
It is also worth considering that AI has not lived up to the hype because it is not a great tool for cheating. As Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. To be fair, assignments in higher education can be useless. But if AI is being used to complete useless assignments, this is a problem with the assignments (and the professors) and not AI.
But large language models are a new technology, and their long-term impact on cheating remains to be determined. Things could change in ways that do result in the predicted flood and the doom of the academy as we know it. In closing, while AI cheating will probably not destroy the academy, we should not become complacent. Universities should develop AI policies based on reliable evidence, and a good starting point would be collecting data from faculty about the likely extent of AI cheating.
I like your expressions of optimism, however few they may be. I am trying to take the long view, and the outcome of AI cheating is nearly as unpredictable as chance itself. If, and only if, such cheating becomes widespread, society’s losses would quickly compound. Cheaters who manage to supplant more intelligent scholars could, gradually if not sooner, replace better minds and obtain positions for which they possess inferior competence, making mistakes from which no one, save the cheaters, will profit. Deficits of ethics and integrity would likewise balloon.
This erosion of workforces, potentially worldwide, would adversely affect all sorts of productivity and eventually spark more chaos and confusion than ever before. Many things would go dangerously wrong. On a more positive note, should things not go that far south, this gloomy forecast will prove a false alarm. Some accounts I have read point out that AI-generated products are pretty easily identified…right now. If that remains the reality, my worries about chaos are Chicken Little material only: cheaters will defeat themselves, as they often have and do. Building better mousetraps usually benefits the creativity of mice. Human intelligence is more than equal to the task of surmounting adversity and averting obstacles. An old friend used to say that I thought and worried too much.
I guess he was right about all of that. But, that admitted, there are all kinds of ways to cheat. And people so inclined find new ones every day.