Despite the American myth, upward mobility is limited and most of us will die in the class we were born into. Part of this myth is the often-true story that college helps people move up the economic ladder. My family fits this narrative. My father’s parents did not finish high school, as they had to take jobs in a shoe factory to help support their families. My father finished high school, earned a master’s degree, taught high school for years and, after his first retirement, taught mathematics at the college level. My mother also has an M.A. My sister and I went to college, and I ended up earning my PhD and staying on as a professor. Because of my family story, I support college education for those who want it.

While college has never been cheap, the increase in the cost of higher education has outpaced inflation. The reasons are clear. First, many states have disinvested from public higher education. Some of this is leftover from the last time the financial elites burned down the economy, but most of it is politics. Some of this is ideological: Republicans tend to oppose funding public colleges, preferring to channel money into private profits. There is also the practical reason that weakening public education can push students towards for-profit colleges, which have lobbied both Republicans and Democrats. With less public support, more of the burden falls on students and their families.

Second, there is massive administrative bloat. Some of this bloat is the number of administrators. For example, while there used to be just deans, there are now assistant deans and associate deans. There are also assistant provosts and associate provosts, and an impressive number of vice presidents at many universities.

Some of the bloat is due to burdens imposed by the state, such as assessment and various education laws. Some of it is due to the obsession with remaking colleges into businesses. In addition to having well-compensated executives, schools now have marketing departments that talk about “the brand.” There is also the tendency of bureaucrats to expand their bureaucracy. Currently, schools have entire cadres of administrators with no direct connection to education. Despite, or perhaps because of, the increased number of administrators, more administrative tasks are assigned to faculty. This can require hiring more people to teach as faculty teaching time is devoured by administrative work.

In addition to the ever-increasing number of administrators, there has also been a significant increase in their salaries, especially at the higher levels. University presidents can have salaries close to a million dollars, and bonuses are common. This is also a result of the business model: highly paid “management” ruling over lower-paid “workers.” While administrators make the tired old argument that top money is needed to attract top talent from the private sector, as is usual in business, the same argument rarely applies to faculty and other employees. Presumably this is because faculty are not as important to the mission of the university as administrators.

Third, there is the cost of facilities and amenities. Some of this expense is reasonable: smart classrooms are more expensive than the traditional classroom. Other luxury items mainly serve to drive up costs.

Since college provides a way to go up the ladder or at least get a strong grip on a rung, it is important to address the problem of high costs. While one solution has been to make colleges “free”, this runs into the obvious problem that there is no such thing as free college. “Free” college just shifts the cost. This shift can, however, be morally and economically justified—but the discussion needs to be honest about who is paying.

A less drastic solution is for states to return to investing in education. This was once seen as a good idea, as money spent on students was returned many times over in taxes and had many non-economic positive returns on the investment. Valuing helping people move upwards does run against the current trend, which is to funnel money upwards towards those who already have the most money.

It would also help if the state reduced some of the imposed administrative burden on colleges. While this would have a negative impact on those employed in these administrative offices, it would help reduce the cost of education. The challenge is, however, sorting out which administrative burdens to lessen. Reducing administrative positions and salaries would also help.

The number of administrators could be brought back to the older ratios of administrators to everyone else, and their salaries could be reduced to more closely match those of faculty. While it could be argued that this would cut down on the top talent, there are some obvious responses. One is that education attracts top-talent faculty who are willing to work for relatively low salaries compared to what they could earn in the private sector. While detractors of professors often think that people teach or engage in research at colleges because they are unable to get jobs in the private sector, most faculty chose the academic life. This is for a variety of reasons, ranging from the love of teaching to the difference in culture between the academy and the corporation (although this difference is shrinking). So, if the administrators’ argument that top talent requires top dollar were good, then the relatively low-paid faculty would be terrible, which they are not. Another response is that various scandals and problems have shown what these top dollars sometimes buy.

Finally, schools can also cut their spending on facilities and things that are not relevant to their educational mission. There are, of course, other possibilities but these would be a good start to make college more affordable.

Supporters and critics of AI claim it will be taking our jobs. If true, this suggests that AI could eliminate the need for certain skills. While people do persist in learning obsolete skills for various reasons (such as for a hobby), it is likely that colleges would eventually stop teaching these “eliminated” skills. Colleges would, almost certainly, be able to adapt. For example, if AI replaced only a set of programming skills or a limited number of skills in the medical or legal professions, then degree programs would adjust their courses and curriculum. This sort of adaptation is nothing new in higher education and colleges have been adapting to changes since the beginning of higher education, whether these changes are caused by technology or politics. As examples, universities usually do not teach obsolete programming languages and state schools change their curriculum in response to changes imposed by state legislatures.  

If AI fulfills its promise (or threat) of replacing entire professions, then this could eliminate college programs aimed at educating humans for those professions. Such eliminations would have a significant impact on colleges and could result in the elimination of degrees and perhaps even entire departments. But there is the question of whether AI will be successful enough to eliminate entire professions. While AI might be able to eliminate some programming jobs or legal jobs, it seems unlikely that it will be able to eliminate the professions of computer programmer or lawyer. But it might be able to change these professions so much that colleges are impacted. For example, if AI radically reduces the number of programmers or lawyers needed, then some colleges might be forced to eliminate departments and degrees because there will not be enough students to sustain them.

These scenarios are not mutually exclusive, and AI could eliminate some jobs in a profession without eliminating the entire profession while it also eliminates some professions entirely. While this could have a significant impact on colleges, many of them would survive these changes. Human students would, if they could still afford college in this new AI economy, presumably switch to other majors and professions. If new jobs and professions become available, then colleges could adapt to these, offering new degrees and courses. But if AI, as some fear, eliminates significantly more jobs than it creates, then this would be detrimental to both workers and colleges, as both would become increasingly irrelevant to the economy.

In dystopian sci-fi economic scenarios, AI eliminates so many jobs that most humans are forced to live in poverty while the AI-owning elites live in luxury. If this scenario comes to pass, some elite colleges might continue to exist while most others would be eliminated because of the lack of students. While this scenario is unlikely, history shows that economies can be ruined and hence the dystopian scenario cannot be simply dismissed.

In utopian sci-fi economic scenarios, AI eliminates jobs that people do not want to do while also freeing humans from poverty, hardship, and drudgery. In such a world of abundance, colleges would most likely thrive as people would have the time and opportunity to learn without the pressure of economic necessity. Or perhaps colleges would be largely replaced by personal AI professors.

But it is also worth considering that this utopia might devolve into a dystopia in which humans slide into sloth (as in WALL-E) or are otherwise harmed by having machines do everything for them, which is something Isaac Asimov and other sci-fi writers have considered.

In closing, the most plausible scenario is that AI has been overhyped and while colleges will need to adapt to the technology, they will not be significantly harmed, let alone destroyed. But it is wise to be prepared for what the future might bring because complacency and willful blindness would prove disastrous for the academy.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, people worried that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetence resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will impact these factors, so concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making creating and editing images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills. Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.

Microsoft’s Copilot AI waits, demon-like, for my summons so that it might replace my words with its own. The temptation is great, but I resist and persist in relying on my own skills. But some warn that others will lack my resolve, and the academy will be destroyed by a deluge of cheating.

Those profiting from AI, including those selling software that promises to detect AI cheating, issue dire warnings about the dangers of AI and how it will surpass humans in skills such as writing and taking tests. Because of this, they say, the regulations written by the creators of AI must become law, and academic institutions must subscribe to AI detection tools. And, of course, embrace AI themselves. While AI presents both a promise and a threat, there is the question of whether it will destroy the academy as we know it. The first issue I will address is whether AI cheating will “destroy” the academy.

Students, I suspect, have been cheating since the first test, and plagiarism has presumably existed since the invention of language. Before the internet, plagiarism and detecting plagiarism involved finding physical copies of works. As computers and the internet developed, digital plagiarism and detection evolved with them. For example, many faculty use Turnitin, which can detect plagiarism. It seemed that students might be losing the plagiarism arms race, but many worried that easy access to AI would turn the battle in favor of the cheating students. After all, AI makes cheating easy, affordable, and harder to detect. For example, large language models allow “plagiarism on demand” by generating new text with each prompt. As I write this, Microsoft has made Copilot part of its Office subscription, and as many colleges provide the Office programs to their students, they are handing students tools for cheating. But has AI caused the predicted flood of cheating?

Determining how many students are cheating is like determining how many people are committing crimes: you only know how many have been caught or have admitted to it. You do not know how many are doing it undetected. Because of this, inferences about how many students are cheating need to be made with caution to avoid the fallacy of overconfident inference from unknown statistics.

One source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 assignments while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate. Turnitin claims it has a false positive rate of 1%. But we need to worry about AI detection software generating false positives and false negatives.

For false positives, one concern is that “GPT detectors are biased against non-native English writers.” For false negatives, the worry is that AI detectors can be fooled. As the algorithms used in proprietary detection software are kept secret, we do not know what biases and defects they might have. For educators, the “nightmare” scenario is AI-generated work that cannot be detected by software and evades traditional means of proving that cheating has occurred.
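To see why even a small false-positive rate matters, here is a rough Bayes-style sketch. The 1% false-positive rate is Turnitin’s own claim cited above; the true rate of AI-written submissions and the detection rate are hypothetical assumptions chosen only for illustration, not real data:

```python
# Back-of-the-envelope check on how trustworthy an "AI" flag is.
# Turnitin's claimed figure (cited above):
false_positive_rate = 0.01   # honest work wrongly flagged as AI

# Hypothetical assumptions (NOT real data):
true_ai_rate = 0.10          # fraction of submissions actually AI-written
detection_rate = 0.90        # AI-written work correctly flagged

# Overall share of submissions that get flagged, honest or not.
flagged = true_ai_rate * detection_rate + (1 - true_ai_rate) * false_positive_rate

# Of the flagged papers, how many are really AI-written?
precision = (true_ai_rate * detection_rate) / flagged

print(f"Share of submissions flagged: {flagged:.1%}")        # 9.9%
print(f"Chance a flagged paper is really AI: {precision:.1%}")  # 90.9%
```

Even under these generous assumptions, roughly one flagged paper in eleven would be honest work, and at the scale of 200 million assignments that is a very large number of false accusations.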

While I do worry about the use of AI in cheating, I do not think that AI will significantly increase cheating; if the academy has survived older methods of cheating, it will survive this new tool. This is because I think that the rate of cheating has been, and will remain, roughly constant. In terms of my anecdotal evidence, I have been a philosophy professor since 1993 and have seen a consistent plagiarism rate of about 10%. When AI cheating became available, I did not see a spike in cheating. Instead, I saw AI being used by some students in place of traditional methods of cheating. But I must note that this is my experience and that it is possible that AI-generated papers are slipping past Turnitin. Fortunately, I do not need to rely on my experience and can avail myself of the work of experts on cheating.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While cheaters might lie about cheating, Pope and Lee use methods to address this challenge. While cheating remains a problem, AI has not increased it and hence reports of the death of the academy are premature. It will, more likely, die by another hand.

This lack of increase makes intuitive sense, as cheating has always been easy and the decision to cheat is more a matter of moral and practical judgment than something driven by technology. While technology provides new means of cheating, a student must be willing to cheat, and that percentage seems stable. But it is worth considering that there might have been a wave of AI cheating but for the efforts made to counter it; to not consider this possibility would be to fall for the prediction fallacy.

It is also worth considering that AI has not lived up to the hype because it is not a great tool for cheating. As Arvind Narayanan and Sayash Kapoor have argued, AI is most useful at doing useless things. To be fair, assignments in higher education can be useless. But if AI is being used to complete useless assignments, this is a problem with the assignments (and the professors) and not AI.

But large language models are a new technology, and their long-term impact on cheating remains to be determined. Things could change in ways that do result in the predicted flood and the doom of the academy as we know it. In closing, while AI cheating will probably not destroy the academy, we should not become complacent. Universities should develop AI policies based on reliable evidence. A good starting point would be collecting data from faculty about the likely extent of AI cheating.

While the ideals of higher education are often presented as being above the concerns of mere money, there is nothing inherently wrong with for-profit colleges. Unless, of course, there is something inherently wrong with for-profit businesses in general. So, it should not be assumed that a for-profit college must be bad, ripping students off, or providing useless degrees. That said, the poor reputation of for-profit colleges is well earned.

One tempting argument against for-profit colleges is that by being for-profit they must always charge students more or offer less than comparable non-profit colleges. After all, as the argument could go, a for-profit college would need to do all that a non-profit does and still make a profit on top of that. This would need to be done by charging more or offering less for the same money. However, this need not be the case.

Non-profit and public colleges are now often top-heavy in terms of administrators and administrative salaries. They also spend lavishly on amenities, sports teams and such. These “extras” are all things that a well-run for-profit college could cut while still offering the core service of a college, namely education. For students who do not want the extras or who would rather not help fund the administrators, this can be a win-win scenario: the student gets the education they want for less than they would pay elsewhere, and the college’s owners profit by being efficient. This is the dreamworld ideal of capitalism.

Sadly, the actual world is usually a nightmare: for-profit schools often turn out as one would expect, predatory and terrible. One reason for this is that they are focused on making as much profit as possible, and this consistently leads to the usual bad behavior endemic to the for-profit approach. While regulation is supposed to keep the bad behavior in check, in the last Trump administration Betsy DeVos curtailed oversight of these colleges. As a specific example, her department stopped cooperating with New Jersey on the fraudulent activities of for-profit colleges. Trump’s second administration is likely to be even more permissive. If the state neglects to check bad behavior, then people are limited only by their own values, and it is generally a bad idea to leave important matters up to conscience alone. For example, it would be foolish for the state to hand out welfare by trusting everyone and never verifying their claims. Likewise, it would be foolish to allow for-profit colleges to do as they wish without proper oversight.

As should be expected, I have been against the terrible for-profit colleges. I also extend my opposition to terrible non-profits and terrible public colleges: what I am against is the terrible part, not the profit part. As with much bad behavior that harms others, the most plausible solution is to have and enforce laws against that bad behavior. Conservatives who are concerned about welfare fraud are not content to rely on the conscience of the recipients nor are they willing to simply allow an invisible hand to ensure that things work out properly. They, obviously enough, favor the creation and enforcement of laws to prevent people from committing this fraud. By parity of reasoning, for-profit colleges cannot be expected to operate virtuously with only the conscience of their owners as their guide. The invisible hand cannot be counted on to ensure that they do not engage in fraud and other misdeeds. What is needed, obviously enough, is the enforcement of the laws designed to protect taxpayers and students from being defrauded by the unscrupulous.

It could be argued that while the invisible hand and conscience cannot work in the case of, for example, welfare cheats, they work in the context of business. In the case of for-profit schools, one might argue they will fail if they do not behave, and the free market will sort things out. The easy and obvious reply is to agree that the bad colleges do fail, the problem is that they do a lot of damage to the students and taxpayers in the process. This is a bit like arguing that society does not need laws, since eventually vigilantes might take care of thieves and murderers. As Hobbes noted, the state of nature does not work terribly well.

This is not to say that I believe for-profits should be strangled by bureaucracy. Rather, the laws and enforcement need to focus on preventing harm like fraud. If a business model cannot succeed without including fraud and other misdeeds, then there is clearly a problem with that model.


While assessment is now embedded into the body of education, when it first appeared I thought it would be another fading academic fad. A modified version of the classic insult against teachers sprung to mind: “those who can, do; those who can’t do, teach; those who can’t teach, assess.” In those early days, most professors saw assessment as a scam: assessment “experts” got well-paying positions or consulting gigs and then dumped the tedious work on professors. Wily professors responded by making up assessment data and found no difference between the effectiveness of their fictional data and real data. This was because both were ineffective. I, like many professors, found myself in a brave new world of assessment.

I eventually got dragged into assessment. At the start, I did the assessment paperwork for the Philosophy & Religion unit at my university. In 2004 I was given an eternal assignment to the General Education Assessment Committee (GEAC) and then made a co-chair. This resulted in me being on all the assessment committees. As such, I now have over 20 years of assessment experience.

On the one hand, I retain much of my old skepticism of assessment. Some of it still seems to be a scam and other aspects a waste of time. There is money to be made in this area, money that is taken from other areas of education. Assessment also takes faculty time that could be used for teaching or research. There are also good questions about the effectiveness of assessment, even when it is done sincerely.

On the other hand, my reading of Aristotle and experience shows there is some merit in properly done assessment. The good and proper purpose of assessment is to evaluate the effectiveness of education. This is reasonable—as Aristotle noted in his Nicomachean Ethics, if one aims to become a morally good person, one needs an index of progress. In the case of virtue, Aristotle used pain and pleasure as his measure: if you feel increasing pleasure at doing good and increasing pain at doing wrong, then you are making progress. This indirect measure (to use an assessment term) enables one to assess moral progress. In the case of education, there must also be assessment. Otherwise you don’t know how well you are doing in your role as an educator.

One mantra among the assessment elite is “grades are not assessment.” While this has been challenged, it remains a common belief. To be fair, there is some truth to this. One concern is that grades can include factors irrelevant to assessing the quality of work. Professors sometimes give extra credit that is not based on merit. Factors such as attendance and participation can go into grades. For example, my students can get +5 points added on to a paper grade if they turn the paper in by the +5-bonus deadline. If I used the extra credit grade for assessment, it would not be accurate. However, it is easy to adjust grades so that they serve a legitimate role in assessment. For example, knowing that the +5 bonus papers have a +5 bonus allows me to assess them using the grades by subtracting 5 points. I, of course, assess the papers using rubrics, if only to avoid getting a lecture on why grades are not assessment.

Another concern is that professors can be inconsistent in their grading. For example, the way I grade papers is different from my colleagues because I am a different person with different experiences. A paper I grade as an 84 might be graded as a 79 or even a 90 by a colleague. Part of this can be due to a professor being a harder or easier grader; part of it can be due to different standards. While this is a concern, the same problem applies to “non-grade” assessment. Different assessors will be harder or easier in their assessment. While having a standard rubric can help offset this, the subjectivity remains whether you call it a grade or an assessment. Another approach is to have several faculty assess the same class work. While a good idea, schools rarely compensate faculty for this extra work and assessing the work of multiple classes would be a part-time job by itself.

There are also concerns that some faculty are bad at properly grading work and hence their grades are not legitimate assessments. While it is true that some faculty are bad at grading, this is not a problem with grading but a problem with the faculty. Addressing the shortcoming would fix two problems: bad grades and bad assessment. There is also the fact that people can be just as bad at assessment, especially when people are assigned to assess work outside of their field. For example, an English professor might be asked to assess philosophy papers for critical thinking, or an engineering professor might be asked to review biology lab reports for written communication.

In closing, assessment can be ineffective and a waste of resources. But it seems to be a fixed feature in education, although the support and enthusiasm for it seems to be fading. In my adopted state of Florida, the Republican legislature is far more concerned with ideology in education and ensuring that faculty are compelled to teach the right content and forbidden to bring up taboo subjects.

Following their “good guy with a gun” mantra, Republicans often respond to school shootings with proposals to arm teachers. While there is some public support for these proposals, most Americans are not enamored of the idea. Teachers, with some exceptions, tend to oppose these proposals. As a necessary disclaimer, I’ve been shooting since I could hold a gun and shoot it safely.

While people line up on this issue based on their ideology, it should be given an objective evaluation in terms of practicality and morality.

From a practical standpoint, the question is whether arming teachers would make students safer. Under this broad consideration fall other practical concerns. For example, an obvious concern is whether an average teacher would be able to engage and defeat a shooter with a reasonable chance of success and survival. School shooters tend to be inexperienced and untrained, and a teacher with some training would probably be as skilled as the typical shooter. But school shooters tend to use assault rifles, and this gives them a firepower advantage in terms of range, accuracy, damage and magazine size. This assumes that teachers would be armed with pistols. But, some would argue, a pistol is still better than being unarmed.

So, an armed teacher would be objectively better than an unarmed teacher when engaging a shooter. But the engagement would not be like a shootout in a Western, with gunslingers facing each other in an empty street. The engagement would probably take place with students in the area, making it possible that a teacher will miss the shooter and hit students. Even trained professionals often miss pistol shots in an active engagement and a teacher with just basic firearm training will miss more often. This leads to the practical and moral question of whether this engagement would make students safer than not arming teachers. The practical matter is an empirical question: would an armed teacher reduce casualties by either taking out the shooter or keeping their attention and allowing more people to escape? Or would they do more harm by wounding and killing students with missed shots? If teachers are armed, we will be able to collect data on this.

The moral concern is best put in utilitarian terms: if there is a reduction in deaths due to armed teacher intervention, would this outweigh unintended injuries and deaths caused by the teacher? On the face of it, a utilitarian calculation would find the action morally good, provided that the teacher’s actions saved more students than if they had not been armed. However, there is the moral concern about the possibility of teachers unintentionally killing or wounding students. But engaging a shooter would seem to be the right thing to do, even if there are unintentional casualties.

If concerns were limited to the engagement, then this matter would be settled. However, there are obvious worries about what harms might arise from having armed teachers in schools. Their guns will not magically appear in their hands when needed, nor can the guns be safely locked away for use only during an attack. The teachers would need to be carrying their guns all the time. This leads to a host of practical and moral problems.

One problem is accidental discharge. While not common, people do accidentally fire concealed weapons while, for example, digging in their purse for their phone. The risk of accidental death and injury needs to be weighed against the effectiveness of armed teachers. Since each gun is a risk every minute it is present, it is not unreasonable to think that the risk of having armed teachers outweighs the risk of not having armed teachers to respond to a shooter.

Another concern is someone taking a teacher’s gun, such as a student grabbing a gun when a teacher is trying to break up a fight. 23% of shootings in hospitals involve guns taken from security officers; the same problem would apply to schools. This must also be factored in when assessing the moral and practical aspects of the matter. It would be ironic and awful if a school shooter used a gun taken from a teacher.

There is also the worry an armed teacher will be mistaken for a shooter when the police arrive. In the confusion of an engagement, the police will need to instantly distinguish the good guys with guns from the bad guys with guns. Armed teachers run the risk of being shot by the police or other armed teachers who see the gun but do not recognize their colleague in the heat of the crisis.

One concern that some will see as controversial is the worry that arming teachers will put black and Latino students at greater risk. Because black and Latino students already tend to be treated worse than white students, they will be at greater risk of being shot by teachers. This concern is often coupled with worries about stand-your-ground laws that allow people to use deadly force when they feel threatened. This concern does extend to white students as well; an armed teacher might feel threatened by a white student and pull their gun. It would be terrible and ironic if armed teachers ended up killing students rather than protecting them. While most teachers, like most people, are not inclined towards murder, the possibility of students being wounded or killed by armed teachers must be considered.

Assessing the morality and practicality of arming teachers requires weighing the risks of arming teachers against the benefits of doing so. Based on the above discussion, one advantage of arming teachers is that they will have a somewhat better chance of stopping or slowing down a shooter. Weighed against this are the many disadvantages noted above—disadvantages that include the possibility of teachers and students being wounded or killed by armed teachers.

One rational, but cold, way to approach this matter is to weigh the odds of a school shooting against the odds of people being harmed by arming teachers. While exact calculations of odds are problematic, the odds of a shooting incident in any K-12 school in a given year in the United States have been estimated at 1 in 53,925. For high schools, the odds are 1 in 21,000; for elementary schools, 1 in 141,463. While these calculations can be questioned, school shootings are statistically rare given the number of schools and the number of students. This does not diminish the awfulness of shootings. But, when coldly weighing the risks of arming teachers, it is critical.

This is because arming teachers would be a good idea (practically and morally) only if the benefits outweighed the harms. Determining this requires estimating the odds of a shooting, the odds an armed teacher will stop it, and the odds of the various harms of arming teachers occurring. If a reasonable calculation shows that arming teachers would create more good than bad, then arming teachers would be a good idea. If not, it would be a bad idea.

Perhaps this cold calculation might be countered by an emotional appeal, such as “if only one student is saved by an armed teacher, it would be worth it.” To this, there are two replies. One is that good policy is not determined by emotional appeals but by rational assessment of the facts. The second is a counter-appeal to emotion: “would it still be worth it if one student died because of armed teachers? Or two? Or ten?” My view is that arming teachers, given the odds, is a bad idea. However, I am open to evidence and arguments in favor of arming teachers.
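The “cold calculation” described above can be sketched as a simple expected-value comparison. Only the 1-in-53,925 shooting odds come from the estimates quoted in the text; every other number below is a hypothetical placeholder for illustration, not a real estimate.

```python
# Illustrative expected-value comparison for the "cold calculation."
# Only the 1-in-53,925 shooting odds come from the text; every other
# number is a HYPOTHETICAL placeholder, not a real estimate.

p_shooting = 1 / 53_925          # estimated yearly odds of a shooting at a K-12 school
p_teacher_stops = 0.25           # HYPOTHETICAL: chance an armed teacher reduces casualties
lives_saved_if_stopped = 3.0     # HYPOTHETICAL: average lives saved when they do

p_accident = 1 / 100_000         # HYPOTHETICAL: yearly odds per school of a serious
harm_per_accident = 1.0          #   accidental discharge, and average casualties from one

expected_benefit = p_shooting * p_teacher_stops * lives_saved_if_stopped
expected_harm = p_accident * harm_per_accident

print(f"Expected lives saved per school-year: {expected_benefit:.2e}")
print(f"Expected casualties per school-year:  {expected_harm:.2e}")
```

With these placeholder numbers the two quantities come out at the same order of magnitude, which illustrates the essay’s point: the verdict turns entirely on probability estimates for which we do not yet have good data.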

 

While Florida Republicans falsely proclaim that Florida is a free state, the legislature and governor are hard at work to limit freedoms they dislike. One costly example of this is a potential $15.6 million contract with Maryland-based Trinity Education Group to create a centralized system for reviewing and objecting to instructional materials and books in Florida’s public schools. In higher education, where I work, the state is engaged in an ongoing review of course syllabi and books to ensure conformity with the official ideology and indoctrination goals set by the legislature.

As of this writing, Florida has redistributed $3 million in taxpayer money to Trinity. Given that Florida’s teacher pay is last in the United States, a strong case could be made that the money should have gone to Florida teachers rather than to enrich a Maryland CEO. Florida schools, like most American schools, are also chronically underfunded and if the goal is to improve education, then it would make more sense to spend the money addressing this issue. Given these facts, it might be wondered what Florida is supposed to get in return for these millions and why this is so critical that it must be funded at the expense of educating children.

As might be guessed, this spending is part of Florida’s war on critical race theory, DEI, and woke. There are two reasons being presented as to why this system is necessary. The first is the claim by Sydney Booker that, “The Department firmly believes that parents have the fundamental right to know what materials their child is accessing at school.” This view is eminently reasonable, and it is difficult to imagine that anyone would object to such a right. But the obvious question is why the state would need to enrich a Maryland CEO for parents to know what school materials their kid is accessing. While it would take a tiny bit of effort, a parent could ask their kid what they are accessing, look at the syllabi, talk to teachers, and take a few minutes to look through the catalog of the school library. That is, there are free and easy ways for a parent to quickly find out what material their kid is accessing. So why is this system needed? This takes us to the second reason.

According to the state, this multimillion-dollar system will ensure the public can access the same information, since “districts are currently making the materials accessible in various formats and platforms.” While this is superficially appealing, a moment’s reflection destroys this justification. Unless a parent has numerous children spread over several school districts, they will only need information from one school district. As such, they only need to be concerned with the one format and one platform used by that district. This reasoning is like justifying spending millions on a statewide database listing what classes each student is taking so that parents can check to see what classes their kids are taking. This would be absurd, as is the wasteful plan for the central system of course materials. This leads to the question of the system’s actual purpose.

As noted above, its first purpose is to fulfill a central goal of Republican education strategy: redistributing public education funds into CEO compensation and private profit. The second goal, which is obvious from the “justification” given for a centralized system, is to provide a centralized system that enables a few actors to challenge books across the state. Without a centralized system, a person interested in censoring school material would need to put in more effort to determine what every school might be offering, as opposed to a parent’s legitimate concern with what their kid’s school is offering. This system is clearly designed to facilitate people like Friedman (a man responsible for over 30% of Florida’s book challenges) whose goal is banning books that do not match their value system. The state is thus sending up to $15.6 million to a Maryland corporation to make it easier for a few people in Florida to ban books and course material. Whatever one’s political ideology, this should seem like a terrible waste of taxpayer money.

If you are wondering how this got approved, the answer seems to be duplicity. The department told an administrative law judge that the rules implementing the school library statute wouldn’t have regulatory costs. The state then entered the contract with Trinity, which would seem to prove that there were regulatory costs. In response to questions about this, the department replied with a clever bit of sophism: “A statute that results in costs to either the district or to the state is not synonymous with regulatory costs of a rule.” This is like someone getting you to go to a restaurant by saying “it’s free to go with me” and then being hit with a huge bill that is defended by the person saying, “it being free to go with me is not synonymous with getting a free lunch.” You would be right in thinking they had misled you.

In closing, this system seems to serve three awful purposes. The first is to deplete education funding. The second is to redistribute public funds to a Maryland CEO. It is not even enriching one of our own Florida CEOs. The third is to create a system that makes censorship easier for a very few people. But all this lines up with the Republican approach to education, and it is working as intended.

 

Students and employers often complain that college does not prepare students for the real world of jobs, and this complaint has some merit. But what is the real world of jobs like for most workers? Professor David Graeber got considerable media attention when he published his book Bullshit Jobs: A Theory. He claims that millions of people are working jobs they know are meaningless and unnecessary. Researcher Simon Walo decided to test Graeber’s theory and found that his investigation supported Graeber’s view. While Graeber’s view can be debated, it is reasonable to believe that some jobs are BS all the time and all jobs are BS some of the time. Thus, if educators are to prepare students for working in the real world, they must prepare them for the BS of the workplace. AI can prove useful here.

In an optimistic sci-fi view of the future, AI exists to relieve humans of the dreadful four Ds of bad jobs: the Dangerous, the Degrading, the Dirty, and the Dull. In a bright future, general AI would assist, but not replace, humans in creative and scientific endeavors. In dystopian sci-fi views of the AI future, AI enslaves or exterminates humanity. In dystopia lite, a few humans use AI to make life worse for many humans, such as by replacing humans with AI in good and rewarding jobs. Much of the effort in AI development seems aimed at making this a reality.

As an example, it is feared that AI will put writers and artists out of work, so when the Hollywood writers went on strike, they wanted protection from being replaced by AI. They succeeded in this goal, but there remains a reasonable question about how great the threat of AI is in terms of its being able to replace humans in jobs humans want to do. Fortunately for humans doing creative and meaningful work, AI is not very good at these tasks. As Arvind Narayanan and Sayash Kapoor have argued, AI of this sort seems to be most useful at doing useless things. But this can be useful for workers and educators should train students to use AI to do these useless things. This might seem a bit crazy but makes perfect sense in our economic reality.

Some jobs are useless, and all jobs have useless tasks. Although his view can be challenged, Graeber came up with three categories of useless jobs. His “flunkies” category consists of people paid to make the rich and important look more rich and more important. This can be expanded to include all decorative minions. “Goons” are people filling positions existing only because a competitor company created similar jobs. Finally, there are the “box tickers,” a category that can be refined to cover jobs that workers see as useless and whose output’s absence would have no meaningful effect on the world.

It must be noted that what is perceived as useless is a matter of values and will vary between persons and in different contexts. To use a silly example, imagine the Florida state legislature mandated that all state universities send in a monthly report in the form of a haiku. Each month, someone will need to create and email the haiku. This task seems useless. But imagine that if a school fails to comply, they lose $1 million in funding. This makes the task useful for the school as a means of protecting their funding. Fortunately, AI can easily complete this useless useful task.

As a serious example, suppose a worker must write reports for management based on bullet points given in presentations. Management, of course, never reads the reports; they are thus useless but required by company policy. While the seemingly rational solution is to eliminate the reports, that is not how bureaucracies usually operate in the “real world.” Fortunately, AI can make the worker’s task easier: they can use AI to transform the bullet points into a report and use the saved time for more meaningful tasks (or viewing social media). Management can also use AI to summarize the report back into bullet points. But what should educators do with AI in their classrooms in the context of useless tasks and jobs?

While this will need to vary from class to class, relevant educators should consider a general overview of jobs and task categories in terms of usefulness and the ability of AI to do these jobs and tasks. Faculty could then identify the likely useless jobs and useless tasks their students will probably do in the real world. They can then consider how these tasks can be done using AI. This will allow them to create lessons and assignments to give students the skills to use AI to complete useless tasks quickly and with minimal effort. This can allow workers to spend more time on useful work, assuming their jobs have any such tasks.

In closing, my focus has been on using AI for useless tasks. Teaching students to use AI for useful tasks is another subject entirely and while not covered here is certainly worthy of consideration. And here is an AI generated haiku:

 

Eighty percent rise

FAMU students excel

In their learning’s light

 

When ChatGPT and its competitors became available to students, some warned of an AI apocalypse in education. This fear mirrored the broader worries about the over-hyped dangers of AI. This is not to deny that AI presents challenges and dangers, but we need to have a realistic view of the threats and promises so that rational policies and practices can be implemented.

As a professor and the chair of the General Education Assessment Committee at Florida A&M University, I assess the work of my students, and I am involved with the broader task of assessing general education. In both cases a key challenge is determining how much of the work turned in by students is their own work. After all, we want to know how our students are performing, not how AI or some unknown writer is performing.

While students have been cheating since the advent of education, it was feared AI would cause a cheating tsunami. This worry seemed sensible since AI makes cheating easier, free, and harder to detect. Large language models allow “plagiarism on demand” by generating new text each time. With the development of software such as Turnitin, detecting traditional plagiarism became automated and fast. These tools also identify the sources used in plagiarism, providing professors with reliable evidence. But large language models defeat this method of detection, since they generate original text. Ironically, some faculty now see a 0% plagiarism score on Turnitin as a possible red flag. But has an AI cheating tsunami washed over education?

Determining how many students are cheating is like determining how many people are committing crimes: one only knows how many people have been caught, not how many people are doing it. Because of this, caution must be exercised when drawing a conclusion about the extent of cheating; otherwise, one runs the risk of falling victim to the fallacy of overconfident inference from unknown statistics.

In the case of AI cheating in education, one source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 assignments while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate.

Turnitin claims it has a false positive rate of 1%. In addition to Turnitin, there are other AI detection services that have been evaluated, with the worst having an accuracy of 38% and the best claiming 90% accuracy. But there are two major problems with the accuracy of existing AI detection software.
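To see why even a small false positive rate matters at this scale, consider a rough base-rate sketch. The 200 million assignments and the claimed 1% false positive rate come from the figures above; treating that rate as applying uniformly per assignment is a simplifying assumption for illustration.

```python
# Base-rate sketch: what a 1% false positive rate means at scale.
# Figures from the text: 200 million assignments checked, 1% claimed
# false positive rate. Treating the rate as applying uniformly per
# assignment is a simplifying assumption.

assignments = 200_000_000
false_positive_rate = 0.01

# Worst case: if every assignment were honest, up to this many
# students could still be wrongly flagged for AI use.
max_false_flags = int(assignments * false_positive_rate)
print(f"Potential false flags: {max_false_flags:,}")  # prints "Potential false flags: 2,000,000"
```

Even under the vendor’s own claimed accuracy, millions of honest assignments could be flagged, which is why the guidelines discussed below matter.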

The first is that, as the title of a recent paper notes, “GPT detectors are biased against non-native English writers.” As the authors noted, while AI detectors are nearly perfectly accurate in evaluating essays by U.S.-born eighth-graders, they misclassified 61.22% of TOEFL essays written by non-native English students. All seven of the tested detectors incorrectly flagged 18 of the 91 TOEFL essays, and 89 of the 91 essays (97%) were flagged by at least one detector.

The second is that AI detectors can be fooled. The current detectors usually work by evaluating perplexity as a metric. Perplexity, a measure of such factors as lexical diversity and grammatical complexity, can be increased in AI-generated text through simple prompt engineering. For example, a student could prompt ChatGPT to rewrite the text using more literary language. There is also a concern that the algorithms used in proprietary detection software will be kept secret, so it will be difficult to determine what biases and defects they might have.

Because of these problems, educators should be cautious when using such software to evaluate student work. This is especially true in cases in which a student is assigned a failing grade or even accused of academic misconduct because they are suspected of using AI. In the case of traditional cheating, a professor could have clear evidence in the form of copied text. In the case of AI detection, the professor only has the evaluation of software whose inner workings are most likely not available for examination and whose true accuracy remains unknown. Because of this, educational institutions need to develop rational guidelines for best practices when using AI detection software. But the question remains as to how likely it is that students will engage in cheating now that ChatGPT and its ilk are readily available.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While there is the concern that cheaters would lie about cheating, Pope and Lee use anonymous surveys and take care in designing the survey questions. While cheating remains a problem, AI has not increased it, and the feared tsunami seems to have died far offshore.

This does make sense in that cheating has always been relatively easy, and the decision to cheat is more a matter of moral and practical judgment than of available technology. While technology can provide new means of cheating, a student must still be willing to cheat, and that percentage seems to be relatively stable in the face of changing technology. That said, large language models are a new technology, and their long-term impact on cheating is something that remains to be determined. But, so far, the doomsayers’ predictions have not come true. Fairness requires acknowledging that this might be because educators took effective action to prevent this; it would be poor reasoning to fall for the prediction fallacy.

As a final point of discussion, it is worth considering that perhaps AI has not resulted in a surge in cheating because it is not a great tool for this. As Arvind Narayanan and Sayash Kapoor have argued, AI seems to be most useful at doing useless things. To be fair, assignments in higher education can be useless things of the type AI is good at doing. But if AI is being used to complete useless assignments, then this is a problem with the assignments (and the professors) and not AI.

In closing, there is also the concern that AI will get better at cheating, or that as students grow up with AI, they will be more inclined to use it to cheat. And, of course, it is worth considering whether such use should be considered cheating at all, or whether it is time to retire some types of assignments and change our approach to education as, for example, we did when calculators were accepted.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)