A well-established rhetorical tactic is to falsely accuse someone (such as transpeople) or something (such as television) of being a danger to children. Because humans tend to feel very protective of children, this accusation can easily create feelings of anger and fear that override rational assessment. Such accusations are even more effective when the target is someone or something the intended audience already fears or dislikes. For example, homosexuals have long been accused of being pedophiles.

One recent development has been the Republican strategy of accusing their opponents of pedophilia, typically by calling them groomers. Transpeople and drag queens are among the most recent targets, despite the lack of evidence that they are any more likely to be pedophiles than anyone else.

My adopted state of Florida has been on the frontlines of the culture wars, attacking LGBT+ people and going after anything labeled as “woke,” “DEI” or “critical race theory.” As would be expected, this has involved false claims about pedophiles and groomers. For example, it was claimed that books referring to LGBT+ people were corrupting or being used to groom children. But who is endangering the children?

It should not be surprising that marginalized people are not likely to be pedophiles or groomers. After all, pedophiles and groomers often rely on positions of power or authority to gain access to their victims and shield themselves from both suspicion and consequences. The Catholic Church provides an excellent illustration of this and has an entire Wikipedia page on its sexual abuse cases. As a single example, in 2024 the Archdiocese of Los Angeles agreed to pay $880 million to settle sex abuse claims, bringing the current total to over $1.5 billion. Sexual abuse of minors is also a problem in other churches. For example, male pastors from Texas are at least eight times more likely to sexually assault minors than drag queens are. While people might be inclined to doubt these claims, they can be confirmed. For example, a skeptic can investigate the Los Angeles settlement and review the documentation. One can also scour police reports nationwide for examples of transpeople and drag queens who are pedophiles or groomers, but this will turn up little or nothing.

My adopted state of Florida, which seems to be pushing towards theocratic authoritarianism, passed a law opening schools to volunteer chaplains. In response, the Satanic Temple announced its intention to send in volunteer chaplains, which seems to have paused the plan until it can be reworked to allow only Christian chaplains into schools. Those who rushed to “protect the children” from transpeople, drag queens and books should have opposed this proposal. After all, if the effectively nonexistent threat presented by transpeople and drag queens warrants such action, then the threat presented by chaplains should be terrifying to them. I am not claiming that chaplains are likely to be pedophiles, just that statistically they are much more likely to be pedophiles than a transperson or drag queen is. Given the statistical data, it makes more sense to ban chaplains from schools than to go after transpeople, drag queens or books. I must note that I am friends with ministers, chaplains and other religious leaders who are good people. As such, my point is not that we should demonize religious leaders as pedophiles and groomers but that we should not demonize people such as transpeople, drag queens and Democrats. In addition to churches, pedophiles can also be found in police departments.

The Washington Post conducted an analysis of Bowling Green State University’s Henry A. Wallace Police Crime Database. To be clear, this is a database of crimes committed by police. This database is, as one would expect, incomplete because it relies on reported arrests and hence is certain to underreport police crimes. From 2005 to 2022, 17,700 state and local officers were identified as having been charged with crimes. About 1,800 of these officers were charged with a crime involving child sexual abuse. Matching the grooming narrative conservatives apply to marginalized groups, the officers typically spent months befriending and grooming the children before sexually abusing them. As many of these officers were convicted, the court documents can usually be found online, and the Washington Post’s claims can be independently confirmed. Unless, of course, one believes that the police and legal system are themselves involved in a conspiracy to falsely claim that some police have sexually abused children.

Given that Republicans are the self-proclaimed protectors of children and enemies of pedophiles, it might be wondered why they are focusing their efforts against people who are not likely to be pedophiles while seeming to ignore the sexual predators known to be in police departments. As the Washington Post investigation found, while schools, churches and youth programs usually make a special effort to address the risk of child sexual abuse, police departments generally do not. It was found that police departments hired people despite their having been accused or convicted of child abuse and other serious crimes. It was also found that officers could resign after being accused of inappropriate behavior with children and move to another department. In some cases, accused officers were reinstated and then eventually convicted of abusing children. Police departments also often fail to notice or act on evidence of inappropriate behavior. I must also note that there are many excellent police officers, and I am glad to know some of them through running, gaming, and my recent experiences as a juror.

If Republicans really cared about protecting the children, they would do something to address this. After all, they have adequate energy to fight imaginary dangers in the culture war. Why not use that to address real threats? While this obviously requires speculation, there seem to be two likely reasons the Republican party does not actually care about protecting the children from real pedophiles and groomers.

The first is that the party is focused on using children as tools for political advantage. Accusing institutionalized authorities such as churches and the police of doing what they are in fact doing would not be politically advantageous. But demonizing their political opponents and marginalized people is advantageous.

The second is that the right seems to accept or even praise “transgressions” by the right type of authority figures. While this sounds horrific, what matters to them is not whether someone actually is a groomer or pedophile but who is being accused. In line with the claim that “every accusation is a confession,” the right-wing media has an established record of defending actual pedophiles and sexualizing children. As this is being written, Matt Gaetz is likely to become Attorney General, despite the existence of credible accusations that he had sex with an underage girl.

In closing, while the right professes to be concerned with protecting children from pedophiles and groomers, the evidence shows this is not true. They are, for the most part, focused on falsely accusing their opponents and marginalized groups while largely ignoring the real danger to the children.

 

While homophobia can still be exploited for political advantage, it is not as effective as it used to be. In response to its diminished political value, the American right embraced transphobia. Before this, most Americans gave little thought to such matters as transpeople competing in sports and bathroom bills. Now that Trump has reclaimed the White House, it is certain that transphobia will continue to be pushed onto the American people and exploited for political advantage. This will give rise to various moral questions, centered on sports and bathrooms. The right seems extremely interested in bathrooms and who is using them. While one could simply assume that their views are, at best, based on ill-founded but sincere fears, there is the moral question of whether transpeople should be allowed to choose their bathrooms.

One way to approach this moral issue is to consider the matter in utilitarian terms. This requires weighing the likely harms and benefits of allowing or denying this choice. If our overall goal in politics is to serve the good of the people, this is a reasonable approach. It also does not beg any questions as it requires an honest evaluation of the harms and benefits to everyone.

A utilitarian assessment would favor bathroom choice. This is because the two most used arguments against bathroom choice are defeated by the facts. One argument is that allowing bathroom choice would put people in danger. This argument has considerable rhetorical power that derives from fear. After all, the idea that women and girls will be attacked in bathrooms is frightening. But does it have any logical force? Because some states allow bathroom choice, we have data about the danger. Currently, the evidence shows that there is no meaningful danger. As some wits enjoy pointing out, more Republican lawmakers have been arrested for bathroom misconduct than transgender people. As such, those worried about misdeeds in bathrooms should be focusing on the threat presented by Republican lawmakers. The other argument is the privacy argument, which falls apart under analysis.

While some might advance these arguments in good faith, there are those who oppose bathroom choice because they dislike transgender people and are really making the “transgender people are icky” argument. This “argument” has no merit on the face of it, which is why it is not advanced as a reason by opponents of bathroom choice. Instead, as noted above, politicians and pundits rely on false claims about danger, using the psychological force of these lies in place of the logical force of evidence. But can a good argument be made for restricting bathroom choice?

A stock problem with utilitarian arguments is they can be used to justify violating rights. This occurs in cases where the benefits received by a numerical majority come at the expense of harm done to a numerical minority. However, it can also arise in cases where the greater benefits to a numerical minority outweigh the lesser harms to a numerical majority. In the case at hand, those opposed to bathroom choice could argue that even if bathroom choice benefits transgender people far more than it harms people who oppose it, the rights of anti-choice people are being violated. This then makes the matter one of competing rights.

In the case of public bathroom facilities, such as student bathrooms at schools, members of the public have the right to use them; that is the nature of public goods. There are, however, reasonable limits placed on access. For example, the public is not allowed to go into a school during normal hours to use the bathrooms. Similarly, the bathrooms in courthouses and government buildings are often not open to anyone who wanders in off the street. So, like all rights, the right to public bathrooms does have limits. It can thus be assumed that transgender people have bathroom rights, as do people who oppose bathroom choice. What is in dispute is whether the right of transgender people to choose their bathroom trumps the right of anti-choice people to not be forced to share bathrooms with transgender people. Or whether the right of anti-choice people to force transgender people to use specific bathrooms overrides the right of transgender people.

Disputes over competing rights are often settled by utilitarian considerations, but the utilitarian argument already favors bathroom choice. As such, another approach is needed, and a reasonable one is the consideration of which right has priority. This approach assumes that there is a hierarchy of rights, and that one right can take precedence over another. Fortunately, this is intuitively appealing. For example, while people have a right to free expression, the right to not be unjustly harmed trumps it. This is a reason why libel and slander are not protected by this right.

So, the bathroom issue comes down to this: does the right of a transgender person to choose their bathroom have priority over the right of an anti-choice person to not encounter transgender people in the bathroom? My inclination is that the right of the transgender person has priority over that of the anti-choice person. To support this, I will use an analogy to race.

Not so long ago, there were separate bathrooms for black and white people. When the end of race segregated bathrooms was proposed, there were dire warnings that terrible things would occur if bathrooms were integrated. Obviously enough, these terrible things did not take place. Whites could have argued that they had a right to not be in the same bathroom as blacks. However, the alleged white right to not be in a bathroom with blacks does not seem to trump the right of blacks to use the bathroom. Likewise, the right of transgender people to choose their bathroom would seem to trump the right of anti-choice people to exclude them.

It can be objected (using a slippery slope approach) that if this argument is taken to its logical conclusion, then bathrooms should be gender neutral. While many will have an emotional reaction to the idea, there is the historical question of why bathrooms were segregated to begin with. While people might think this has always been the case, the first regulation requiring separate facilities for men and women was not passed in the United States until 1887. The ideological rationale behind the separation was the view that women are weaker and needed protection in public spaces. But this separation was not limited to bathrooms, as women had their own reading rooms in public libraries and even their own train cars. But these other gender-segregated areas eventually vanished in most of the United States. We do not, for example, have women-only seating sections on planes. As such, while it might seem odd to accept gender neutral bathrooms, it was once odd to suggest integrated libraries and transportation, which are now accepted without a thought in most of the United States. One can, of course, raise the danger and privacy arguments against allowing gender neutral bathrooms, but attacking people and harassing them are both already crimes, whether they occur in bathrooms or not. Bathrooms can also be designed to allow privacy (enclosing or replacing urinals and making stalls peep-proof). To be fair, it can be argued that a person using the bathroom is more vulnerable to attack and that bathrooms usually only have one exit, making escape more difficult. Given the number of men who seem onboard with “your body, my choice,” women are now even more justified in being wary of men, and hence a case can be made to keep bathrooms separate to keep men out.

But making this argument in good faith would require considering all areas where men are likely to harm women and establishing gender segregation in those areas. This would cover most, if not all, areas in the United States, showing that it is not bathrooms that are the danger but bad men.

 

While it is tempting to focus on a single reason why Harris lost, her defeat was due to the combined effects of various factors ranging from the large to the minuscule. In the run up to the election, Trump supporters posted relentlessly on social media that gas and groceries were cheaper under Trump, and many seem to have voted for Trump to punish Harris and Biden for their economic situation. Harris was also criticized for being locked into the status quo and was perceived as a mere continuation of the unpopular Biden presidency, while Trump was able to convey the impression that he would shake things up and damage the status quo, which proved appealing. Running as a moderate centrist who might make a few minor tweaks was not, it turns out, a winning strategy for Harris. Voters correctly believe they are being harmed by the status quo, but their only viable alternatives were voting for someone already presiding over their suffering or for Trump.

It is certainly reasonable to consider the impact of sexism and racism, subconscious or otherwise. As the polls noted, Trump did very well with men, while Harris did well with women (but not well enough). While young men are not a huge demographic, the Democrats seem to be losing the culture war for them. For example, in the popular culture realms of video games, movies and shows, social media influencers have been successful in advancing the narrative that games, shows and movies are bad because they are “woke” (anti-male and anti-white). Other influencers that target young men also tend to be on the right. These culture war victories no doubt helped Trump, especially since the Democrats do not seem inclined to engage much in trying to win that demographic.

An obvious reason why Harris lost is that she was running against Trump. To use a slightly out of date Halloween metaphor, Trump is like a werewolf in that he seems immune to things that would destroy mere humans. For other politicians, a scandal or engaging in weird behavior would end their career, but Trump has proven largely immune to consequences. In a real sense, nothing Trump did had any negative impact on his support. If Harris had been running against anyone else who did what Trump did or acted like him, she would have almost certainly won by a landslide by simply pointing out the scandals, crimes and weird behavior. But she was running against Trump and thus none of this harmed him. For Democrats, their best hope is that no other Republican has Trump’s magical immunity.

Lastly, there is the rhetorical advantage held by Trump. Trump and his supporters fully embraced lying as a strategy, advancing lies about the 2020 election, about Harris, about immigration and anything else they could think of. The “fire hose of falsehood” is almost impossible to defeat with the truth, especially if the lies are repeated across news media and social media. Trump also had an advantage in his focus on negative emotions, such as anger and fear. As a matter of psychology, people weigh the negative more heavily than the positive and this fuels various powerful fallacies and rhetorical devices. While hope and change can win, fear and anger win more often. This focus on lying and negative emotions provides Republicans with a strong rhetorical advantage which Democrats will be hard pressed to counter.

 

110 Fallacies

Description:

Like the general Appeal to Silence fallacy, the Gish Gallop and Fire Hose of Falsehoods are tactics that involve taking a failure to respond as evidence for a claim.

As a rhetorical tool, the Gish Gallop is an attempt to overwhelm an opponent by presenting many arguments and claims with no concern for their quality or accuracy. The Gish Gallop was named in 1994 by anthropologist Eugenie Scott, who claimed that Duane Gish used this tactic when arguing against evolution.

The Gish Gallop is somewhat like the debating tactic of spreading, which involves making arguments as rapidly as possible in the hopes that the opponent will not be able to respond to all of them. The main distinction is that the Gish Gallop is an inherently bad faith technique that relies on rapidly presenting weak arguments, fallacies, partial truths, Straw Men, and lies in the hopes that the opponent will not be able to refute them all. The Gish Gallop can be seen as a metaphorical cluster bomb of fallacies and untruths.

While this technique lacks logical force, it can have considerable psychological force. The Gish Gallop relies on Brandolini’s Law, which is the idea that it takes more time and effort to refute a fallacy or false claim than it takes to make them. Effective use of a Gish Gallop will yield many unrefuted fallacies and false claims, and this can create the impression in the audience that the Gish Galloper has “won” the debate. The Gish Gallop can be combined with Moving the Goal Posts to create the illusion that at least some of the refutations have been addressed.

Psychologically, the side that seems to have made the most unrefuted arguments and claims might appear to be correct, especially if the Gish Galloper uses the Gish Gallop fallacy, which has the following general form:

 

Premise 1: Person A presented N arguments for claim C.

Premise 2: Person B, the opponent, refuted X of A’s arguments.

Premise 3: N is greater than X.

Conclusion: C is true.

 

This is fallacious reasoning because it is not the number of arguments that proves a claim, but the quality of the arguments. As an illustration, consider this silly example:

 

Premise 1: During a debate, Bob presented 123 arguments that 2+2=6.

Premise 2: Bob’s opponent Sally only refuted 2 of Bob’s arguments before time ran out.

Premise 3: 123 is greater than 2.

Conclusion: Therefore, 2+2=6

 

While the error in reasoning is obvious in such absurd cases, people can easily fall victim to this reasoning in more complicated or controversial cases, especially if the audience does not know the subject well.

One reason why this fallacy might be appealing is that it seems analogous to methods that do work. For example, a swarm of relatively weak ants can overwhelm a strong spider in virtue of their numbers, even though the spider might kill many of them. But argumentation usually does not work like that; weak arguments generally do not add together to overcome a single strong argument. So, the analogy is not a swarm of ants beating a spider, but a spider fighting weak ants one at a time.

Another reason the fallacy might seem appealing is that making claims or arguments that are not refuted could seem analogous to one team not being able to block every shot taken by their opponent. But the Gish Gallop would be best compared to a basketball team rapidly taking wild shots all over the place, not caring whether they are even aimed in the direction of the basket. The opposing team does not need to block those wild shots; they are not going to score any points. In the case of arguments, not refuting a bad argument does not prove that the argument is good. Not refuting a claim does not prove the claim is true. See Burden of Proof for a discussion of this.

While the Gish Gallop technique involves presenting at least some arguments, a related technique is to blast an opponent with a Fire Hose of Falsehoods. In this context, the Fire Hose of Falsehood is a rhetorical technique in which many falsehoods are quickly presented. It can also employ repetition. As a matter of psychological force, the more times a person hears a claim, the more likely they are to believe it. But the number of times a claim is repeated is irrelevant to its truth. This method also often involves using multiple channels to distribute the falsehoods. For example, real users or bots on various social media platforms could be employed to spread the falsehood. This can have considerable psychological force since people are also inclined to believe a claim that (appears to) come from multiple sources. But the mere number of sources making a claim is irrelevant to the truth of that claim.

This technique can be used to achieve various ends, such as serving as a Red Herring to distract people from an issue or, in its classic role, as a propaganda technique. On a small scale, such as in a debate, it can be used to overwhelm an opponent because a person can usually tell a lie much faster than someone else can refute it. This technique can be used with Moving the Goal Post to exhaust an opponent and run out the clock.

It can also be employed as a variant of the Appeal to Silence. As a fallacy, the reasoning is that unless all the falsehoods made by someone are refuted, then their unrefuted falsehoods are true. It has this general form:

 

Premise 1: Person A makes N falsehoods.

Premise 2: Person B, the opponent, refuted X of A’s falsehoods.

Premise 3: N is greater than X.

Conclusion: The unrefuted falsehoods are true.

 

Laid bare like this, the bad logic is evident. Not refuting a falsehood does not make the falsehood true. When someone uses this fallacy, they will attempt to conceal the logical structure of this reasoning. They might, for example, simply say that their opponent has not refuted their claims and so their opponent must agree with them.

While this is a fallacy, it can be effective psychologically. If a person seems confident in their falsehoods and overwhelms their opponent with the sheer number of their lies, they might appear to have “won” the debate.  

 

Defense: To avoid being taken in by the Gish Gallop, the key is remembering that the support premises provide to a conclusion is based on the quality of the argument. The quantity of (unrefuted) arguments for a claim, by itself, does not serve as evidence for that claim. In the case of claims, a failure to refute all the claims made by a person does not prove that the unrefuted claims are true; this applies to both the Gish Gallop and the Fire Hose of Falsehood.

If a Gish Gallop or Fire Hose of Falsehood is being used against you in a debate, you will almost certainly not be able to respond to all the arguments and claims. From a logical standpoint, one good option is to briefly point out your opponent’s technique and why it is defective. If you are arguing for a position, focus on your positive arguments and, if time permits, respond to the most serious objections. If you are arguing against a position, focus on your arguments against that position and, if possible, try to pre-empt the arguments your opponent is likely to use in their Gish Gallop. You can also sometimes group arguments and claims together and refute them in groups. For example, if an opponent uses multiple Straw Men, you can respond to all of these by pointing this out.

 

Example #1

Gus: “So, my opponent is a climate change scientist. That means she hates capitalism, so she is wrong. Also, these so-called climate change scientists say that humans are the only things that affect the climate, that is totally wrong. You remember Al Gore, right? Remember how silly that guy is? Plus, he lost the election! To George Bush! Lots of smart people don’t believe in climate change and how can the climate change if the earth is flat? Remember how they used to call it global warming? Now these scientists say that some places will get cooler! Also, remember that it snowed in Texas. So much for global warming! And we still had winter; it was cold some days. And everyone knows that we had ice ages in the past. But we don’t have an ice age now. So, climate changes without us; so much for the idea that humans are causing it.”

Moderator: “Time. Your turn Dr. Jones. You have two minutes.”

Dr. Jones: “So where to begin…”

Gus, two minutes later: “See, ‘Dr.’ Jones did not refute all my arguments. So, climate change is all a hoax, as I said.”

 

While industrial robots have been in service for a while, household robots have largely been limited to floor cleaning machines like the Roomba. But Physical Intelligence has built a robot that seems capable of doing some household tasks such as folding clothes. While viable commercial products lie in the future, the dangers of household robots should be considered now. I will skip over the usual fear of the robot rebellion in which the machines turn against humans and focus on more likely dangers.

Like a PC or phone, a household robot runs the risk of software errors, glitches and other problems. While having an app crash on your phone or PC can be annoying, this usually does not put you at risk of physical harm. However, a malfunctioning household robot can be a danger. A viable household robot needs to be strong enough to engage in tasks such as cleaning, folding laundry, and moving objects. This entails that the robot will be strong enough to harm humans and pets. If a robot has a software or hardware issue that interferes with its ability to recognize objects and living creatures, it might try to fold a baby’s clothing while the baby is wearing it or mistake a sleeping cat for clothing or trash and put it in the washing machine or garbage can. Even more concerning is a robot designed to prepare food that misidentifies, for example, a human or pet as the meat to be sliced up and cooked for dinner.

Even laying aside such errors, a home can be a complicated place for a robot to operate in, as there will usually be multiple rooms, different types of furniture, different appliances, as well as various people and pets. This means that a household robot could easily become a hazard (or just useless) simply because of an inability to handle such a complicated and changing environment.

To be fair, these challenges can be addressed in various ways. One option is to limit robots to specific tasks and narrow areas of operation. This might require multiple robots in a home, each assigned to a specific area and set of tasks. For example, a knife-wielding kitchen robot might have a fixed location in the kitchen and only be able to slice foods placed within a special box. As another example, a laundry robot might be confined to a laundry room. Another way to reduce risk is through programming and hardware safeguards. For example, pets and humans might wear devices that provide household robots with their exact location so the robots can avoid them. This way the robot would not need to depend on visually distinguishing, for example, a cat from a sweater. While things could still go wrong (the ID tag might fail or fall off your cat’s collar), people are generally willing to accept some risk of injury and death for convenience. After all, any electrical appliance in your home can probably kill you, and driving anywhere comes with the risk of injury or death. In addition to concerns about accidental injuries, there is also the threat of intentionally caused harm.
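Before turning to that threat, here is a minimal sketch of the location-tag safeguard just described. It is purely illustrative: the names, the one-meter radius, and the idea of a simple distance check are my assumptions for the sake of the example, not an actual robot control API.

from dataclasses import dataclass
from math import dist

SAFETY_RADIUS_M = 1.0  # hypothetical rule: keep at least one meter from any tagged person or pet

@dataclass
class TaggedEntity:
    name: str
    position: tuple[float, float]  # (x, y) in meters, as reported by a wearable tag

def safe_to_operate(robot_position: tuple[float, float], tags: list[TaggedEntity]) -> bool:
    """Return False if any tagged person or pet is inside the safety radius."""
    return all(dist(robot_position, tag.position) > SAFETY_RADIUS_M for tag in tags)

# Usage: a robot's control loop would pause its current task whenever the check fails.
tags = [TaggedEntity("cat", (0.4, 0.2)), TaggedEntity("toddler", (5.0, 3.0))]
if not safe_to_operate((0.0, 0.0), tags):
    print("Tagged entity too close: pausing the current task.")

Even this toy version makes the tradeoff visible: the robot is only as safe as the tags are reliable, which is why a failed or lost tag is worth worrying about.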

Household robots will almost certainly have online connections. On the one hand, this has many potential benefits such as being able to check in on your robots and taking manual control if, for example, one gets stuck in a corner. On the other hand, if you can access your robots online, that means that bad actors can do so as well, just as can happen today with any connected device. The critical difference is that a connected robot in your house means that a bad actor can gain a virtual physical presence in your home and use your robot in various ways.

It is certain that some people will take control of other people’s robots just for fun, to play pranks such as having a robot move things around or make a small mess. But compromised robots could be used for a range of misdeeds, such as unlocking doors (although connected smart locks are obviously vulnerable as well), grabbing valuables and tossing them out windows, breaking things, and even attacking people and pets. This threat can be mitigated by good security practices, but the only two sure ways to avoid a compromised robot are to not connect it or to not have one at all.

As with autonomous vehicles, household robots also raise legal concerns about liability. If, for example, your robot injures a guest, there is the question of who has legal responsibility. On the plus side, household robots will be good for some lawyers as this will create a new, profitable subfield of law.

In closing, while the idea of having household robots seems appealing, their presence would create a new set of dangers, especially if they are connected and can be compromised.

 

In utopian science fiction, machines free humans so they can enjoy a life of leisure and enlightenment. In dystopian stories, machines enslave or exterminate humans. Reality has been, on average, a middletopia: a mean between the worst possible world and the best possible world. But a good case can be made that reality is more of a dystopia-lite; a bad world, but better than a full dystopia. While people still dream of utopia, there are those who are working hard to push us further into dystopia.

On a positive note, robots have replaced humans in some jobs that are dirty, dull, or dangerous. In some cases, the displaced humans have moved on to better jobs. In other cases, they have moved into other dirty, dull or dangerous jobs to wait for the machines to replace those as well. Machines have also replaced humans in jobs humans see as desirable, and AI companies are determined to continue that trend, having selected writing and art as prime targets. This leads to questions about what jobs will be left to humans and which will be taken over by the machines.

There was once the intuitively appealing view that “creative” jobs would be safe from machines, but physical labor would be easily taken over by machines. On this view, machines will replace jobs such as those held by warehouse pickers, construction workers and janitors. Artists, philosophers, and teachers were supposed to be safe from the machine revolution. In some cases, the intuitive view was correct. Machines are routinely used for physical labor such as constructing cars and robot Socrates has yet to show up. However, the intuitive view about creative tasks is under attack as AI is used in journalism, law, academics and image creation. There are also tasks that would seem easy to automate, such as cleaning toilets or doing construction, that are very hard for robots, but easy for humans.

An example of a task that would seem ideal for automation is warehouse picking, especially of the sort done by Amazon. Amazon and other companies have automated some of the process, making use of robots for various tasks. But humans are still a critical part of the picking process. Since humans tend to have poor memories and get bored with picking, human pickers are “remote controlled” by computers that tell them what to do, and the pickers then tell the computers what they have done. For example, a human might be directed to pick five boxes of acne medicine, then five more boxes of acne medicine, then a copy of Fifty Shades of Grey and finally an Android phone. Humans are very good at picking and at dealing with things like a broken bottle of shampoo in a box, which robots still handle poorly.

In this sort of warehouse, the humans are being controlled by the machines. The machines take care of the higher-level activities of organizing orders and managing, while the human brain handles the task of selecting the right items and dealing with some tasks the machines cannot handle. While selecting seems simple, this is because it is simple for humans but not for existing robots. We are good at recognizing, grouping and distinguishing things and have the manual dexterity to perform the picking tasks, thanks to our opposable thumbs. Unfortunately for the human worker, these picking tasks are probably not very rewarding, creative or interesting and this is exactly the sort of drudge job that robots are supposed to free us from.

While computer-controlled warehouse work is one example of humans being directed by machines, it is easy to imagine this approach applied to tasks that require manual dexterity and what might be called “animal skills” such as object recognition. It is also easy to imagine this approach extended far beyond these jobs as a cost-cutting measure.

One way this approach could cut costs would be by allowing employers to buy “skilled” AI systems and use them to direct unskilled human labor. For simple jobs, a human might be directed via a headset linked to the AI that tells the human what to do, providing the “intelligence” guiding the body. For more complex jobs, a human might wear a VR style helmet with a machine directing the human via augmented reality. For example, an unskilled human could be walked through electrical or plumbing work by an AI. It should be noted that this technology could also be useful for people doing DIY projects and someday a person might be able to rent skills (via AI) as they now rent tools. But this could also impact the labor market, especially if almost anyone could use the technology effectively.

In this system, humans would provide the manual dexterity and all those highly evolved physical capacities. The AI would provide the direction, skill and “intelligence.” Since any adequately functional human body would suffice to serve the controlling AI, the value of such human labor would be low, and wages would match this value. Workers would be easy to replace because if a worker is fired or quits, then a new worker can simply don the interface device and get about the task with little training. This would also save on education costs, as AI-directed laborers would not need much education in job skills since these are provided by the AI. Humans would just need the basic skills allowing them to be directed properly by the AI. This does point towards a dystopia in which human bodies are driven around through the workday by AI, then released and sent home in driverless cars. One could even imagine this technology being used in education: a human body providing an in-person presence while an AI directs the teaching process.

The employment of humans in these roles would only continue if humans were the cheapest form of available labor. If advances allow robot bodies to do these tasks more cheaply, then it would make business sense to replace humans completely. Alternatively, biological engineering might lead to the production of cost-effective engineered life forms that can replace humans; perhaps a pliable primate that is just smart enough to be directed by the AI but not human enough to be considered a slave. Or, to go deeper into dystopia, perhaps a cyborg will be built that has hardware in place of the higher parts of the brain and thus serves as a meat robot driven around the job by the AI, using the evolved biological features that cannot be replicated cost-effectively by machinery. While such things remain science fiction, now is the time to start considering the laws and policies that should govern remote controlled humans in the workplace.

 

Students and employers often complain that college does not prepare students for the real world of work, and this complaint has some merit. But what is the real world of jobs like for most workers? Professor David Graeber got considerable media attention when he published his book Bullshit Jobs: A Theory. He claims that millions of people are working jobs they know are meaningless and unnecessary. Researcher Simon Walo decided to test Graeber’s theory and found that his investigation supported Graeber’s view. While Graeber’s view can be debated, it is reasonable to believe that some jobs are BS all the time and all jobs are BS some of the time. Thus, if educators are to prepare students for working in the real world, they must prepare them for the BS of the workplace. AI can prove useful here.

In an optimistic sci-fi view of the future, AI exists to relieve humans of the dreadful four Ds of bad jobs: the Dangerous, the Degrading, the Dirty, and the Dull. In a bright future, general AI would assist, but not replace, humans in creative and scientific endeavors. In dystopian sci-fi views of the AI future, AI enslaves or exterminates humanity. In dystopia lite, a few humans use AI to make life worse for many humans, such as by replacing humans with AI in good and rewarding jobs.  Much of the effort in AI development seems aimed at making this a reality.

As an example, it is feared that AI will put writers and artists out of work, so when the Hollywood writers went on strike, they wanted protection from being replaced by AI. They succeeded in this goal, but there remains a reasonable question about how great a threat AI poses in terms of being able to replace humans in jobs humans want to do. Fortunately for humans doing creative and meaningful work, AI is not very good at these tasks. As Arvind Narayanan and Sayash Kapoor have argued, AI of this sort seems to be most useful at doing useless things. But this can be useful for workers, and educators should train students to use AI to do these useless things. This might seem a bit crazy but makes perfect sense in our economic reality.

Some jobs are useless, and all jobs have useless tasks. Although his view can be challenged, Graeber came up with three categories of useless jobs. His “flunkies” category consists of people paid to make the rich and important look richer and more important. This can be expanded to include all decorative minions. “Goons” are people filling positions that exist only because a competitor company created similar jobs. Finally, there are the “box tickers,” a category that can be refined to cover jobs that workers see as useless and that produce work whose absence would have no meaningful effect on the world.

It must be noted that what is perceived as useless is a matter of values and will vary between persons and in different contexts. To use a silly example, imagine the Florida state legislature mandated that all state universities send in a monthly report in the form of a haiku. Each month, someone will need to create and email the haiku. This task seems useless. But imagine that if a school fails to comply, they lose $1 million in funding. This makes the task useful for the school as a means of protecting their funding. Fortunately, AI can easily complete this useless useful task.
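As a concrete illustration of how easily such a task can be handed off, here is a minimal sketch that asks a language model for the monthly haiku. It assumes the OpenAI Python client; the model name, prompt, and sample highlights are illustrative placeholders rather than an actual reporting workflow.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def monthly_haiku_report(school: str, highlights: str) -> str:
    """Ask a language model to compress the month's highlights into a haiku."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Write a haiku summarizing this month at {school}: {highlights}",
        }],
    )
    return response.choices[0].message.content

print(monthly_haiku_report("Florida A&M University", "general education assessment scores rose"))

The same pattern, feeding in the raw material and asking for the required format, covers the report example in the next paragraph.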

As a serious example, suppose a worker must write reports for management based on bullet points given in presentations. Management, of course, never reads the reports; they are useless but required by company policy. While a seemingly rational solution would be to eliminate the reports, that is not how bureaucracies usually operate in the “real world.” Fortunately, AI can make the worker’s task easier: they can use AI to transform the bullet points into a report and use the saved time for more meaningful tasks (or viewing social media). Management can also use AI to summarize the report back into bullet points. But what should educators do with AI in their classrooms in the context of useless tasks and jobs?

While this will need to vary from class to class, relevant educators should consider a general overview of jobs and task categories in terms of usefulness and the ability of AI to do these jobs and tasks.  Faculty could then identify the likely useless jobs and useless tasks their students will probably do in the real world. They can then consider how these tasks can be done using AI. This will allow them to create lessons and assignments to give students the skills to use AI to complete useless tasks quickly and with minimal effort. This can allow workers to spend more time on useful work, assuming their jobs have any such tasks.

In closing, my focus has been on using AI for useless tasks. Teaching students to use AI for useful tasks is another subject entirely and while not covered here is certainly worthy of consideration. And here is an AI generated haiku:

 

Eighty percent rise

FAMU students excel

In their learning’s light

 

One of the many fears about AI is that it will be weaponized by political candidates. In a proactive move, some states have already created laws regulating its use. Michigan has a law aimed at the deceptive use of AI that requires a disclaimer when a political ad is “manipulated by technical means and depicts speech or conduct that did not occur.” My adopted state of Florida has a similar law requiring a disclaimer on political ads that use generative AI. While the effect of disclaimers on elections remains to be seen, a study by New York University’s Center on Technology Policy found that research subjects saw candidates who used such disclaimers as “less trustworthy and less appealing.”

The subjects watched fictional political ads, some of which had AI disclaimers, and then rated the fictional candidates on trustworthiness, truthfulness and how likely they were to vote for them. The study showed that the disclaimers had a small but statistically significant negative impact on the perception of these fictional candidates. This occurred whether the AI use was deceptive or harmless. The study subjects also expressed a preference for disclaimers anytime AI was used in an ad, even when the use was harmless, and this held across party lines. As attack ads are a common strategy, it is interesting that the study found that such ads with an AI disclaimer backfired: the study subjects evaluated the target as more trustworthy and appealing than the attacker.

If the study results hold for real ads, these findings might serve to deter the use of AI in political ads, especially attack ads. But it is worth noting that the study did not involve ads featuring actual candidates. Out in the wild, voters tend to be tolerant of lies or even like them when the lies support their political beliefs. If the disclaimer is seen as stating or implying that the ad contains untruths, it is likely that the negative impact of the disclaimer would be less or even nonexistent for certain candidates or messages. This is something that will need to be assessed in the wild.

The findings also suggest a diabolical strategy in which an attack ad with the AI disclaimer is created to target the candidate the creators support. These supporters would need to take care to conceal their connection to the candidate, but this is easy in the current dark money reality of American politics. They would, of course, need to calculate the risk that the ad might work better as an attack ad than a backfire ad. Speaking of diabolical, it might be wondered why there are disclaimer laws rather than bans.

The Florida law requires a disclaimer when AI is used to “depict a real person performing an action that did not actually occur, and was created with the intent to injure a candidate or to deceive regarding a ballot issue.” A possible example of such use occurred in a 2023 ad by DeSantis’s campaign falsely depicting Trump embracing Fauci. It is noteworthy that the wording of the law entails that the intentional use of AI to harm and deceive in political advertising is allowed but merely requires a disclaimer. That is, an ad is allowed to lie, but with a disclaimer. This might strike many as odd, but it follows established law.

As Tom Wheeler, the former head of the FCC under Obama, notes, lies are allowed in political ads on federally regulated broadcast channels. As one would suspect, the arguments used to defend allowing lies in political ads are based on the First Amendment. This “right to lie” provides some explanation as to why these laws do not ban the use of AI. It might be wondered why there is not a more general law requiring a disclaimer for all intentional deceptions in political ads. A practical reason is that it is currently much easier to prove the use of AI than it is to prove intentional deception in general. That said, the Florida law specifies both intent and the use of AI to depict something that did not occur, and proving both presents a challenge, especially since people can legally lie in their ads and insist the depiction is of something real.

Cable TV channels, such as CNN, can reject ads. In some cases, stations can reject ads from non-candidate outside groups, such as super PACs. Social media companies, such as X and Facebook, have considerable freedom in what they can reject. Those defending this right of rejection point out the oft-forgotten fact that the First Amendment applies to the actions of the government and not to private businesses, such as CNN and Facebook. Broadcast TV, as noted above, is an exception to this. The companies that run political ads will need to develop their own AI policies while also following the relevant laws.

While some might think that a complete ban on AI would be best, the AI hype has made this a bad idea. This is because companies have rushed to include AI in as many products as possible and to rebrand existing technologies as AI. For example, the text of an ad might be written in Microsoft Word with Grammarly installed, and Grammarly pitches itself as providing AI writing assistance. Programs like Adobe Illustrator and Photoshop also have AI features with innocuous uses, such as automating the process of improving the quality of a real image or creating a background pattern that might be used in a print ad. It would obviously be absurd to require a disclaimer for such uses of AI.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

When ChatGPT and its competitors became available to students, some warned of an AI apocalypse in education. This fear mirrored the broader worries about the over-hyped dangers of AI. This is not to deny that AI presents challenges and dangers, but we need to have a realistic view of the threats and promises so that rational policies and practices can be implemented.

As a professor and the chair of the General Education Assessment Committee at Florida A&M University, I assess the work of my students, and I am involved with the broader task of assessing general education. In both cases a key challenge is determining how much of the work turned in by students is their own work. After all, we want to know how our students are performing and not how AI or some unknown writer is performing.

While students have been cheating since the advent of education, it was feared AI would cause a cheating tsunami. This worry seemed sensible since AI makes cheating easy, free and harder to detect.  Large language models allow “plagiarism on demand” by generating new text each time. With the development of software such as Turnitin, detecting traditional plagiarism became automated and fast. These tools also identify the sources used in plagiarism, providing professors with reliable evidence. But large language models defeat this method of detection, since they generate original text. Ironically, some faculty now see a 0% plagiarism score on Turnitin as a possible red flag. But has an AI cheating tsunami washed over education?

Determining how many students are cheating is like determining how many people are committing crimes: one only knows how many people have been caught, not how many people are doing it. Because of this, caution must be exercised when drawing a conclusion about the extent of cheating; otherwise, one runs the risk of falling victim to the fallacy of overconfident inference from unknown statistics.

In the case of AI cheating in education, one source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 assignments while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate.

Turnitin claims it has a false positive rate of 1%. In addition to Turnitin, there are other AI detection services that have been evaluated, with the worst having an accuracy of 38% and the best claiming an accuracy of 90%. But there are two major problems with the accuracy of existing AI detection software.
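To get a rough sense of what even the claimed 1% false positive rate would mean at scale, consider a back-of-the-envelope calculation using the 200 million assignments mentioned above. This assumes, purely for illustration, that the rate applies uniformly and that the overwhelming majority of those assignments were human-written:

0.01 × 200,000,000 = 2,000,000

On those assumptions, on the order of two million human-written assignments could be wrongly flagged, which is one reason a flag by itself is weak evidence against any individual student.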

The first is that, as the title of a recent paper notes, “GPT detectors are biased against non-native English writers.” As the authors noted, while AI detectors are nearly perfectly accurate in evaluating essays by U.S.-born eighth-graders, they misclassified 61.22% of TOEFL essays written by non-native English students. All seven of the tested detectors incorrectly flagged 18 of the 91 TOEFL essays, and 89 of the 91 essays (97%) were flagged by at least one detector.

The second is that AI detectors can be fooled. The current detectors usually work by evaluating perplexity, a metric that reflects how predictable a text is and tracks factors such as lexical diversity and grammatical complexity. AI-generated text tends to have low perplexity, but higher perplexity can be created in AI text by using simple prompt engineering. For example, a student could prompt ChatGPT to rewrite the text using more literary language. There is also a concern that the algorithms used in proprietary detection software will be kept secret, so it will be difficult to determine what biases and defects they might have.
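For readers who want the metric itself, perplexity has a standard definition in terms of the probabilities a language model assigns to each token. The formula below is that textbook definition, offered as an illustration rather than the proprietary scoring of any particular detector:

\mathrm{PPL}(x_1,\ldots,x_N) = \exp\!\Big(-\tfrac{1}{N}\sum_{i=1}^{N}\log p\big(x_i \mid x_1,\ldots,x_{i-1}\big)\Big)

Predictable, formulaic text yields low perplexity, which is why detectors treat low scores as a sign of AI generation and why rewriting in a more surprising style pushes the score up.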

Because of these problems, educators should be cautious when using such software to evaluate student work. This is especially true in cases in which a student is assigned a failing grade or even accused of academic misconduct because they are suspected of using AI. In the case of traditional cheating, a professor could have clear evidence in the form of copied text. In the case of AI detection, the professor only has the evaluation of software whose inner workings are most likely not available for examination and whose true accuracy remains unknown. Because of this, educational institutions need to develop rational guidelines for best practices when using AI detection software. But the question remains as to how likely it is that students will engage in cheating now that ChatGPT and its ilk are readily available.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While there is the concern that cheaters would lie about cheating, Pope and Lee use anonymous surveys and take care in designing the survey questions. While cheating remains a problem, AI has not increased it, and the feared tsunami seems to have died far offshore.

This does make sense in that cheating has always been relatively easy, and the decision to cheat is more a matter of moral and practical judgment than of available technology. While technology can provide new means of cheating, a student must still be willing to cheat, and that percentage seems to be relatively stable in the face of changing technology. That said, large language models are a new technology, and their long-term impact on cheating is something that remains to be determined. But, so far, the doomsayers’ predictions have not come true. Fairness requires acknowledging that this might be because educators took effective action to prevent this; it would be poor reasoning to fall for the prediction fallacy.

As a final point of discussion, it is worth considering that  perhaps AI has not resulted in a surge in cheating because it is not a great tool for this. As Arvind Narayanan and Sayash Kapoor have argued, AI seems to be most useful at doing useless things. To be fair, assignments in higher education can be useless things of the type AI is good at doing. But if AI is being used to complete useless assignments, then this is a problem with the assignments (and the professors) and not AI.

In closing, there is also the concern that AI will get better at cheating or that, as students grow up with AI, they will be more inclined to use it to cheat. And, of course, it is worth considering whether such use should be considered cheating or if it is time to retire some types of assignments and change our approach to education as, for example, we did when calculators were accepted.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)