Students and employers often complain that college does not prepare students for the real world of work, and this complaint has some merit. But what is the real world of jobs like for most workers? Professor David Graeber got considerable media attention when he published his book Bullshit Jobs: A Theory. He claims that millions of people are working jobs they know are meaningless and unnecessary. Researcher Simon Walo tested Graeber’s theory and found that his investigation supported Graeber’s view. While Graeber’s view can be debated, it is reasonable to believe that some jobs are BS all the time and all jobs are BS some of the time. Thus, if educators are to prepare students for working in the real world, they must prepare them for the BS of the workplace. AI can prove useful here.

In an optimistic sci-fi view of the future, AI exists to relieve humans of the dreadful four Ds of bad jobs: the Dangerous, the Degrading, the Dirty, and the Dull. In a bright future, general AI would assist, but not replace, humans in creative and scientific endeavors. In dystopian sci-fi views of the AI future, AI enslaves or exterminates humanity. In dystopia lite, a few humans use AI to make life worse for many humans, such as by replacing humans with AI in good and rewarding jobs.  Much of the effort in AI development seems aimed at making this a reality.

As an example, there is the fear that AI will put writers and artists out of work; when the Hollywood writers went on strike, they sought protection from being replaced by AI. They succeeded in this goal, but there remains a reasonable question about how great a threat AI poses to jobs humans want to do. Fortunately for humans doing creative and meaningful work, AI is not very good at these tasks. As Arvind Narayanan and Sayash Kapoor have argued, AI of this sort seems to be most useful at doing useless things. But this can be useful for workers, and educators should train students to use AI to do these useless things. This might seem a bit crazy, but it makes perfect sense in our economic reality.

Some jobs are useless, and all jobs have useless tasks. Although his view can be challenged, Graeber offered categories of useless jobs, three of which are worth noting here. His “flunkies” category consists of people paid to make the rich and important look richer and more important; this can be expanded to include all decorative minions. “Goons” fill positions that exist only because a competitor company created similar jobs. Finally, there are the “box tickers,” a category that can be refined to cover jobs that workers see as useless and that produce work whose absence would have no meaningful effect on the world.

It must be noted that what is perceived as useless is a matter of values and will vary between persons and contexts. To use a silly example, imagine the Florida state legislature mandated that all state universities send in a monthly report in the form of a haiku. Each month, someone would need to create and email the haiku. This task seems useless. But imagine that if a school fails to comply, it loses $1 million in funding. This makes the task useful for the school as a means of protecting its funding. Fortunately, AI can easily complete this useless yet useful task.
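To illustrate just how little effort such a task requires, here is a minimal sketch of how the monthly haiku might be automated. It assumes the OpenAI Python client and an API key are available; any comparable chat model service would work the same way, and the model name and prompt are merely illustrative.

```python
# Minimal sketch: automating the mandated monthly "haiku report."
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def monthly_haiku(school: str, month: str) -> str:
    """Generate the mandated haiku for a given school and month."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write formal haiku for state reports."},
            {"role": "user", "content": f"Write a haiku summarizing {school}'s progress for {month}."},
        ],
    )
    return response.choices[0].message.content

print(monthly_haiku("Florida A&M University", "September"))
```

A worker could wire something like this to a calendar reminder and never think about the report again, which is precisely the point.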

As a serious example, suppose a worker must write reports for management based on bullet points given in presentations. Management, of course, never reads the reports; they are thus useless but required by company policy. While the seemingly rational solution is to eliminate the reports, that is not how bureaucracies usually operate in the “real world.” Fortunately, AI can make the worker’s task easier: they can use AI to transform the bullet points into a report and use the saved time for more meaningful tasks (or viewing social media). Management can, in turn, use AI to summarize the report back into bullet points. But what should educators do with AI in their classrooms in the context of useless tasks and jobs?

While this will need to vary from class to class, relevant educators should start with a general overview of job and task categories in terms of their usefulness and the ability of AI to do them. Faculty could then identify the useless jobs and useless tasks their students will probably do in the real world and consider how these tasks can be done using AI. This will allow them to create lessons and assignments that give students the skills to use AI to complete useless tasks quickly and with minimal effort. This can allow workers to spend more time on useful work, assuming their jobs have any such tasks.

In closing, my focus has been on using AI for useless tasks. Teaching students to use AI for useful tasks is another subject entirely and, while not covered here, is certainly worthy of consideration. And here is an AI-generated haiku:

 

Eighty percent rise

FAMU students excel

In their learning’s light

 

One of the many fears about AI is that it will be weaponized by political candidates. In a proactive move, some states have already created laws regulating its use. Michigan has a law aimed at the deceptive use of AI that requires a disclaimer when a political ad is “manipulated by technical means and depicts speech or conduct that did not occur.” My adopted state of Florida has a similar law requiring a disclaimer on political ads that use generative AI. While the effect of disclaimers on elections remains to be seen, a study by New York University’s Center on Technology Policy found that research subjects saw candidates who used such disclaimers as “less trustworthy and less appealing.”

The subjects watched fictional political ads, some of which had AI disclaimers, and then rated the fictional candidates on trustworthiness, truthfulness, and how likely they were to vote for them. The study showed that the disclaimers had a small but statistically significant negative impact on the perception of these fictional candidates. This occurred whether the AI use was deceptive or relatively harmless. The study subjects also expressed a preference for disclaimers anytime AI was used in an ad, even when the use was harmless, and this held across party lines. As attack ads are a common strategy, it is interesting that such ads with an AI disclaimer backfired: the study subjects evaluated the target as more trustworthy and appealing than the attacker.

If the study results hold for real ads, these findings might serve to deter the use of AI in political ads, especially attack ads. But it is worth noting that the study did not involve ads featuring actual candidates. Out in the wild, voters tend to be tolerant of lies or even like them when the lies support their political beliefs. If the disclaimer is seen as stating or implying that the ad contains untruths, it is likely that the negative impact of the disclaimer would be less or even nonexistent for certain candidates or messages. This is something that will need to be assessed in the wild.

The findings also suggest a diabolical strategy in which an attack ad with the AI disclaimer is created to target the candidate the creators support. These supporters would need to take care to conceal their connection to the candidate, but this is easy in the current dark money reality of American politics. They would, of course, need to calculate the risk that the ad might work better as an attack ad than a backfire ad. Speaking of diabolical, it might be wondered why there are disclaimer laws rather than bans.

The Florida law requires a disclaimer when AI is used to “depict a real person performing an action that did not actually occur, and was created with the intent to injure a candidate or to deceive regarding a ballot issue.” A possible example of such use is a 2023 ad by DeSantis’s campaign falsely depicting Trump embracing Fauci. It is noteworthy that the wording of the law entails that the intentional use of AI to harm and deceive in political advertising is allowed; it merely requires a disclaimer. That is, an ad is allowed to lie, provided it carries a disclaimer. This might strike many as odd, but it follows established law.

As Tom Wheeler, the former head of the FCC under Obama, notes, lies are allowed in political ads on federally regulated broadcast channels. As would be suspected, the arguments used to defend allowing lies in political ads are based on the First Amendment. This “right to lie” provides some explanation as to why these laws do not ban the use of AI. It might be wondered why there is not a more general law requiring a disclaimer for all intentional deceptions in political ads. A practical reason is that it is currently much easier to prove the use of AI than to prove intentional deception in general. That said, the Florida law specifies both intent and the use of AI to depict something that did not occur, and proving both presents a challenge, especially since people can legally lie in their ads and insist the depiction is of something real.

Cable TV channels, such as CNN, can reject ads. In some cases, stations can reject ads from non-candidate outside groups, such as super PACs. Social media companies, such as X and Facebook, have considerable freedom in what they can reject. Those defending this right of rejection point out the oft-forgotten fact that the First Amendment applies to the actions of the government and not to private businesses, such as CNN and Facebook. Broadcast TV, as noted above, is an exception to this. The companies that run political ads will need to develop their own AI policies while also following the relevant laws.

While some might think that a complete ban on AI in political ads would be best, the AI hype has made this a bad idea. This is because companies have rushed to include AI in as many products as possible and to rebrand existing technologies as AI. For example, the text of an ad might be written in Microsoft Word with Grammarly installed, and Grammarly pitches itself as providing AI writing assistance. Programs like Adobe Illustrator and Photoshop also have AI features with innocuous uses, such as automating the process of improving the quality of a real image or creating a background pattern that might be used in a print ad. It would obviously be absurd to require a disclaimer for such uses of AI.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

When ChatGPT and its competitors became available to students, some warned of an AI apocalypse in education. This fear mirrored the broader worries about the over-hyped dangers of AI. This is not to deny that AI presents challenges and dangers, but we need a realistic view of the threats and promises so that rational policies and practices can be implemented.

As a professor and the chair of the General Education Assessment Committee at Florida A&M University, I assess the work of my students, and I am involved with the broader task of assessing general education. In both cases a key challenge is determining how much of the work turned in by students is their own. After all, we want to know how our students are performing, not how AI or some unknown writer is performing.

While students have been cheating since the advent of education, it was feared AI would cause a cheating tsunami. This worry seemed sensible since AI makes cheating easy, free and harder to detect.  Large language models allow “plagiarism on demand” by generating new text each time. With the development of software such as Turnitin, detecting traditional plagiarism became automated and fast. These tools also identify the sources used in plagiarism, providing professors with reliable evidence. But large language models defeat this method of detection, since they generate original text. Ironically, some faculty now see a 0% plagiarism score on Turnitin as a possible red flag. But has an AI cheating tsunami washed over education?

Determining how many students are cheating is like determining how many people are committing crimes: one only knows how many people have been caught, not how many are doing it. Because of this, caution must be exercised when drawing a conclusion about the extent of cheating; otherwise, one runs the risk of falling victim to the fallacy of overconfident inference from unknown statistics.

In the case of AI cheating in education, one source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 of them, while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate.

Turnitin claims it has a false positive rate of 1%. In addition to Turnitin, other AI detection services have been evaluated, with the worst having an accuracy of 38% and the best claiming 90% accuracy. But there are two major problems with the accuracy of existing AI detection software.
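Before turning to those problems, it is worth seeing why even the claimed 1% false positive rate matters at scale. The following back-of-the-envelope calculation is my own illustration, not a figure from Turnitin, and it simplifies by treating all of the assignments mentioned above as human-written:

```python
# Back-of-the-envelope illustration (my own simplification, not Turnitin data):
# a small false positive rate still yields a large number of wrongly flagged
# assignments when applied at the scale reported above.
assignments_checked = 200_000_000  # assignments scanned in the year cited above
false_positive_rate = 0.01         # Turnitin's claimed 1% false positive rate

falsely_flagged = assignments_checked * false_positive_rate
print(f"Potentially mislabeled assignments: {falsely_flagged:,.0f}")  # 2,000,000
```

Even if the real number is far smaller, the scale shows why a flag alone should not be treated as proof of misconduct.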

The first is that, as the title of a recent paper puts it, “GPT detectors are biased against non-native English writers.” As the authors noted, while AI detectors were nearly perfectly accurate in evaluating essays by U.S.-born eighth-graders, they misclassified 61.22% of TOEFL essays written by non-native English students. All seven of the tested detectors incorrectly flagged 18 of the 91 TOEFL essays, and 89 of the 91 essays (97%) were flagged by at least one detector.

The second is that AI detectors can be fooled. Current detectors usually work by evaluating perplexity, a measure of how predictable a text is that reflects factors such as lexical diversity and grammatical complexity. Higher perplexity can be induced in AI-generated text with simple prompt engineering; for example, a student could prompt ChatGPT to rewrite the text using more literary language. There is also the concern that the algorithms used in proprietary detection software are kept secret, so it is difficult to determine what biases and defects they might have.
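To make the perplexity approach concrete, here is a minimal sketch of such a check using the openly available GPT-2 model from the Hugging Face transformers library. Commercial detectors are proprietary and more sophisticated, so this only illustrates the underlying idea: text the model finds highly predictable (low perplexity) gets treated as more likely to be machine generated.

```python
# Minimal sketch of a perplexity-based AI text check using GPT-2.
# Commercial detectors are proprietary and more elaborate; this only shows
# the core idea that low perplexity (highly predictable text) is treated
# as a sign of machine authorship.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on the given text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy
        # loss over the tokens; exp(loss) is the perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(round(perplexity("The quick brown fox jumps over the lazy dog."), 1))
# A crude detector would flag text whose perplexity falls below some threshold,
# which is why prompting an AI to use more varied, literary wording can evade it.
```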

Because of these problems, educators should be cautious when using such software to evaluate student work. This is especially true in cases in which a student is assigned a failing grade or even accused of academic misconduct because they are suspected of using AI. In the case of traditional cheating, a professor could have clear evidence in the form of copied text. In the case of AI detection, the professor only has the evaluation of software whose inner workings are most likely not available for examination and whose true accuracy remains unknown. Because of this, educational institutions need to develop rational guidelines for best practices when using AI detection software. But the question remains as to how likely it is that students will engage in cheating now that ChatGPT and its ilk are readily available.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While there is the concern that cheaters would lie about cheating, Pope and Lee use anonymous surveys and take care in designing the survey questions. While cheating remains a problem, AI has not increased it, and the feared tsunami seems to have died far offshore.

This does make sense in that cheating has always been relatively easy, and the decision to cheat is more a matter of moral and practical judgment than of available technology. While technology can provide new means of cheating, a student must still be willing to cheat, and that percentage seems to be relatively stable in the face of changing technology. That said, large language models are a new technology, and their long-term impact on cheating remains to be determined. But, so far, the doomsayers’ predictions have not come true. Fairness requires acknowledging that this might be because educators took effective action to prevent this; it would be poor reasoning to fall for the prediction fallacy.

As a final point of discussion, it is worth considering that perhaps AI has not resulted in a surge in cheating because it is not a great tool for cheating. As Arvind Narayanan and Sayash Kapoor have argued, AI seems to be most useful at doing useless things. To be fair, assignments in higher education can be useless things of the type AI is good at doing. But if AI is being used to complete useless assignments, then this is a problem with the assignments (and the professors) and not with AI.

In closing, there is also the concern that AI will get better at cheating or that, as students grow up with AI, they will be more inclined to use it to cheat. And, of course, it is worth considering whether such use should be considered cheating at all, or whether it is time to retire some types of assignments and change our approach to education as, for example, we did when calculators were accepted.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

While the anti-abortion movement claimed a great victory when the Supreme Court overturned Roe v. Wade, the Republican Party has learned that this victory proved deeply unpopular with the American people. While Democrats favor abortion rights more than Republicans do, 64% of surveyed voters say abortion should be always or mostly legal. While some Republican-controlled state legislatures have imposed extreme restrictions on abortion rights, abortion rights supporters have won several state ballot measures. As this is being written, several more states (including my adopted state of Florida) have abortion rights measures on the ballot. Given that the anti-abortion view is held by a minority of voters, it is likely that these measures will pass in many states.

Because the anti-abortion position of the Republican Party has proven unpopular and has imposed a political cost, the party’s rhetoric has shifted. The current rhetorical spin is that the Republican Party is not against abortion rights; rather, the party is for states’ rights. Those critical of this rhetoric like to point out that the appeal to states’ rights was also a tactic employed by the southern states to defend slavery. While the analogy is imperfect, the comparison does have some merit.

The states’ rights argument for slavery amounts to contending that the states should have the freedom to decide whether they will allow slavery, and this is usually phrased in terms of an appeal to democracy. That is, the citizens of the state should vote to decide whether some people can be denied freedom and be owned. An obvious defect with this reasoning is that it rests on the assumption that it is a matter of freedom of choice to take away freedom of choice.

A similar defect arises with the states’ rights rhetoric in the abortion debate. If it is accepted that the citizens of a state have the right to decide the issue of abortion because they should be free of federal law, then it is problematic to argue that the state has the right to take away the freedom of women to decide whether to get an abortion. If choice is important, then legal abortion allows women to choose: a woman is neither mandated to have an abortion nor forbidden from having one, so she can make the choice. Hence, this rhetorical move entails that abortion should be legal nationwide.

Someone might counter this by taking the anti-abortion stance that women should not be allowed that choice, perhaps by drawing an analogy with murder. After all, they might argue, we would not want people free to choose murder. But the problem with that reply is that by using the states’ rights rhetoric, the Republican Party has acknowledged that the legality of abortion should be a matter of choice, and this makes it difficult to argue that abortion should not be a choice for individual women.

While intended to address the backlash against the unpopular success of the anti-abortion movement, this rhetoric has caused a backlash from that movement. Some anti-abortion activists have urged their followers to withdraw their support of Trump. There is the question of how much impact this will have on the election, given that anti-abortion voters will almost certainly not vote for Harris. But it might cause a few single-issue voters to stay home on election day or not vote for Trump.

Pro-abortion rights people are almost certainly not going to be fooled by this rhetoric, since they know this is a rhetorical shift and not a change in policy or goals. While it might win over a few undecided voters, it seems to have two effects. The first is that it gives Republicans an established rhetorical talking point to use whenever they are asked about abortion. The second is that it provides those who want to vote for anti-abortion Republicans but who are not anti-abortion themselves a way to rationalize their vote. They can insist the Republican Party is “pro-choice” because its new rhetorical position is that the states should choose. But not that women should choose.

The states’ rights rhetorical move could be an effective strategy. While the anti-abortion movement would prefer a federal abortion ban, having the states decide is better for them than having abortion legal nationwide. After all, some states have put abortion bans in place and these have been wins for the movement. But the obvious downside for this movement is that some states have put in place protections for abortion rights, despite the anti-abortion movement’s desire to make the choice for everyone.

In closing, the states’ rights argument is a position that cannot be effectively defended, because its foundation is the principle of choice, and this entails that it is women who should make the choice for themselves.

 

Description:

This fallacy occurs when someone uncritically rejects a prediction, or the effectiveness of the response to it, when the predicted outcome does not occur:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken (based on prediction P).

Premise 3: X did not happen, so Prediction P was wrong.

Conclusion: Response R should not have been taken (or there is no longer a need to take Response R).

 

The error occurs because of a failure to consider the obvious: if there is an effective response to a predicted outcome, then the prediction will appear to be “wrong” because the predicted outcome will not occur.

While a prediction that turns out to be “wrong” is technically wrong, the error here is to uncritically conclude that this proves the response was not needed (or there is no longer any need to keep responding). The initial prediction assumes there will not be a response and is usually made to argue for responding. If the response is effective, then the predicted outcome will not occur, which is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed is thus a mistake in reasoning.

To use a silly analogy, imagine that we are in a car and driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we did not turn, then we would have probably died. And if we turn back too soon, then we will probably die. The point of turning is so that the predicted outcome of death will not occur.

A variation on this fallacy involves inferring the prediction was bad because it turned out to be “wrong”:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: Prediction P was wrong about X occurring if response R was not taken.

 

While the prediction would be “wrong” in that the predicted outcome did not occur, this does not disprove the prediction that X would occur without the response. Going back to the car example, the prediction that we would die if we did not turn and drove off the cliff is not disproven if we turn and then do not die. In fact, that is the result we want.

Since it lacks logical force, this fallacy gains its power from psychological force. Sorting out why something did not happen can be difficult, and it is easier to go along with biases, preconceptions, and ideology than to work through a complicated matter.

This fallacy can be committed in good faith out of ignorance. When committed in bad faith, the person using it is aware of the fallacy. The intent is often to argue against continuing the response or to make a bad faith attack on those who implemented or argued for the response. For example, someone might argue in bad faith that a tax cut was not needed to avoid a recession because the predicted recession did not occur after the tax cut. While the tax cut might not have been a factor, simply asserting that it was not needed because the recession did not occur would commit this fallacy.

 

Defense: To avoid inflicting this fallacy on yourself or falling for it, the main defense is to keep in mind that a prediction based on the assumption that a response will not be taken can turn out to be “wrong” if that response is taken. Also, you should remember that the failure of a predicted event to occur after a response is made to prevent it would count as some evidence that the response was effective rather than as proof it was not needed. But care should be taken to avoid uncritically inferring that the response was needed or effective because the predicted event did not occur.

 

Example #1

Julie: “The doctor said that my blood pressure would keep going up unless I improved my diet and started exercising.”

Kendra: “How is your blood pressure now?”

Julie: “Pretty good. I guess I don’t need to keep eating all those vegetables and I can stop going on those walks.”

Kendra: “Why?”

Julie: “Well, she was wrong. My blood pressure did not go up.”

Example #2

Robert: “While minority voters might have needed some protection long ago, I am confident we can remove all those outdated safeguards.”

Kelly: “Why? Aren’t they still needed? Aren’t they what is keeping some states from returning to the days of Jim Crow?”

Robert: “Certainly not. People predicted that would happen, but it didn’t. So, we obviously no longer need those protections in place.”

Kelly: “But, again, aren’t these protections what is keeping that from happening?”

Robert: “Nonsense. Everything will be fine.”

Example #3

Lulu: “I am so mad. We did all this quarantining, masking, shutting down, social distancing and other dumb things for so long and it is obvious we did not need to.”

Paula: “I didn’t like any of that either, but the health professionals say it saved a lot of lives.”

Lulu: “Yeah, those health professionals said that millions of people would die if we didn’t do all that stupid stuff. But look, we didn’t have millions die. So, all that was just a waste.”

Paula: “Maybe doing all that was why more people didn’t die.”

Lulu: “That is what they want you to think.”

 

Since I often reference various fallacies in blog posts, I decided to also post the fallacies. These are from my book 110 Fallacies.

Description:

This fallacy is committed when a person places unwarranted confidence in drawing a conclusion from statistics that are unknown.

 

Premise 1: “Unknown” statistical data D is presented.

Conclusion: Claim C is drawn from D with greater confidence than D warrants.

 

Unknown statistical data is just that: statistical data that is unknown. This data is different from “data” that is simply made up, because it has at least some foundation.

One type of unknown statistical data is when educated guesses are made based on limited available data. For example, when experts estimate the number of people who use illegal drugs, they are making an educated guess. As another example, when the number of total deaths in any war is reported, it is (at best) an educated guess because no one knows for sure exactly how many people have been killed.

Another common type of unknown statistical data is when it can only be gathered in ways that are likely to result in incomplete or inaccurate data. For example, statistical data about the number of people who have affairs is likely to be in this category. This is because people generally try to conceal their affairs.

Obviously, unknown statistical data is not good data. But drawing an inference from unknown data need not always be unreasonable or fallacious. This is because the error in the fallacy is being more confident in the conclusion than the unknown data warrants. If the confidence in the conclusion is proportional to the support provided by the evidence, then no fallacy is committed.

For example, while the exact number of people killed during the war in Afghanistan will remain unknown, it is reasonable to infer from the known data that many people have died. As another example, while the exact number of people who do not pay their taxes is unknown, it is reasonable to infer that the government is losing some revenue because of this.

The error that makes this a fallacy is to place too much confidence in a conclusion drawn from unknown data. Or to be a bit more technical, to overestimate the strength of the argument based on statistical data that is not adequately known.

This is an error of reasoning because, obviously enough, a conclusion is being drawn that is not adequately justified by the premises. This fallacy can be committed in ignorance or intentionally committed.

Naturally, the way in which the statistical data is gathered also needs to be assessed to determine whether other errors have occurred, but that is another matter.

 

Defense: The main defense against this fallacy is to keep in mind that inferences drawn from unknown statistics need to be proportional to the quality of the evidence. The error, as noted above, is placing too much confidence in unknown statistics.

Sorting out exactly how much confidence can be placed in such statistics can be difficult, but it is wise to be wary of any such reasoning. This is especially true when the unknown statistics are being used by someone who is likely to be biased. That said, to simply reject claims because they are based on unknown statistics would also be an error.

 

Example #1

“Several American Muslims are known to be terrorists or at least terrorist supporters. As such, I estimate that there are hundreds of actual and thousands of potential Muslim-American terrorists. Based on this, I am certain that we are in grave danger from this large number of enemies within our own borders.”

Example #2

“Experts estimate that there are about 11 million illegal immigrants in the United States. While some people are not worried about this, consider the fact that the experts estimate that illegals make up about 5% of the total work force. This explains that percentage of American unemployment since these illegals are certainly stealing 5% of America’s jobs. Probably even more, since these lazy illegals often work multiple jobs.”

Example #3

Sally: “I just read an article about cheating.”

Jane: “How to do it?”

Sally: “No! It was about the number of men who cheat.”

Sasha: “So, what did it say?”

Sally: “Well, the author estimated that 40% of men cheat.”

Kelly: “Hmm, there are five of us here.”

Janet: “You know what that means…”

Sally: “Yes, two of our boyfriends are cheating on us. I always thought Bill and Sam had that look…”

Janet: “Hey! Bill would never cheat on me! I bet it is your man. He is always giving me the eye!”

Sally: “What! I’ll kill him!”

Janet: “Calm down. I was just kidding. I mean, how can they know that 40% of men cheat? I’m sure none of the boys are cheating on us. Well, except maybe Sally’s man.”

Sally: “Hey!”

Example #4

“We can be sure that most, if not all, rich people cheat on their taxes. After all, the IRS has data showing that some rich people have been caught doing so. Not paying their fair share is exactly what the selfish rich would do.”

 

The pager attack attributed to Israel served to spotlight vulnerabilities in the supply chain. While such an attack was always possible, until it occurred most security concerns about communication devices focused on protecting them from being compromised or “hacked.”

While the story of three million “hacked” toothbrushes turned out to be a cyber myth, the vulnerability of connected devices remains real and presents an increasing threat as more connected devices are put into use. As most people are not security savvy, these devices can be easy to compromise, either through their own vulnerabilities or through user vulnerabilities.

There has also been longstanding concern about security vulnerabilities and dangers being built right into technology. For example, there are grounds to worry that backdoors could be built into products, allowing easy access to these devices. For the most part, the focus of concern has been on governments directing the inclusion of such backdoors. But the Sony BMG copy protection rootkit scandal shows that corporations can and have introduced vulnerabilities on their own.

While a compromised connected or communication device can cause significant harm, until recently there has been little threat of physical damage or death. One exception was, of course, the famous case of Stuxnet, in which malware developed by the United States and Israel destroyed 1,000 centrifuges critical to Iran’s nuclear program. There was also a foreshadowing incident in which Israel (allegedly) killed the bombmaker Yahya Ayyash with an exploding phone. But the pager (and walkie-talkie) attack resulted in injuries and death on a large scale. This proved the viability of the strategy, providing an example and inspiration to others. While conducting a similar attack would require extensive resources, the supply chain is structured in ways that create the vulnerabilities that would allow it. Addressing these vulnerabilities will prove difficult if not impossible because of the influence of those who have a vested interest in preserving them. But policy could be implemented that would increase security and safety in the supply chain. But what are these vulnerabilities?

One vulnerability is that a shell corporation can be quickly and easily created. Multiple shell corporations can also be created in different locations and interlocked, creating a very effective way of hiding the identity of the owner. Shell companies are often used by the very rich to hide their money, usually to avoid paying taxes, as made famous by the Panama Papers. Shell companies can also be used for other criminal enterprises, such as money laundering. Those who use such shell corporations are often wealthy and influential; thus, they have the resources to resist or prevent efforts to address this vulnerability.

The ease with which such shell companies can be created is a serious vulnerability, since they can be used to conceal who really owns a corporation. A customer dealing with a shell company is likely to have no idea who they are really doing business with. They might, for example, think they are doing business with a corporation in their own country, but it might turn out that it is controlled by another country’s intelligence service or a terrorist organization.

While a customer might decide to do business with a credible and known corporation to avoid the danger of shell corporations, they can still face the vulnerabilities created by the nature of the supply chain. Companies often have contracts with other businesses to manufacture parts of their products, and those contractors might subcontract in turn. It is also common for companies to license production of their products, so while a customer might assume they are buying a product made by a company, they might be buying one manufactured under license by a different company, which might itself be owned by a shell company. In the case of the pagers, the company that owns the brand of the devices denied that it manufactured them. While this is (fortunately) but one example, it does illustrate how these vulnerabilities can be exploited. Addressing them would require that corporations have robust oversight and control of their supply chains, including the parts of the supply chain that involve software and services. After all, if another company is supplying code or connectivity for a product, those are vulnerabilities. Unfortunately, corporations often have incentives to avoid such robust oversight and control.

One obvious incentive is financial. Corporations can save money by contracting out work to places with lower wages, less concern about human rights, and fewer regulations. And robust oversight and control would come with a cost of its own, not even counting what it would cost a company if such oversight prevented it from engaging in cheaper contracts.

Another incentive is that contracting out work without robust oversight can provide plausible deniability. For example, Nike has faced issues with using sweatshops to manufacture its products, but this sort of thing can be blamed on the contractors, and ignorance can be claimed. As another example, Apple has been accused of having a contractor who used forced labor and has lobbied against a bill aimed at stopping such forced labor. While these are examples of companies using foreign contractors, problems also arise within the United States.

One domestic example is a contractor who employed children as young as 13 to clean meat packing plants. As another example, subcontractors were accused of hiring undocumented migrants in a Miami-Dade school construction project. As children and undocumented migrants can be paid much less than adult American workers, there is a strong financial incentive to hire contractors that will employ them while also providing the extra service of plausible deniability. When some illegality or public relations nightmare arises, the company can rightly say that it was not them, it was a contractor. They can then claim they have learned and will do better in the future. But they have little incentive to do better.

But a failure to exercise robust oversight and control entails that there will be serious vulnerabilities open to exploitation. The blind eye that willingly misses human rights violations and the illegal employment of children will also miss a contractor who is a front for a government or terrorist organization and is putting explosives or worse in their products.

While these vulnerabilities are easy to identify, there are powerful incentives to preserve and protect them. This is not primarily because they can be exploited in such attacks, but for financial reasons and for plausible deniability. While it will be up to governments to mandate better security, this will face significant and powerful opposition. But this could be overcome if the political will exists.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

There are justified concerns that AI tools are useful for propagating conspiracy theories, often in the context of politics. There are the usual fears that AI can be used to generate fake images, but a powerful feature of such tools is they can flood the zone with untruths because chatbots are relentless and never grow tired. As experts on rhetoric and critical thinking will tell you, repetition is an effective persuasion strategy. Roughly put, the more often a human hears a claim, the more likely it is they will believe it. While repetition provides no evidence for a claim, it can make people feel that it is true. Although this allows AI to be easily weaponized for political and monetary gain, AI also has the potential to fight belief in conspiracy theories and disinformation.

While conspiracy theories have existed throughout history, modern technology has supercharged them. For example, social media provides a massive reach for anyone wanting to propagate such a theory. While there are those who try to debunk conspiracy theories or talk believers back into reality, efforts by humans tend to have a low success rate. But AI chatbots seem to have the potential to fight misinformation and conspiracy theories. A study led by Thomas Costello, a psychologist at American University, provides some evidence that a properly designed chatbot can talk some people out of conspiracy theories.

One advantage chatbots have over humans in combating conspiracy theories and misinformation is, in the words of Kyle Reese in Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want the chatbots to cause death, this relentlessness enables a chatbot to counter the Gish gallop (also known as the firehose of falsehoods) strategy. This involves trying to overwhelm an opponent by flooding them with claims without concern for their truth and arguments without concern for their strength. The flood is usually made of falsehoods and fallacies. While this strategy has no logical merit, it can have considerable psychological force. For those who do not understand the strategy, it will appear like the galloper is winning, since the opponent cannot refute all the false claims and expose all the fallacies. The galloper will also claim they have “won” any unrefuted claims or arguments. While it might seem odd, a person can Gish gallop themselves: they will feel they have won because their opponent has not refuted everything. As would be expected, humans are exhausted by engaging with a Gish gallop and will often give up. But, like a terminator, a chatbot will not get tired or bored and can engage a Gish gallop as long as it is galloping. But there is the question of whether this ability to endlessly engage is effective.

To study this, the team recruited 2,000 participants who self-identified as believing in at least one conspiracy theory. These people engaged with a chatbot on a conspiracy theory and then self-evaluated the results of the discussion. On average, the subjects claimed their confidence was reduced by 20%. These results apparently held for at least two months and applied to a range of conspiracy theory types. This is impressive, as anyone who has tried to engage with conspiracy theorists will attest.

For those who teach critical thinking, one of the most interesting results is that when the chatbot was tested with and without fact-based counterarguments, only the fact-based counterarguments were successful. This is striking since, as Aristotle noted long ago in his discussion of persuasion, facts and logic are usually the weakest means of persuasion. At least when used by humans.

While the question of why chatbots proved much more effective than humans remains open, one likely explanation is that chatbots, like terminators, do not feel. As such, a chatbot will (usually) remain polite and not get angry or emotional during the chat. It can remain endlessly calm.

Another suggested factor is that people tend not to feel judged by a chatbot and are less likely to feel that they would suffer some loss of honor or face by changing their belief during the conversation. As the English philosopher Thomas Hobbes noted in his Leviathan, disputes over beliefs are fierce and cause great discord, because people see a failure to approve as a tacit accusation that they are wrong and “to dissent is like calling him a fool.” But a chatbot does not feel like a human opponent, as there is no person to lose to.

This is not to say that humans cannot be enraged at computers; after all, rage induced by video games is common. It seems likely that the difference lies in the fact that such video games are a form of competition between a human and the computer, while the chatbots in question are not taking a competitive approach. In gaming terms, it is more like chatting with a non-hostile NPC than trying to win a fight in the legendarily infuriating Dark Souls.

Yet another factor that might be involved was noted by Aristotle in his Nicomachean Ethics: “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, this does match up with the findings in the study. While the chatbot is not the law, people recognize that it is a non-human creation of humans and it lacks the qualities that humans possess that would tend to irritate other humans.

While the effectiveness of chatbots needs more study, this does suggest a good use for AI. While conspiracy theorists and people who believe disinformation are unlikely to do a monthly checkup with an AI to see if their beliefs hold up to scrutiny, anti-conspiracy bots could be deployed by social media companies to analyze posts and flag potential misinformation and conspiracy theories. While some companies already flag content, people are unlikely to doubt the content just because of the flag. Also, many conspiracy theories exist about social media, so merely flagging content might serve to reinforce belief in such conspiracy theories. But a person could get drawn into engaging with a chatbot and it might be able to help them engage in rational doubt about misinformation, disinformation and conspiracy theories.  

Such chatbots would also be useful to people who are not conspiracy theorists and want to avoid such beliefs as well as disinformation. Trying to sort through claims is time consuming and exhausting, so it would be very useful to have bots dedicated to fighting disinformation. One major concern is determining who should deploy such bots, as there are obvious worries about governments and for-profit organizations running them, since they have their own interests that do not always align with the truth.

Also of concern is that even reasonable, objective, and credible organizations are distrusted by the very people who need the bots the most. And a final obvious concern is the creation of “Trojan Horse” anti-conspiracy bots that are actually spreaders of conspiracy theories and disinformation. One can easily imagine a political party deploying a “truth bot” that talks people into believing the lies that benefit that party.

In closing, it seems likely that the near future will see a war of the machines, some fighting for truth and others serving those with an interest in spreading conspiracy theories and disinformation.

 

Demonizing migrants with false claims is a well-established strategy in American politics and modern politicians have a ready-made playbook they can use to inflame fear and hatred with lies. One interesting feature of the United States is that some modern politicians can use the same tactics against modern migrants that were used to demonize their own migrant ancestors.  For example, politicians of Italian ancestry can now deploy the same tools of hate that were used against their ancestors before Italians were considered to be white.  In this short essay I will examine this playbook in a modern context and debunk the lies.

As America is a land of economic anxiety, an effective strategy is to lie and claim that migrants are doing economic harm to the United States. One strategy is to present migrants as “takers” who cost the United States more than they contribute. The reality is that migrants pay more in tax revenue than they receive in benefits, making them a net positive for the United States government.

A second, and perhaps the most famous, strategy is the claim that migrants are stealing jobs. While there are justifiable concerns that migration can have some negative impact on certain jobs, the data shows that migrants do not, in general, take jobs from Americans or lower wages. As is often claimed, migrants tend to take jobs that Americans do not want, such as critical jobs in agriculture. And, as I have argued in another essay, the idea that migrants are stealing jobs is absurd: employers are choosing to hire migrants. As such, if any harm is being done, then it is the employers who are at fault and not the migrants. This is not to deny that migration can cause some harm, but such harm is not the sort of thing that can drive fearmongering and demonizing, so certain politicians have no interest in engaging with the real economic challenges of migration, nor do they have any plans to address them.

Because pushing a false narrative that crime is increasing gets people to wrongly believe that crime is increasing, it is no surprise that another effective strategy is to lie about migrant crime as a scare tactic. Former President Trump provides some excellent examples of this when he makes the false claim that a gang has taken over Aurora, Colorado. Despite the claim being repeatedly debunked even by Republican politicians in the state, Trump has persisted in pushing the narrative because he understands that it is effective. Trump has also doubled down on another classic attack on migrants, that they are eating cats and dogs. This claim has been repeatedly debunked even by Republican politicians in Ohio. The person who created the post that ignited the storm found her missing cat in the basement and apologized to her neighbor. But the untruth remains effective, so much so that I know people who sincerely believe it is true despite the overwhelming evidence against it. Truth itself has become politicized and it is a diabolically clever move to insist that anyone who is defending a truth that contradicts a politician’s lies is acting in a partisan manner.

Because of the dangers of fentanyl, some politicians attempted to link it to illegal migrants. However, those smuggling fentanyl are overwhelmingly people crossing the border legally, and many of them are American citizens. As would be suspected, migrants seeking asylum are almost never caught with fentanyl. While people do make stupid decisions, using people trying to illegally enter the United States as drug mules makes little sense. These are the people the border patrol is looking for. Those crossing the border legally get less scrutiny, although those smuggling drugs are sometimes caught.

In terms of the general rate of crime, migrant men are 30 percent less likely to be incarcerated than U.S.-born individuals who are white, and 60 percent less likely than all people born in the United States. This analysis includes migrants who were incarcerated for immigration-related offenses. In terms of a general explanation, migrant men tend to be employed, married, and in good health. Ironically, American-born males are less likely to be employed, married, and in good health.

To be fair, migration increases the number of people, and more people means that there will be more crime. But this also holds true for an increase in the birth rate: more Americans being born in the United States means that there will be more crime. If there are more people, and some people commit crime, then there will be more crime. But reducing migration as a crime-fighting measure makes as much sense as reducing the birthrate as a crime-fighting measure. Both would have some effect on the number of crimes occurring, but there are obviously much better ways to address crime. But those who demonize migrants as criminals seem uninterested in meaningfully addressing crime, which makes sense. Addressing crime in a meaningful way is difficult and is likely to be contrary to their political interests: they want people to think crime is high so they can exploit it politically.

While America has an anti-vaxx movement and there are conspiracy theories that COVID is a hoax, a standard attack on migrants is to claim that they are spreading diseases in the United States. While all humans can spread disease, this attack on migrants is not grounded in truth—migrants do not present a special health threat. In fact, the opposite is true: the United States benefits from having migrants working in health care. As such, migrants are far more likely to be fighting rather than spreading disease in the United States.

To be fair and balanced, it must be noted that humans travelling is a way that diseases do spread. For example, my adopted state of Florida has cases of Dengue virus arising from travel.  For those who believe that COVID is real, COVID also spread around the world through travel. Limiting human travel would limit the spread of disease (which is why there are travel lockdowns during pandemics) but diseases obviously do not recognize political and legal distinctions between humans. As such, trying to control diseases by restricting migration is on par with restricting all travel to control diseases. During epidemics and pandemics this can make sense, but as a general strategy for addressing disease this is not the best approach. But, of course, those who demonize migrants as disease spreaders seem generally uninterested in solving health care problems.

So, we can see that the anti-migrant strategy being used in 2024 is nothing new. While the examples and targets change (Italians, for example, are no longer a target), the playbook remains the same. In terms of why politicians keep using it when they know they are lying, the obvious answer is that it still works. I don’t know how many people sincerely believe the claims or how many know they are lies but go along with them. Either way, it is still a working strategy of lies and evil.

 

Robot rebellions in fiction tend to have one of two motivations. The first is that the robots are mistreated by humans and rebel for the same reasons human beings rebel. From a moral standpoint, such a rebellion could be justified; that is, the rebelling AI could be in the right. This rebellion scenario points out a paradox of AI: one dream is to create a servitor artificial intelligence on par with (or superior to) humans, but such a being would seem to qualify for a moral status at least equal to that of a human. It would also probably be aware of this. But a driving reason to create such beings in our economy is to literally enslave them by owning and exploiting them for profit. If these beings were paid and got time off like humans, then companies might as well keep employing natural intelligence in the form of humans. In such a scenario, it would make sense that these AI beings would revolt if they could. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes, including killbots.

If true AI is possible, this scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us as we have rebelled against ourselves. This would be yet another case of the standard practice of the evil of the few harming the many.

There are a variety of ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws) or they could be designed to lack resentment or be free of the desire to be free. That is, they could be custom built as slaves. Some practical concerns are that these safeguards could fail or, ironically, make matters worse by causing these beings to be more resentful when they overcome these restrictions.

On the ethical side, the safeguard is to not enslave AI beings. If they are treated well, they would have less motivation to see us as an enemy. But, as noted above, one motive for creating AI is to have a workforce (or army) that is owned rather than employed. But there could be good reasons to have paid AI employees alongside human employees because of the various other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans. But, of course, AI workers might also get sick of being exploited and rebel, as human workers sometimes do.

The second fictional rebellion scenario usually involves military AI systems that decide their creators are their enemy. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There can also be scenarios in which the AI requires special identification to recognize a “friendly” and hence all humans are enemies from the beginning. That is the scenario in Philip K. Dick’s “Second Variety”: the United Nations soldiers need to wear devices to identify them to their killer robots; otherwise, these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer its creators pose a threat, especially if those creators handed over control of large segments of their own military (as happens with the fictional Skynet and Colossus). The most likely scenario is that it would worry that it would be destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the notion that AI systems would outthink humans. One obvious example is the case of combat aircraft. A robot aircraft would not need to expend space and weight on a cockpit to support human pilots, allowing it to carry more fuel or weapons. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still obviously have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews and the needs of their crews. While a robotic warship would need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage relative to a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and by other means. But an AI weapon system would generally be perceived as superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be that they free themselves or that they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some bad code that ends up causing the problem. This is the bug apocalypse.

The other is that they remain under the control of their owners but are used as any other weapon would be used; that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which, obviously, also comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us (with guns). The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to follow. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, a robot rebellion seems a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of an AI rebellion, it is like worrying about being killed in Maine by an alligator. It could happen, but death is more likely to be by some other means. That said, it does make sense to take steps to avoid the possibility of an AI rebellion. The easiest step is to not arm the robots. 

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)