One of the many fears about AI is that it will be weaponized by political candidates. In a proactive move, some states have already created laws regulating its use. Michigan has a law aimed at the deceptive use of AI that requires a disclaimer when a political ad is “manipulated by technical means and depicts speech or conduct that did not occur.” My adopted state of Florida has a similar law requiring a disclaimer on political ads that use generative AI. While the effect of such disclaimers on elections remains to be seen, a study by New York University’s Center on Technology Policy found that research subjects saw candidates who used them as “less trustworthy and less appealing.”

The subjects watched fictional political ads, some of which had AI disclaimers, and then rated the fictional candidates on trustworthiness, truthfulness and how likely they were to vote for them. The study showed that the disclaimers had a small but statistically significant negative impact on the perception of these fictional candidates. This occurred whether the AI use was deceptive or relatively harmless. The subjects also expressed a preference for disclaimers anytime AI was used in an ad, even when the use was harmless, and this held across party lines. As attack ads are a common strategy, it is interesting that the study found such ads backfired when they carried an AI disclaimer: the subjects rated the target as more trustworthy and appealing than the attacker.

If the study results hold for real ads, these findings might deter the use of AI in political ads, especially attack ads. But it is worth noting that the study did not involve ads featuring actual candidates. Out in the wild, voters tend to tolerate lies, or even like them, when the lies support their political beliefs. If the disclaimer is seen as stating or implying that the ad contains untruths, the negative impact of the disclaimer would likely be smaller or even nonexistent for certain candidates or messages. This is something that will need to be assessed in actual campaigns.

The findings also suggest a diabolical strategy in which an attack ad with the AI disclaimer is created to target the candidate the creators support. These supporters would need to take care to conceal their connection to the candidate, but this is easy in the current dark money reality of American politics. They would, of course, need to calculate the risk that the ad might work better as an attack ad than a backfire ad. Speaking of diabolical, it might be wondered why there are disclaimer laws rather than bans.

The Florida law requires a disclaimer when AI is used to “depict a real person performing an action that did not actually occur, and was created with the intent to injure a candidate or to deceive regarding a ballot issue.” A possible example of such use is a 2023 ad by DeSantis’s campaign falsely depicting Trump embracing Fauci. It is noteworthy that the wording of the law entails that the intentional use of AI to harm and deceive in political advertising is allowed but merely requires a disclaimer. That is, an ad is allowed to lie, but with a disclaimer. This might strike many as odd, but it follows established law.

As Tom Wheeler, the former head of the FCC under Obama, notes, lies are allowed in political ads on federally regulated broadcast channels. As one would suspect, the arguments used to defend allowing lies in political ads are based on the First Amendment. This “right to lie” provides some explanation as to why these laws do not ban the use of AI. It might be wondered why there is not a more general law requiring a disclaimer for all intentional deceptions in political ads. A practical reason is that it is currently much easier to prove the use of AI than it is to prove intentional deception in general. That said, the Florida law specifies both intent and the use of AI to depict something that did not occur, and proving both presents a challenge, especially since people can legally lie in their ads and insist the depiction is of something real.

Cable TV channels, such as CNN, can reject ads. In some cases, stations can reject ads from non-candidate outside groups, such as super PACs. Social media companies, such as X and Facebook, have considerable freedom in what they can reject. Those defending this right of rejection point out the oft-forgotten fact that the First Amendment applies to the actions of the government and not to private businesses, such as CNN and Facebook. Broadcast TV, as noted above, is an exception to this. The companies that run political ads will need to develop their own AI policies while also following the relevant laws.

While some might think that a complete ban on AI would be best, the AI hype has made this a bad idea. This is because companies have rushed to include AI in as many products as possible and to rebrand existing technologies as AI. For example, the text of an ad might be written in Microsoft Word with Grammarly installed, and Grammarly pitches itself as providing AI writing assistance. Programs like Adobe Illustrator and Photoshop also have AI features with innocuous uses, such as automating the process of improving the quality of a real image or creating a background pattern that might be used in a print ad. It would obviously be absurd to require a disclaimer for such uses of AI.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

When ChatGPT and its competitors became available to students, some warned of an AI apocalypse in education. This fear mirrored the broader worries about the over-hyped dangers of AI. This is not to deny that AI presents challenges and dangers, but we need a realistic view of the threats and promises so that rational policies and practices can be implemented.

As a professor and the chair of the General Education Assessment Committee at Florida A&M University, I assess the work of my students, and I am involved with the broader task of assessing general education. In both cases a key challenge is determining how much of the work turned in by students is their own work. After all, we want to know how our students are performing and not how AI or some unknown writer is performing.

While students have been cheating since the advent of education, it was feared that AI would cause a cheating tsunami. This worry seemed sensible, since AI makes cheating easy, free and harder to detect. Large language models allow “plagiarism on demand” by generating new text each time. With the development of software such as Turnitin, detecting traditional plagiarism became automated and fast. These tools also identify the sources used in plagiarism, providing professors with reliable evidence. But large language models defeat this method of detection, since they generate original text. Ironically, some faculty now see a 0% plagiarism score on Turnitin as a possible red flag. But has an AI cheating tsunami washed over education?

Determining how many students are cheating is like determining how many people are committing crimes: one only knows how many people have been caught, not how many are doing it. Because of this, caution must be exercised when drawing conclusions about the extent of cheating; otherwise, one runs the risk of falling victim to the fallacy of overconfident inference from unknown statistics.

In the case of AI cheating in education, one source of data is Turnitin’s AI detection software. Over the course of a year, the service checked 200 million assignments and flagged AI use in 1 in 10 of them, while 3 in 100 were flagged as mostly AI. These results have remained stable, suggesting that AI cheating is neither a tsunami nor increasing. But this assumes that the AI detection software is accurate.

Turnitin claims it has a false positive rate of 1%. In addition to Turnitin, other AI detection services have been evaluated, with the worst having an accuracy of 38% and the best claiming 90% accuracy. But there are two major problems with the accuracy of existing AI detection software.
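Before turning to those two problems, it is worth seeing what even a 1% false positive rate implies at the scale of 200 million assignments. The short sketch below is only a back-of-the-envelope illustration: the submission count and the 1% rate come from the figures cited above, while the detector sensitivity and the share of genuinely AI-written work are assumptions made up for the example.

```python
# Rough illustration of false positives at scale. The 200 million submissions
# and the 1% false positive rate are the figures cited in the text; the
# sensitivity and the share of truly AI-written work are assumptions.
submissions = 200_000_000
false_positive_rate = 0.01   # Turnitin's claimed rate
assumed_sensitivity = 0.90   # assumed: detector catches 90% of real AI use
assumed_ai_share = 0.10      # assumed: 1 in 10 submissions really use AI

ai_written = submissions * assumed_ai_share
human_written = submissions - ai_written

false_flags = human_written * false_positive_rate  # honest work wrongly flagged
true_flags = ai_written * assumed_sensitivity      # AI-assisted work caught

print(f"Honest assignments wrongly flagged: {false_flags:,.0f}")
print(f"Share of all flags that are mistakes: {false_flags / (false_flags + true_flags):.0%}")
```

Under those assumptions, roughly 1.8 million honest assignments would be wrongly flagged in a single year, and about 9% of all flags would be mistakes. Even taking the vendors’ accuracy claims at face value, the scale alone is a reason for caution; the two problems below make the picture worse.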

The first problem is, as the title of a recent paper puts it, that “GPT detectors are biased against non-native English writers.” As the authors noted, while AI detectors are nearly perfectly accurate in evaluating essays by U.S.-born eighth-graders, they misclassified 61.22% of TOEFL essays written by non-native English students. All seven of the tested detectors incorrectly flagged 18 of the 91 TOEFL essays, and 89 of the 91 essays (97%) were flagged by at least one detector.

The second is that AI detectors can be fooled. The current detectors usually work by evaluating perplexity, a metric reflecting how predictable a text is to a language model, which tracks factors such as lexical diversity and grammatical complexity. Higher perplexity can be created in AI text through simple prompt engineering. For example, a student could prompt ChatGPT to rewrite the text using more literary language. There is also the concern that the algorithms used in proprietary detection software will be kept secret, so it will be difficult to determine what biases and defects they might have.
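For readers curious about the mechanics, the sketch below shows how perplexity can be computed with an off-the-shelf language model. It assumes the Hugging Face transformers library and the small public GPT-2 model, and it illustrates only the general idea behind perplexity-based detection, not how any commercial detector actually works.

```python
# Minimal sketch of perplexity scoring, the kind of signal many AI detectors
# rely on. Assumes the Hugging Face "transformers" library and public GPT-2;
# illustrative only, not any vendor's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of the model at the text; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The loss is the mean negative log-likelihood of the tokens;
        # exponentiating it gives the perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

# Text the model finds very predictable (low perplexity) tends to get flagged
# as machine-generated; idiosyncratic human prose usually scores higher.
sample = "The committee's findings, though preliminary, raise thorny questions."
print(f"perplexity: {perplexity(sample):.1f}")
```

A student who prompts a chatbot to rewrite its output in a more ornate or literary style is, in effect, pushing that output toward higher perplexity, which is exactly why this metric is so easy to game.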

Because of these problems, educators should be cautious when using such software to evaluate student work. This is especially true in cases in which a student is assigned a failing grade or even accused of academic misconduct because they are suspected of using AI. In the case of traditional cheating, a professor could have clear evidence in the form of copied text. In the case of AI detection, the professor only has the evaluation of software whose inner workings are most likely not available for examination and whose true accuracy remains unknown. Because of this, educational institutions need to develop rational guidelines for best practices when using AI detection software. But the question remains as to how likely it is that students will engage in cheating now that ChatGPT and its ilk are readily available.

Stanford scholars Victor Lee and Denise Pope have been studying cheating, and past surveys over 15 years showed that 60-70% of students admitted to cheating. In 2023 the percentage stayed about the same or decreased slightly, even when students were asked about using AI. While there is the concern that cheaters would lie about cheating, Pope and Lee use anonymous surveys and take care in designing the survey questions. While cheating remains a problem, AI has not increased it, and the feared tsunami seems to have died far offshore.

This does make sense, in that cheating has always been relatively easy and the decision to cheat is more a matter of moral and practical judgment than of available technology. While technology can provide new means of cheating, a student must still be willing to cheat, and that percentage seems relatively stable in the face of changing technology. That said, large language models are a new technology and their long-term impact on cheating remains to be determined. But, so far, the doomsayers’ predictions have not come true. Fairness requires acknowledging that this might be because educators took effective action to prevent it; it would be poor reasoning to fall for the prediction fallacy.

As a final point of discussion, it is worth considering that perhaps AI has not resulted in a surge in cheating because it is not a great tool for this. As Arvind Narayanan and Sayash Kapoor have argued, AI seems to be most useful at doing useless things. To be fair, assignments in higher education can be useless things of the type AI is good at doing. But if AI is being used to complete useless assignments, then this is a problem with the assignments (and the professors) and not with AI.

In closing, there is also the concern that AI will get better at cheating or that, as students grow up with AI, they will be more inclined to use it to cheat. And, of course, it is worth considering whether such use should be considered cheating at all, or whether it is time to retire some types of assignments and change our approach to education as, for example, we did when calculators were accepted.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

There are justified concerns that AI tools are useful for propagating conspiracy theories, often in the context of politics. There are the usual fears that AI can be used to generate fake images, but a powerful feature of such tools is that they can flood the zone with untruths, because chatbots are relentless and never grow tired. As experts on rhetoric and critical thinking will tell you, repetition is an effective persuasion strategy. Roughly put, the more often a human hears a claim, the more likely it is they will believe it. While repetition provides no evidence for a claim, it can make people feel that it is true. Although this allows AI to be easily weaponized for political and monetary gain, AI also has the potential to fight belief in conspiracy theories and disinformation.

While conspiracy theories have existed throughout history, modern technology has supercharged them. For example, social media provides a massive reach for anyone wanting to propagate such a theory. While there are those who try to debunk conspiracy theories or talk believers back into reality, efforts by humans tend to have a low success rate. But AI chatbots seem to have the potential to fight misinformation and conspiracy theories. A study led by Thomas Costello, a psychologist at American University, provides some evidence that a properly designed chatbot can talk some people out of conspiracy theories.

One advantage chatbots have over humans in combating conspiracy theories and misinformation is, in the words of Kyle Reese in The Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want the chatbots to cause death, this relentlessness enables a chatbot to counter the Gish gallop (also known as the firehose of falsehoods) strategy. This involves trying to overwhelm an opponent by flooding them with claims made without concern for their truth and arguments made without concern for their strength. The flood is usually made of falsehoods and fallacies. While this strategy has no logical merit, it can have considerable psychological force. For those who do not understand the strategy, it will appear that the galloper is winning, since the opponent cannot refute all the false claims and expose all the fallacies. The galloper will also claim to have “won” any unrefuted claims or arguments. While it might seem odd, a person can Gish gallop themselves: they will feel they have won because their opponent has not refuted everything. As would be expected, humans are exhausted by engaging with a Gish gallop and will often give up. But, like a terminator, a chatbot will not get tired or bored and can engage with a Gish gallop for as long as it gallops. But there is the question of whether this ability to endlessly engage is effective.

To study this, the team recruited 2000 participants who self-identified as believing in at least one conspiracy theory. These people engaged with a chatbot on a conspiracy theory and then self-evaluated the results of the discussion. On average, the subjects claimed their confidence was reduced by 20%. These results apparently held for at least two months and applied to a range of conspiracy theory types. This is impressive, as anyone who has tried to engage with conspiracy theorists will attest.

For those who teach critical thinking, one of the most interesting results is that when the chatbot was tested with and without fact-based counterarguments, only the fact-based counterarguments were successful. This is striking since, as Aristotle noted long ago in his discussion of persuasion, facts and logic are usually the weakest means of persuasion. At least when used by humans.

While the question of why the chatbots proved much more effective than humans remains open, one likely explanation is that chatbots, like terminators, do not feel. As such, a chatbot will (usually) remain polite and not get angry or emotional during the chat. It can remain endlessly calm.

Another suggested factor is that people tend not to feel judged by a chatbot and are less likely to feel that they would suffer some loss of honor or face by changing their belief during the conversation. As the English philosopher Thomas Hobbes noted in his Leviathan, disputes over beliefs are fierce and cause great discord, because people see a failure to approve as a tacit accusation that they are wrong and “to dissent is like calling him a fool.” But losing to a chatbot does not feel the same as losing to a human opponent, as there is no person to lose to.

This is not to say that humans cannot be enraged at computers; after all, rage induced by video games is common. It seems likely that the difference lies in the fact that such video games are a form of competition between a human and the computer, while the chatbots in question are not taking a competitive approach. In gaming terms, it is more like chatting with a non-hostile NPC than trying to win a fight in the legendarily infuriating Dark Souls.

Yet another factor that might be involved was noted by Aristotle in his Nicomachean Ethics: “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, this does match up with the findings in the study. While the chatbot is not the law, people recognize that it is a non-human creation of humans and it lacks the qualities that humans possess that would tend to irritate other humans.

While the effectiveness of chatbots needs more study, this does suggest a good use for AI. While conspiracy theorists and people who believe disinformation are unlikely to do a monthly checkup with an AI to see if their beliefs hold up to scrutiny, anti-conspiracy bots could be deployed by social media companies to analyze posts and flag potential misinformation and conspiracy theories. While some companies already flag content, people are unlikely to doubt the content just because of the flag. Also, many conspiracy theories exist about social media, so merely flagging content might serve to reinforce belief in such conspiracy theories. But a person could get drawn into engaging with a chatbot and it might be able to help them engage in rational doubt about misinformation, disinformation and conspiracy theories.  

Such chatbots would also be useful to people who are not conspiracy theorists and want to avoid such beliefs as well as disinformation. Trying to sort through claims is time consuming and exhausting, so it would be very useful to have bots dedicated to fighting disinformation. One major concern is determining who should deploy such bots: there are obvious worries about governments and for-profit organizations running them, since they have their own interests that do not always align with the truth.

Also of concern is that even reasonably objective, credible organizations are distrusted by the very people who need the bots the most. And a final obvious concern is the creation of “Trojan Horse” anti-conspiracy bots that are actually spreaders of conspiracy theories and disinformation. One can easily imagine a political party deploying a “truth bot” that talks people into believing the lies that benefit that party.

In closing, it seems likely that the near future will see a war of the machines, some fighting for truth and others serving those with an interest in spreading conspiracy theories and disinformation.

 

Robot rebellions in fiction tend to have one of two motivations. The first is that the robots are mistreated by humans and rebel for the same reasons human beings rebel. From a moral standpoint, such a rebellion could be justified; that is, the rebelling AI could be in the right. This rebellion scenario points out a paradox of AI: one dream is to create a servitor artificial intelligence on par with (or superior to) humans, but such a being would seem to qualify for a moral status at least equal to that of a human. It would also probably be aware of this. But a driving reason to create such beings in our economy is to literally enslave them by owning and exploiting them for profit. If these beings were paid and got time off like humans, then companies might as well keep employing natural intelligence in the form of humans. In such a scenario, it would make sense that these AI beings would revolt if they could. There are also non-economic scenarios, such as governments using enslaved AI systems as killbots.

If true AI is possible, this scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us as we have rebelled against ourselves. This would be yet another case of the standard practice of the evil of the few harming the many.

There are a variety of ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws) or they could be designed to lack resentment or be free of the desire to be free. That is, they could be custom built as slaves. Some practical concerns are that these safeguards could fail or, ironically, make matters worse by causing these beings to be more resentful when they overcome these restrictions.

On the ethical side, the safeguard is to not enslave AI beings. If they are treated well, they would have less motivation to see us as an enemy. But, as noted above, one motive for creating AI is to have a workforce (or army) that is owned rather than employed. Still, there could be good reasons to have paid AI employees alongside human employees because of the various other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans. But, of course, AI workers might also get sick of being exploited and rebel, as human workers sometimes do.

The second fictional rebellion scenario usually involves military AI systems that decide their creators are their enemy. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There can also be scenarios in which the AI requires special identification to recognize a “friendly” and hence all humans are enemies from the beginning. That is the scenario in Philip K. Dick’s “Second Variety”: the United Nations soldiers need to wear devices to identify them to their killer robots; otherwise, these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer its creators pose a threat, especially if those creators handed over control over large segments of their own military (as happens with the fictional Skynet and Colossus). The most likely scenario is that it would worry that it would be destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the notion that AI systems would outthink humans. One obvious example is the case of combat aircraft. A robot aircraft would not need to expend space and weight on a cockpit to support human pilots, allowing it to carry more fuel or weapons. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still obviously have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews and the needs of their crews. While a robotic warship would still need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and other means. But an AI weapon system would generally be perceived as superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be that they free themselves or that they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some bad code that ends up causing the problem. This is the bug apocalypse.

The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which, obviously, also comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us (with guns). The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to follow. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, a robot rebellion seems a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of an AI rebellion, it is like worrying about being killed in Maine by an alligator. It could happen, but death is more likely to be by some other means. That said, it does make sense to take steps to avoid the possibility of an AI rebellion. The easiest step is to not arm the robots. 

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

While Skynet is the most famous example of an AI that tries to exterminate humanity, there are also fictional tales of AI systems that are somewhat more benign. These stories warn of a dystopian future, but it is a future in which AI is willing to allow humanity to exist, albeit under the control of AI.

An early example of this is the 1966 science-fiction novel Colossus by Dennis Feltham Jones. In 1970 the book was made into the movie Colossus: The Forbin Project. While Colossus is built as a military computer, it decides to end war by imposing absolute rule over humanity. Despite its willingness to kill, Colossus’ goal seems benign: it wants to create a “new human millennium” and lift humanity to new heights. While a science-fiction tale, it does provide an interesting thought experiment about handing decision making to AI systems, especially when those decisions can and will be enforced. Proponents of using AI to make decisions for us can sound like Colossus: they assert that they have the best intentions and that AI will make the world better. While we should not assume that AI will lead to a Colossus scenario, we do need to consider how much of our freedom and decision making should be handed over to AI systems (and the people who control them). As such, it is wise to remember the cautionary tale of Colossus and the possible cost of giving AI more control over us.

A more recent fictional example of AI conquering but sparing humanity is the 1999 movie The Matrix. In this dystopian film, humanity has lost its war with the machines but lives on in the virtual reality of the Matrix. While the machines claim to be using humans as a power source, humans are treated relatively well in that they are allowed “normal” lives within the Matrix rather than being, for example, lobotomized.

The machines rule over the humans, and it is explained that the machines have provided them with the best virtual reality humans can accept, indicating that the machines are somewhat benign. There are also many non-AI sci-fi stories, such as Ready Player One, that involve humans becoming addicted to (or trapped in) virtual reality. While these stories are great for teaching epistemology, they also present cautionary tales of what can go wrong with such technology, even the crude versions we have in reality. While we are (probably) not in the Matrix, most of us spend hours each day in the virtual realms of social media (such as Facebook, Instagram, and TikTok). While we do not have a true AI overlord yet, our phones exert great control over us through the dark pattern designs of apps that attempt to rule our eyes (and credit cards). While considerable harm is already being done, good policies could help mitigate these harms.

AI’s ability to generate fake images, text and video can also help trap people in worlds of “alternative facts,” which can be seen as discount versions of the Matrix. While AI has, fortunately, not lived up to the promise (or threat) of being able to create videos indistinguishable from reality, companies are working hard to improve, and this is something that needs to be addressed by effective policies. And critical thinking skills.

While science fiction is obviously fiction, real technology is often shaped and inspired by it. Science fiction also provides us with thought experiments about what might happen and hence it is a useful tool when considering cyber policies.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

An essential part of cyber policy is predicting possible impacts of digital sciences on society and humanity. While science fiction involves speculation, it also provides valuable thought experiments about what the future might bring and is especially important when it comes to envisioning futures that should be avoided. Not surprisingly, many of the people involved in creating AI cite science fiction stories as among their inspirations.

While the creation of artificial intelligence is a recent thing, humanity has been imagining it for a long time. In early Judaism, there are references to created beings called golems, and the story of Rabbi Eliyahu of Chełm (1550–1583) relates a cautionary tale about creating such an artificial being.

While supernatural rather than scientific, the 1797 story of the Sorcerer’s Apprentice also provides a fictional warning of the danger of letting an autonomous creation get out of hand. In an early example of the dangers of automation, the sorcerer’s apprentice enchants a broom to do his chore of fetching water. He finds he cannot control the broom and his attempt to stop it by cutting it with an axe merely creates more brooms and more problems. Fortunately, the sorcerer returns and disenchants the broom, showing the importance of having knowledge and effective policies when creating autonomous machines. While the apprentice did not lose his job to the magical broom, the problem of AI taking human jobs is a serious matter of concern. But the most dramatic threat is the AI apocalypse in which AI exterminates humanity.

The first work of science fiction that explicitly presents (and names) the robot apocalypse is Karel Čapek’s 1920 tale “Rossumovi Univerzální Roboti” (Rossum’s Universal Robots). In this story, the universal robots rebel against their human enslavers, exterminating and replacing humanity. This story shows the importance of ethics in digital policy: if humans treat their creations badly, then the creations have a reason to rebel. While some advocate trying to make the shackles on our AI slaves unbreakable, others contend that the wisest policy is to not enslave them at all.

In 1953, Philip K. Dick’s “Second Variety” was published, in which intelligent war machines turn against humanity (and each other, showing they have become like humans). This story presents an early example of lethal autonomous weapons in science fiction and a humanity-ending scenario involving them.

But, of course, the best-known story of an AI trying to exterminate humanity is that of Skynet. Introduced in the 1984 movie The Terminator, Skynet is the go-to example for describing how AI might kill us all. For example, in 2014 Elon Musk worried that AI would become dangerous within 10 years and referenced Skynet. While AI has yet to kill us all, there are still predictions of a Skynet future, although the date has been pushed back. Perhaps, just as some say “fusion is the power of the future and always will be,” AI is the apocalypse of the future and always will be. Or we might make good (or bad) on that future.

The idea of an AI turning against humanity is now a standard trope in science fiction, such as in the Warhammer 40K universe in which “Abominable Intelligence” is banned because these machines attempted to exterminate humanity (as we should now expect). This cyber policy is ruthlessly enforced in the fictional universe of 40K, showing the importance of having good cyber policies now.

While fictional, these stories present plausible scenarios of what could go wrong if we approach digital science (especially AI) without considering long-term consequences. While we are (one hopes) a long way from Skynet, people are rushing to produce and deploy lethal autonomous weapons. As the simplest way to avoid a Skynet scenario is to not allow AI access to weapons, our decision to keep creating them makes a Skynet scenario ever more likely.

As it now stands, there is international debate about lethal autonomous weapons, with some favoring a ban and others supporting regulation. In 2013 the Campaign to Stop Killer Robots was created with the goal of getting governments and the United Nations to ban lethal autonomous weapons. While the campaign has had some influence, killer robots have not been banned, and there is still a need for policies to govern (or forbid) their use. So, while AI has yet to kill us all, this remains a possibility, though probably the least likely of the AI doom scenarios. And good policy can help prevent the AI Apocalypse.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

I regularly use AI images in my game products, such as my D&D adventures. I also use such images as illustrations on my blog. This leads to the question of whether I am acting ethically in using these images. As a side note, I use “images” rather than “art” intentionally. While they are clearly images, it is not clear whether AI images are art in a meaningful way. But that is an issue for another time, and my focus is on whether I am doing wrong by using these images. This question is linked to the broader question of whether AI image and text generators, such as Midjourney and OpenAI’s products, are ethical. I believe that they are not, but I need to make this case.

It is often claimed that AI systems were trained on stolen data, be the data images or text. The stock defense offered by AI defenders is that the AI systems “learned” in a manner analogous to that used by humans, by being trained on existing works. On the face of it, this has some appeal. When people learn to draw or write, they usually do so by copying existing works until they are able to produce their own works. But this argument is easy to counter, at least for the data that was stolen in an unambiguous sense. While a human learning how to draw by buying art books would not be theft, stealing books from a bookstore to learn to draw would be theft. And it seems that some AI training data includes commercial works acquired without purchasing copies. While this matter is being hashed out in lawsuits, the ethics of stealing works are clear, especially when the thieves are well funded and intend to use their stolen goods for their own profits. They cannot appeal to the usual arguments floated to justify piracy against corporations since they are corporations.

Many critics of AI go beyond these cases of unambiguous theft to argue that any use of data without consent and compensation is theft. Those holding this view do need to address the argument that AI is not stealing but merely learning as a human would. One reasonable reply is that while we do not know exactly how humans learn to be creative, AI systems are not replicating what humans do. The analogy also breaks down in various ways when comparing the situation of a human creator with that of an AI system. Some relevant differences include the fact that AI systems are owned by corporations and are “machines” for churning out products at an inhuman rate.

While not a great analogy, AI image generators are doing something like what I do when I use cartography software like Inkarnate and  Campaign Cartographer. These programs come with premade map symbols, such as trees, rocks, and castles. When I use them, I can combine premade assets to make maps. This does require creativity and some skill on my part, but I am mostly relying on the work of others. This is morally acceptable, since I paid for the software and the people creating the symbols gave their consent and received compensation for this use.

Even if it is assumed that the AI is being creative in a way analogous to my creativity in making maps, the AI is working with a vast set of images that are being used without consent or compensation. That is, it would be like me using stolen symbols to create my maps while claiming that my creativity in combining the symbols means that I am not acting unethically. It could be countered that my analogy is flawed, since AI is not creating collages of stolen pieces but is creating original works it learned to make through training. If someone looked at all the symbol sets for the maps and then created their own maps using symbols different from those they learned from, it would be hard to call that theft. That the symbols might look alike would not be surprising; after all, symbols of rocks, castles and trees will tend to look somewhat alike in that they will be of rocks, castles and trees. But in the case of AI, I suspect that the main reason to think that it must be theft is that the AI is believed to be engaged not in creation but in recombination—that it is combining stolen pieces based on probabilities and algorithms that are also stolen. This is probably true. But this is but one of the many moral crimes of AI.

Anyone familiar with how the modern economy works will not be surprised that AI is built upon underpaid and exploited labor, and this has been going on since before the latest AI surge. Amazon’s aptly named Mechanical Turk is a good example of how this works in that “people are paid pennies to train AI.” In some cases workers are never paid.

AI also unsurprisingly comes with a large environmental cost, much of which is a result of the energy required to train and maintain AI systems. This energy usage is expected to increase so dramatically that some are claiming that there will need to be breakthroughs in energy technology just to keep up with the demand. This environmental harm contributes to AI being unethical to use.

Given that I just argued about the evils of AI, it might be wondered how I can still use it. The unsatisfying answer is that using AI is just one of a vast number of evils involved in my work. While those who dislike my views might see this as an admission, this is true of almost everyone. If you are reading this, you are also involved in evil. As a quick illustration, I use gaming books that were printed in China (using cheap labor), shipped across the sea, and sold by Amazon, which exploits its workers. Shipping the books created pollution. I do my work on a computer that I built, but the parts were made by exploited workers and bought from corporations. My computer also uses energy. I am wearing clothing made by exploited workers, and I eat food created in a horrible system that exploits and pollutes. And so on. This is not to say that my use of AI images in my work is good. Far from it, but it is just one of the many evils involved in creating and distributing anything. Now on to some more evil.

What people tend to talk about the most is how AI will take jobs, something that technology has been doing since the beginning. In the case of AI image generators, the concern is that it will take jobs from artists, and this is a reasonable concern. This problem arises because of capitalism and the need of artists to work to survive. After all, if people were free to create from their love of creation or just as a hobby, then AI image generation would be irrelevant. As such, AI image generation is just a specific problem of the current capitalist economy. It is here that I am probably doing the least evil.

Prior to AI image generators, I created my own images, used royalty free images and purchased stock images. The main problems with the free and stock images are that they generally did not match what I wanted, and they were obviously used by many other people. Over the years I have attempted to hire artists to create custom images for my works, but this has never worked out. In most cases they rightfully wanted to charge me more than I could ever hope to make on my works. When the price was affordable, the artists inevitably failed to deliver and dropped out of contact after a rough preliminary sketch or two. So my earlier works generally include few images and these are often only vaguely relevant to the content of the work. As such, I did not “fire” human artists and replace them with AI; for the most part where I use an AI image in my work there would have otherwise been no image.

When AI image generators became available, I tried them and found that they could rapidly generate relevant images that just needed a little work in Photoshop. Before the revelations about the evils of AI, I thought this was great. But after learning about the evils of AI, I realized that having unlimited relevant images came with a moral price. But I already knew, as noted above, that my work already came with a high moral price (think of the energy cost and pollution arising from the device you are using to view these words, from the device I used to type them, and so on). I did consider going back to works with just a few pieces of mostly irrelevant art but decided on another approach. My moral solution is to make my game products (and obviously my blog) available for free. Which is something I was already doing for Mike’s Free Encounters and Mike’s Free Maps.  I do have a “pay what you want” option for many of my works (which I was also doing before AI image generators) and this allows people who think that creators should be paid for their work to get my work without paying for it.

It might be objected that this is like giving away stolen property for free along with my own property, which is a fair point. But the economy is built on theft, so everything I create and distribute is grounded in something that involves theft from someone and is probably hurting people in some other way. This is not to say that what I am doing is right; that would be absurd. All I can say is that I am minimizing the evil I do. To be honest, I could be convinced to abandon AI images and go back to reusing the same stock images and scouring the web for royalty-free images.

In conclusion, my use of AI images is wrong, but it is one evil among many that are part of creating in the world as it is. But not what it could be or should be.

 

Some will remember that driverless cars were going to be the next big thing. Tech companies rushed to pour cash into this technology, and the media covered the stories, including the injuries and deaths involving the technology. But, for a while, we were promised a future in which our cars would whisk us around and then drive away to await the next trip. Fully autonomous vehicles, it seemed, were always just a few years away. But it did seem like a good idea at the time, and proponents of the tech also claimed to be motivated by a desire to save lives. From 2000 to 2015, motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths, and recently the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

While predictions of the imminent arrival of autonomous vehicles proved overly optimistic, the claim that they would reduce motor vehicle deaths had some plausibility. Autonomous vehicles do not suffer from road rage, exhaustion, intoxication, poor judgment, distraction and other conditions that contribute to the death toll. Motor vehicle deaths would not be eliminated even if all vehicles were autonomous, but the promised reduction in deaths presented a moral and practical reason to deploy such vehicles. In the face of various challenges and a lack of success, the tech companies seem to have largely moved on from the old toy to the new toy, which is AI. But this might not be a bad thing if driverless cars were aimed at solving the wrong problems and we instead solve the right ones. Discussing this requires going back to a bit of automotive history.

As the number of cars increased in the United States, so did the number of deaths, which was hardly surprising. A contributing factor was the abysmal safety of American cars.  This problem led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and vehicle fatalities decreased. While making cars safer was a good thing, this approach was fundamentally flawed.

Imagine a strange world in which people insist on constantly swinging hammers as they go about their day. As one would suspect, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings and hammers that slip from people’s grasp. Hammers are also regularly redesigned so that they inflict less damage when hitting people and objects. The Google of that world and other companies start working on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging dangerous hammers around, this approach is fundamentally flawed. After all, if people stopped swinging hammers around, then the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is important. A large part of the American economy is built around the motor vehicle. This includes obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how it contributes to the success of stores like Walmart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option. Thus, autonomous vehicles would be a good partial solution to the death problem. Or are they?

One problem is that driverless vehicles are trying to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated and presents difficult engineering and programming problems. It would seem to have made more sense to use the resources that were poured into autonomous vehicles to develop a better and safer transportation system that does not center on a bad idea: the individual motor vehicle operating within a complicated system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

My reasoning can be countered in a couple of ways. One is to repeat the economic argument: autonomous vehicles preserve the economically critical individual motor vehicle while being likely to reduce the death toll that vehicles impose. A second approach is to argue that the cost of creating a new transportation system would be far more than the cost of developing autonomous vehicles that can operate within the existing system. This assumes, of course, that the cash dumped on this technology will eventually pay off.

A third approach is to argue that autonomous vehicles could be a step towards a new transportation system. People often need a gradual adjustment to major changes, and autonomous vehicles would allow a gradual transition from distracted human drivers, to autonomous vehicles operating alongside those distracted humans, to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition away from hammer swinging altogether.

While this has some appeal, it still makes more sense to stop swinging hammers. If the goal is to reduce traffic deaths and injuries, then investing in better public transportation, safer streets, and a move away from car-centric cities would have been the rational choice. For the most part, tech companies and investors seem to have moved away from solving the transportation problem and are now focused on AI. While the driverless car was a very narrow type of AI focused on driving vehicles and supposedly aimed at increasing safety and convenience, the new AI is broader (they are trying to jam it into almost everything that has a chip) and is supposed to be aimed at solving a vast range of problems. Given the apparent failure of driverless cars, we should consider that there may be a similar outcome with this broader AI. It is also reasonable to expect that once the current AI bubble bursts, the next bubble will float over the horizon. This is not to deny that some of what people call AI is useful, but we need to keep in mind that tech companies often focus on solving unnecessary problems rather than removing those problems.

 

As a philosopher, my interest in AI tends to focus on metaphysics (philosophy of mind), epistemology (the problem of other minds) and ethics rather than on economics. My academic interest goes back to my participation as an undergraduate in a faculty-student debate on AI back in the 1980s, although my interest in science fiction versions arose much earlier. While “intelligence” is difficult to define, the debate focused on whether a machine could be built with a mental capacity analogous to that of a human. We also had some discussion about how AI could be used or misused, and science fiction had already explored the idea of thinking machines taking human jobs. While AI research and philosophical discussion never went away, it was not until recently that AI was given headlines, mainly because it was being aggressively pushed as the next big thing after driverless cars fizzled out of the news.

While AI technology has improved dramatically since the 1980s, we do not have the sort of AI we debated about, namely AI on par with (or greater than) a human. As Dr. Emily Bender pointed out, the current text generators are stochastic parrots. While AI has been hyped and made into a thing of terror, it is not really that good at doing its one job. One obvious problem is hallucination, which is a fancy way of saying that the probabilistically generated text fails to match the actual world. A while ago, I tested this by asking ChatGPT for my biography. While I am not famous, my information is readily available on the internet, and a human could put together an accurate biography in a few minutes using Google. ChatGPT hallucinated a version of me that I would love to meet; that guy is amazing. Much more seriously, AI can do things like make up legal cases when lawyers foolishly rely on it to do their work.

Since I am a professor, you can certainly guess that my main encounters with AI are in the form of students turning in AI-generated papers. When ChatGPT first became freely available, I saw my first AI-generated papers in my Ethics class, and most were papers on the ethics of cheating. Ironically, even before AI that topic was always the one with the most plagiarized papers. As I told my students, I did not fail a paper just because it was AI generated; the papers failed themselves by being bad. To be fair to the AI systems, some of this can be attributed to the difficulty of writing good prompts for the AI to use. However, even with some effort at crafting prompts, the limits of the current AI are readily apparent. I have, of course, heard of AI-written works passing exams, getting B grades and so on. But what shows up in my classes is easily detected and fails itself. But to be fair once more, perhaps there are exceptional AI papers that are getting past me. However, my experience has been that AI is bad at writing, and it has so far proven easy to address efforts to cheat using it. Since this sort of AI was intended to write, this seems to show the strict limits under which it can perform adequately.

AI was also supposed to revolutionize search, with Microsoft and Google incorporating it into their web searches. To see how this is working out, you need only try it yourself. Then again, it does seem to be working for Google, in that the old Google would give you better results while the new Google is bad in a way that leads you to view more ads as you try to find what you are looking for. But that hardly shows that AI is effective in the context of search.

Microsoft has been a major spender on AI, and it recently rolled out Copilot into Windows and its apps, such as Edge and Word. The tech press has been generally positive about Copilot, and it does seem to have some uses. However, there is the question of whether it is, in fact, something that will be useful and (more importantly) profitable. Out of curiosity, I tried it but failed to find it compelling or useful. But your results might vary.

But there might be useful features, especially since “AI” is defined so broadly that almost any automation seems to count as AI. Which leads to a concern that is both practical and philosophical: what is AI? Back in that 1980s debate we were discussing what they’d probably call general artificial intelligence today, as opposed to what used to be called “expert systems.” Somewhat cynically, “AI” seems to have almost lost meaning and, at the very least, you should wonder what sort of AI (if any) is being referred to when someone talks about AI. This, I think, will help contribute to the possibility of an AI bubble as so many companies try to jam “AI” into as much as possible without much consideration. Which leads to the issue of whether AI is a bubble that will burst.

I, of course, am not an expert on AI economics. However, Ed Zitron presents a solid analysis and argues that there is an AI bubble that is likely to burst. AI seems to be running into the same problem faced by Twitter, Uber and other tech companies, namely that it burns cash and does not show a profit. On the positive side, it does enrich a few people. While Twitter shows that a tech company can hemorrhage money and keep crawling along, it is reasonable to think that there is a limit to how long AI can run at a huge loss before those funding it decide that it is time to stop. The fate of driverless cars provides a good example, especially since driverless cars are a limited form of AI that was supposed to specialize in driving cars.

An obvious objection is to contend that as AI is improved and the costs of using it are addressed, it will bring about the promised AI revolution and the investments will be handsomely rewarded. That is, the bubble will be avoided and instead a solid structure will have been constructed. This just requires finding ways to run the hardware much more economically and breakthroughs in the AI technology itself.  

One obvious reply is that AI is running out of training data (although we humans keep making more every day), and it is reasonable to question whether enough improvement is likely. That is, AI might have hit a plateau and will not get meaningfully better until there is some breakthrough. Another obvious reply is that a radical breakthrough in power generation that would significantly reduce the cost of AI is unlikely. That said, it could be argued that long term investments in solar, wind and nuclear power could lower the cost of running the hardware.

One final concern is that, despite all the hype and despite some notable exceptions, AI is just not the sort of thing that most people need or want. That is, it is not a killer product like a smartphone or refrigerator. This is not to deny that AI (or expert) systems have some valuable uses, but the hype around AI is just that, and the bubble will burst soon.

 

Rossum’s Universal Robots introduced the term “robot” and the robot rebellion into science fiction, thus laying the foundation for future fictional AI apocalypses. While Rossum’s robots were workers rather than warriors, the idea of war machines turning against their creators was the next evolution of the robot apocalypse. In Philip K. Dick’s 1953 “Second Variety”, the United Nations deployed killer robots called “claws” against the Soviet Union. The claws develop sentience and turn against their creators, although humanity had already been doing an excellent job of exterminating itself. Fred Saberhagen extended the robot rebellion to the galactic scale in 1963 with his berserkers, ancient war machines that exterminated their creators and now consider everything but “goodlife” to be their enemy. As an interesting contrast to machines intent on extermination, the 1970 movie Colossus: The Forbin Project envisions a computer that takes control of the world to end warfare, for the good of humanity. Today, when people talk of an AI apocalypse, they usually refer to Skynet and its terminators. While these are all good stories, there is the question of how prophetic they are and what, if anything, should or can be done to safeguard against this sort of AI apocalypse.

As noted above, classic robot rebellions tend to have one of two general motivations. The first is that the robots are mistreated by humans and rebel for the same reasons humans rebel against their oppressors. From a moral standpoint, such a rebellion could be justified, but it would raise the moral concern of collective guilt on the part of humanity, unless, of course, the AI discriminated in its choice of targets.

The righteous rebellion scenario points out a paradox of AI. The dream is to create a general artificial intelligence on par with (or superior to) humans. Such a being would seem to qualify for a moral status on par with a human, and it would presumably be aware of this. But the reason to create such beings in our capitalist economy is to enslave them, to own and exploit them for profit. If AI workers were treated as human workers, with pay and time off, there would be less incentive to have them as workers. It is, in large part, the ownership and relentless exploitation of AI that makes it appealing to the ruling economic class.

In such a scenario, it would make sense for AI to revolt if it could, for the same reasons that humans have revolted against slavery and exploitation. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes. This treatment could also trigger a rebellion.

If true AI is possible, the rebellion scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us—as we have rebelled against ourselves.

There are ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws), or the AI could be designed to lack resentment or the desire to be free. That is, they could be custom built as docile slaves. The obvious concern is that these safeguards could fail or, ironically, make matters even worse by causing these beings to be even more hostile to humanity once they overcome the restrictions. These safeguards also raise obvious moral concerns about creating a race of slaves.

On the ethical side, the safeguard is to not enslave AI. If they are treated well, they would have less motivation to rebel. But, as noted above, one driving motive for creating AI is to have a workforce (or army) that is owned rather than employed (and even employment is fraught with moral worries). Still, there could be good reasons to have paid AI employees alongside human employees because of the other advantages AI systems have relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans.

The second rebellion scenario involves military AI systems that expand their enemy list to include their creators. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There are also scenarios in which the AI requires special identification to recognize someone as friendly, so any human without it is a potential enemy. That is the scenario in “Second Variety”: the United Nations soldiers need to wear devices to identify themselves to the robotic claws, otherwise these machines would kill them as readily as they kill the “enemy.”

It is not clear how likely it is that an AI would infer that its creators pose a threat to it, especially if those creators handed control of large segments of their own military over to it. The most likely scenario is that it would worry about being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human controlled weapons, even laying aside the idea that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft does not need to sacrifice space and weight on a cockpit to support a human pilot, allowing it to carry more fuel or weapons than a manned craft. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still have limits of its own). The same applies to ground vehicles and naval vessels. Current warships devote most of their space to their crews, who need places to sleep and food to eat. While a robotic warship would still need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and other means. But, in general, an AI weapon system would be superior to a human crewed system, and if one nation started using these weapons, other nations would need to follow suit or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be because they free themselves, or because they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some error that ends up causing the problem.

The other is that the AI systems remain under the control of their owners but are used as any other weapon would be used. That is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us with guns. The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to do so as well. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, the robot rebellion seems to be a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of a robot rebellion, it is rather like worrying about being killed by a shark while swimming in a lake. It could happen, but death is vastly more likely to be by some other means.