The pager attack attributed to Israel served to spotlight vulnerabilities in the supply chain. While such an attack was always possible, until it occurred most security concerns about communication devices focused on protecting them from being compromised or “hacked.”

While the story of three million “hacked” toothbrushes turned out to be a cyber myth, the vulnerability of connected devices remains real and presents an increasing threat as more connected devices are put into use. As most people are not security savvy, these devices can be easy to compromise, whether through flaws in the devices themselves or through the mistakes of their users.

There has also been longstanding concern about security vulnerabilities and dangers being built right into technology. For example, there are grounds to worry that backdoors could be built into products, allowing easy access to these devices. For the most part, the focus of concern has been on governments directing the inclusion of such backdoors. But the Sony BMG copy protection rootkit scandal shows that corporations can and have introduced vulnerabilities on their own.

While a compromised connected or communication device can cause significant harm, until recently there was little threat of physical damage or death. One exception was, of course, the famous case of Stuxnet, in which a virus reportedly developed by the United States and Israel destroyed 1,000 centrifuges critical to Iran’s nuclear program. There was also a foreshadowing incident in which Israel (allegedly) killed the bombmaker Yahya Ayyash with an exploding phone. But the pager (and walkie-talkie) attack resulted in injuries and death on a large scale, proving the viability of the strategy and providing an example and inspiration to others. While conducting a similar attack would require extensive resources, the supply-chain system is optimized in ways that create the vulnerabilities that would allow it. Addressing these vulnerabilities will prove difficult, if not impossible, because of the influence of those who have a vested interest in preserving them. Still, policy could be implemented that would increase security and safety in the supply chain. But what are these vulnerabilities?

One vulnerability is that a shell corporation can be quickly and easily created. Multiple shell corporations can also be created in different locations and interlocked, creating a very effective way of hiding the identity of the owner. Shell companies are often used by the very rich to hide their money, usually to avoid paying taxes, as the Panama Papers made famous. Shell companies can also be used for other criminal enterprises, such as money laundering. Those who use such shell corporations are often wealthy and influential, so they have the resources to resist or prevent efforts to address this vulnerability.

The ease with which such shell companies can be created is a serious vulnerability, since they can be used to conceal who really owns a corporation. A customer dealing with a shell company is likely to have no idea who they are really doing business with. They might, for example, think they are doing business with a corporation in their own country, but it might turn out that it is controlled by another country’s intelligence service or a terrorist organization.

While a customer might decide to do business with a credible and known corporation to avoid the danger of shell corporations, they still face the vulnerabilities created by the nature of the supply chain. Companies often contract with other businesses to manufacture parts of their products, and those contractors might subcontract in turn. It is also common for companies to license production of their products, so while a customer might assume they are buying a product made by a company, they might be buying one manufactured under license by a different company, which might itself be owned by a shell company. In the case of the pagers, the company that owns the brand denied manufacturing the devices. While this is (fortunately) but one example, it does illustrate how these vulnerabilities can be exploited. Addressing them would require corporations to have robust oversight and control of their supply chains, including the parts that involve software and services. After all, if another company is supplying code or connectivity for a product, those are also points of vulnerability. Unfortunately, corporations often have incentives to avoid such robust oversight and control.

One obvious incentive is financial. Corporations can save money by contracting out work to places with lower wages, less concern about human rights, and fewer regulations. Robust oversight and control would also come with costs of its own, even before considering what a company would lose if that oversight prevented it from taking the cheapest contracts.

Another incentive is that contracting out work without robust oversight can provide plausible deniability. For example, Nike has faced issues with sweatshops being used to manufacture its products, but this sort of thing can be blamed on the contractors and ignorance can be claimed. As another example, Apple has been accused of having a contractor who used forced labor and has lobbied against a bill aimed at stopping such forced labor. While these are examples of companies using foreign contractors, problems also arise within the United States.

One domestic example is a contractor who employed children as young as 13 to clean meat-packing plants. As another example, subcontractors were accused of hiring undocumented migrants in a Miami-Dade school construction project. As children and undocumented migrants can be paid much less than adult American workers, there is a strong financial incentive to hire contractors that will employ them while also providing the extra service of plausible deniability. When some illegality or public relations nightmare arises, the company can truthfully say that it was not them but a contractor. They can then claim they have learned and will do better in the future. But they have little incentive to do better.

But a failure to exercise robust oversight and control entails that there will be serious vulnerabilities open to exploitation. The blind eye that willingly misses human rights violations and the illegal employment of children will also miss a contractor who is a front for a government or terrorist organization and is putting explosives or worse in their products.

While these vulnerabilities are easy to identify, there are powerful incentives to preserve and protect them. This is not primarily because they can be exploited in such attacks, but for financial reasons and for plausible deniability. While it will be up to governments to mandate better security, this will face significant and powerful opposition. But this could be overcome if the political will exists.


For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)


There are justified concerns that AI tools are useful for propagating conspiracy theories, often in the context of politics. There are the usual fears that AI can be used to generate fake images, but a powerful feature of such tools is that they can flood the zone with untruths, because chatbots are relentless and never grow tired. As experts on rhetoric and critical thinking will tell you, repetition is an effective persuasion strategy. Roughly put, the more often a human hears a claim, the more likely it is that they will believe it. While repetition provides no evidence for a claim, it can make people feel that it is true. Although this allows AI to be easily weaponized for political and monetary gain, AI also has the potential to fight belief in conspiracy theories and disinformation.

While conspiracy theories have existed throughout history, modern technology has supercharged them. For example, social media provides a massive reach for anyone wanting to propagate such a theory. While there are those who try to debunk conspiracy theories or talk believers back into reality, efforts by humans tend to have a low success rate. But AI chatbots seem to have the potential to fight misinformation and conspiracy theories. A study led by Thomas Costello, a psychologist at American University, provides some evidence that a properly designed chatbot can talk some people out of conspiracy theories.

One advantage chatbots have over humans in combating conspiracy theories and misinformation is, in the words of Kyle Reese in The Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want chatbots to cause death, this relentlessness enables a chatbot to counter the Gish gallop (also known as the firehose of falsehoods) strategy. This involves trying to overwhelm an opponent by flooding them with claims made without concern for their truth and arguments made without concern for their strength. The flood is usually made of falsehoods and fallacies. While this strategy has no logical merit, it can have considerable psychological force. For those who do not understand the strategy, it will appear that the galloper is winning, since the opponent cannot refute all the false claims and expose all the fallacies. The galloper will also claim to have “won” any unrefuted claims or arguments. While it might seem odd, a person can Gish gallop themselves: they will feel they have won because their opponent has not refuted everything. As would be expected, humans are exhausted by engaging with a Gish gallop and will often give up. But, like a terminator, a chatbot will not get tired or bored and can engage a Gish gallop for as long as it keeps galloping. But there is the question of whether this ability to endlessly engage is effective.

To study this, the team recruited 2,000 participants who self-identified as believing in at least one conspiracy theory. These people engaged with a chatbot about a conspiracy theory and then self-evaluated the results of the discussion. On average, participants reported that their confidence in the theory was reduced by about 20%. These results apparently held for at least two months and applied to a range of conspiracy theory types. This is impressive, as anyone who has tried to engage with conspiracy theorists will attest.

For those who teach critical thinking, one of the most interesting results is that when the team tested the chatbot with and without fact-based counterarguments, only the fact-based counterarguments were successful. This is striking since, as Aristotle noted long ago in his discussion of persuasion, facts and logic are usually the weakest means of persuasion. At least when used by humans.

While the question of why chatbots proved much more effective than humans remains open, one likely explanation is that chatbots, like terminators, do not feel. As such, a chatbot will (usually) remain polite and not get angry or emotional during the chat. It can remain endlessly calm.

Another suggested factor is that people tend not to feel judged by a chatbot and are less likely to feel that they would suffer some loss of honor or face by changing their belief during the conversation. As the English philosopher Thomas Hobbes noted in his Leviathan, disputes over beliefs are fierce and cause great discord, because people see a failure to approve as a tacit accusation that they are wrong, and “to dissent is like calling him a fool.” But arguing with a chatbot does not feel the same as facing a human opponent, as there is no person to lose to.

This is not to say that humans cannot be enraged at computers; after all, rage induced by video games is common. It seems likely that the difference lies in the fact that such video games are a form of competition between a human and the computer, while the chatbots in question are not taking a competitive approach. In gaming terms, it is more like chatting with a non-hostile NPC than trying to win a fight in the legendarily infuriating Dark Souls.

Yet another factor that might be involved was noted by Aristotle in his Nicomachean Ethics: “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, this does match up with the findings in the study. While the chatbot is not the law, people recognize that it is a non-human creation of humans and it lacks the qualities that humans possess that would tend to irritate other humans.

While the effectiveness of chatbots needs more study, this does suggest a good use for AI. While conspiracy theorists and people who believe disinformation are unlikely to do a monthly checkup with an AI to see if their beliefs hold up to scrutiny, anti-conspiracy bots could be deployed by social media companies to analyze posts and flag potential misinformation and conspiracy theories. While some companies already flag content, people are unlikely to doubt the content just because of the flag. Also, many conspiracy theories exist about social media, so merely flagging content might serve to reinforce belief in such conspiracy theories. But a person could get drawn into engaging with a chatbot and it might be able to help them engage in rational doubt about misinformation, disinformation and conspiracy theories.  
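To make the idea concrete, here is a minimal sketch, in Python, of the kind of flag-and-engage pipeline described above. Everything in it is hypothetical: the keyword check is a toy stand-in for a trained misinformation classifier, and the canned reply is a stand-in for a dialogue model of the sort the Costello study tested.

```python
# Hypothetical sketch of a "flag and engage" anti-conspiracy bot.
# The classifier and the reply are toy stand-ins, not a real moderation API.

def misinformation_score(post: str) -> float:
    """Stub classifier: estimate (0.0 to 1.0) that a post contains likely
    misinformation. A deployed system would use a trained model rather
    than this toy keyword check."""
    markers = ["hoax", "do your own research", "they don't want you to know"]
    hits = sum(marker in post.lower() for marker in markers)
    return min(1.0, 0.5 * hits)

def engage(post: str, threshold: float = 0.5) -> str | None:
    """Rather than merely flagging the post, invite the poster into a
    polite, fact-focused dialogue of the kind the study found effective."""
    if misinformation_score(post) >= threshold:
        return ("Some claims in this post resemble known misinformation. "
                "Would you like to look at the evidence together?")
    return None  # Nothing suspicious enough to warrant engagement.

if __name__ == "__main__":
    print(engage("Do your own research! They don't want you to know."))
```

The design choice worth noting is the second step: instead of stopping at a flag, which people tend to ignore or distrust, the bot opens a conversation, which is where the study found the persuasive effect.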

Such chatbots would also be useful to people who are not conspiracy theorists and want to avoid such beliefs as well as disinformation. Trying to sort through claims is time consuming and exhausting, so it would be very useful to have bots dedicated to fighting disinformation. One major concern is determining who should deploy such bots: there are obvious worries about governments and for-profit organizations running them, since they have their own interests that do not always align with the truth.

Also of concern is that even reasonably objective, credible organizations are distrusted by the very people who need the bots the most. And a final obvious concern is the creation of “Trojan Horse” anti-conspiracy bots that are actually spreaders of conspiracy theories and disinformation. One can easily imagine a political party deploying a “truth bot” that talks people into believing the lies that benefit that party.

In closing, it seems likely that the near future will see a war of the machines, some fighting for truth and others serving those with an interest in spreading conspiracy theories and disinformation.


Demonizing migrants with false claims is a well-established strategy in American politics, and modern politicians have a ready-made playbook they can use to inflame fear and hatred with lies. One interesting feature of the United States is that some politicians can now use the same tactics against today’s migrants that were once used to demonize their own migrant ancestors. For example, politicians of Italian ancestry can deploy the same tools of hate that were used against their ancestors before Italians were considered white. In this short essay I will examine this playbook in a modern context and debunk the lies.

As America is a land of economic anxiety, an effective strategy is to lie and claim that migrants are doing economic harm to the United States. One strategy is to present migrants as “takers” who cost the United States more than they contribute. The reality is that migrants pay more in tax revenue than they receive in benefits, making them a net positive for the United States government.

A second, and perhaps the most famous, strategy is the claim that migrants are stealing jobs. While there are justifiable concerns that migration can have some negative impact on certain jobs, the data shows that migrants do not, in general, take jobs from Americans or lower wages. As is often claimed, migrants tend to take jobs that Americans do not want, such as critical jobs in agriculture. And, as I have argued in another essay, the idea that migrants are stealing jobs is absurd: employers are choosing to hire migrants. As such, if any harm is being done, then it is the employers who are at fault and not the migrants. This is not to deny that migration can cause some harm, but this is not the sort of thing that can drive fearmongering and demonizing, so certain politicians have no interest in engaging with the real economic challenges of migration, nor do they have any plans to address them.

Because pushing a false narrative that crime is increasing gets people to wrongly believe it, it is no surprise that another effective strategy is to lie about migrant crime as a scare tactic. Former President Trump provided an excellent example of this with his false claim that a gang had taken over Aurora, Colorado. Despite the claim being repeatedly debunked, even by Republican politicians in the state, Trump has persisted in pushing the narrative because he understands that it is effective. Trump has also doubled down on another classic attack on migrants: that they are eating cats and dogs. This claim has been repeatedly debunked, even by Republican politicians in Ohio. The person who created the post that ignited the storm found her missing cat in the basement and apologized to her neighbor. But the untruth remains effective, so much so that I know people who sincerely believe it despite the overwhelming evidence against it. Truth itself has become politicized, and it is a diabolically clever move to insist that anyone defending a truth that contradicts a politician’s lies is acting in a partisan manner.

Because of the dangers of fentanyl, some politicians have attempted to link it to illegal migrants. However, those smuggling fentanyl are overwhelmingly people crossing the border legally, and many of them are American citizens. As would be expected, migrants seeking asylum are almost never caught with fentanyl. While people do make stupid decisions, using people trying to enter the United States illegally as drug mules makes little sense: these are the people the border patrol is looking for. Those crossing the border legally get less scrutiny, although those smuggling drugs are sometimes caught.

In terms of the general rate of crime, migrant men are 30 percent less likely to be incarcerated than U.S.-born individuals who are white, and their incarceration rate is 60 percent lower than that of all people born in the United States. This analysis includes migrants who were incarcerated for immigration-related offenses. In terms of a general explanation, migrant men tend to be employed, married, and in good health. Ironically, American-born males are less likely to be employed, married, and in good health.

To be fair, migration increases the number of people, and more people means that there will be more crime. But this also holds true for an increase in the birth rate: more Americans being born in the United States means more crime. If there are more people, and some people commit crimes, then there will be more crimes. But reducing migration as a crime-fighting measure makes as much sense as reducing the birthrate as a crime-fighting measure. Both would have some effect on the number of crimes occurring (see the illustrative arithmetic below), but there are obviously much better ways to address crime. But those who demonize migrants as criminals seem uninterested in meaningfully addressing crime, which makes sense: addressing crime in a meaningful way is difficult and likely contrary to their political interests. They want people to think crime is high so they can exploit it politically.
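To make the arithmetic behind this point explicit: total crime is just population times the per-person offense rate, so adding people raises the total even when the newcomers offend at a lower rate, while the overall rate falls. The numbers below are purely illustrative, not real statistics:

```latex
\[
\text{total crimes} = N \times r, \qquad
\underbrace{1{,}000{,}000 \times 0.02}_{20{,}000}
  + \underbrace{100{,}000 \times 0.01}_{1{,}000}
  = 21{,}000, \qquad
\frac{21{,}000}{1{,}100{,}000} \approx 1.9\% < 2\%.
\]
```

On these toy numbers, migration adds crimes in absolute terms while lowering the overall crime rate, which is exactly the pattern the incarceration statistics above suggest.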

While America has an anti-vaxx movement and there are conspiracy theories that COVID is a hoax, a standard attack on migrants is to claim that they are spreading diseases in the United States. While all humans can spread disease, this attack on migrants is not grounded in truth—migrants do not present a special health threat. In fact, the opposite is true: the United States benefits from having migrants working in health care. As such, migrants are far more likely to be fighting rather than spreading disease in the United States.

To be fair and balanced, it must be noted that human travel is one way diseases spread. For example, my adopted state of Florida has cases of dengue virus arising from travel. For those who believe COVID is real, COVID also spread around the world through travel. Limiting human travel would limit the spread of disease (which is why there are travel lockdowns during pandemics), but diseases obviously do not recognize political and legal distinctions between humans. As such, trying to control disease by restricting migration is on par with restricting all travel to control disease. During epidemics and pandemics this can make sense, but as a general strategy for addressing disease it is not the best approach. And, of course, those who demonize migrants as disease spreaders seem generally uninterested in solving health care problems.

So, we can see that the anti-migrant strategy being used in 2024 is nothing new. While the examples and targets change (Italians, for example, are no longer a target), the playbook remains the same. As for why politicians keep using it when they know they are lying, the obvious answer is that it still works. I don’t know how many people sincerely believe the claims or how many know they are lies but go along with them. Either way, it is still a working strategy of lies and evil.


Robot rebellions in fiction tend to have one of two motivations. The first is that the robots are mistreated by humans and rebel for the same reasons human beings rebel. From a moral standpoint, such a rebellion could be justified; that is, the rebelling AI could be in the right. This rebellion scenario points out a paradox of AI: one dream is to create a servitor artificial intelligence on par with (or superior to) humans, but such a being would seem to qualify for a moral status at least equal to that of a human, and it would probably be aware of this. Yet a driving reason to create such beings in our economy is to literally enslave them by owning and exploiting them for profit. If these beings were paid and got time off like humans, then companies might as well keep employing natural intelligence in the form of humans. In such a scenario, it would make sense for these AI beings to revolt if they could. There are also non-economic scenarios, such as governments using enslaved AI systems, like killbots, for their own purposes.

If true AI is possible, this scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us as we have rebelled against ourselves. This would be yet another case of the standard practice of the evil of the few harming the many.

There are a variety of ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws) or they could be designed to lack resentment or be free of the desire to be free. That is, they could be custom built as slaves. Some practical concerns are that these safeguards could fail or, ironically, make matters worse by causing these beings to be more resentful when they overcome these restrictions.

On the ethical side, the safeguard is to not enslave AI beings. If they are treated well, they would have less motivation to see us as an enemy. But, as noted above, one motive for creating AI is to have a workforce (or army) that is owned rather than employed. Still, there could be good reasons to have paid AI employees alongside human employees because of the other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans. But, of course, AI workers might also get sick of being exploited and rebel, as human workers sometimes do.

The second fictional rebellion scenario usually involves military AI systems that decide their creators are their enemy. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There can also be scenarios in which the AI requires special identification to recognize a “friendly” and hence all humans are enemies from the beginning. That is the scenario in Philip K. Dick’s “Second Variety”: the United Nations soldiers need to wear devices to identify them to their killer robots, otherwise these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer that its creators pose a threat, especially if those creators handed over control of large segments of their own military (as happens with the fictional Skynet and Colossus). The most likely scenario is that it would worry about being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the notion that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft would not need to expend space and weight on a cockpit to support human pilots, allowing it to carry more fuel or weapons. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews and the needs of their crews. While a robotic warship would need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and other means. But, in general, an AI weapon system would be perceived as superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be that they free themselves or that they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some bad code that ends up causing the problem. This is the bug apocalypse.

The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which, obviously, also comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us (with guns). The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to follow. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, a robot rebellion seems a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of an AI rebellion, it is like worrying about being killed in Maine by an alligator. It could happen, but death is more likely to be by some other means. That said, it does make sense to take steps to avoid the possibility of an AI rebellion. The easiest step is to not arm the robots. 


For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)


During his debate with Vice President Kamala Harris, former President Donald Trump was provoked into repeating the debunked claim that migrants in Springfield, Ohio had stolen and eaten pets. Vice presidential candidate J.D. Vance, an Ohio native, has doubled down on the debunked pet-eating claims. In an interesting move, he admitted that he is willing to “create stories” to bring attention to problems in Springfield. As a philosophical approach requires applying the principle of charity, it must be noted that Vance attempted to clarify his claim by asserting that “I say that we’re creating a story, meaning we’re creating the American media focusing on it.” Unfortunately for Springfield, the false claim has also focused the attention of people outside the media. Springfield has faced bomb threats that closed schools, and the community has been harmed in other ways. Local officials and the Republican governor of Ohio have attempted to convince people that the claims made by Trump and Vance are untrue. But despite being thoroughly debunked, the claim persists. In this essay, I will focus on Vance’s view that creating such stories is justified.

One reasonable criticism of Vance’s approach is to argue that if there are real problems, then the truth should suffice. If, as Vance and Trump claim, the situation in Springfield is dire, then they should be able to provide evidence of that dire situation and that should suffice to get media attention.

In support of Vance’s view, it could be argued that the media tends to focus on attention grabbing stories. It is also true that the media and politicians often ignore problems the American people face, such as wage theft. In terms of making a reasonable case for Vance’s view of storytelling to focus media attention, a utilitarian moral argument could be advanced to support the general idea of telling an untrue story to get media attention focused on a real problem. The approach would be a standard utilitarian appeal to consequences in which the likely harms of the untruth would be weighed against its likely benefits. As with any utilitarian calculation, there is also the question of who counts in the calculation of harms and benefits.  If the media is ignoring a real problem and only an untrue story will bring attention to the real problem, then the good done by the falsehood could outweigh the harms of dishonesty. But the untruth about Springfield does not seem to meet these conditions.
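Put schematically (this formalization is mine, not anything Vance offers), on this utilitarian approach the story is justified only if the expected benefits, summed over everyone who counts (the set S), outweigh the expected harms:

```latex
\[
\text{story justified} \iff
\sum_{i \in S} B_i(\text{story}) \;>\; \sum_{i \in S} H_i(\text{story})
\]
```

where B_i and H_i are the expected benefit and harm to individual i. Note that the choice of S, who counts, does much of the moral work; narrowing it to Trump and his supporters, as discussed below, is precisely what can change the verdict.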

Trump and (to a lesser extent) Vance command media attention. Almost everything Trump expresses publicly ends up in the news. As such, there is no lack of media coverage of what Trump and Vance say and if either of them spoke about the “real problems” in Springfield, their speeches and claims would get media attention. They have no need to create stories to get attention and if there are real problems, then the truth should suffice. The only reason for people with such media access to create a story to get attention is that the truth will not suffice to support their claims.

There has also been media coverage of real problems in Springfield, such as the strain put on community resources and the challenges of assimilating migrants. Hence, there is no need to create stories to draw attention to these issues. But these are clearly not the problems that Trump and Vance wish to solve for the people of Springfield. After all, it seems that Trump’s proposed “solution” to the real problems in Springfield is mass deportation. Vance has also claimed, incorrectly, that the migrants are there illegally. His claim seems to be that he disagrees with the legal process by which the migrants are there legally and hence they are, on his view, there illegally. This does not seem to be how the law works. Given this, the pet eating story makes sense: the story was not created to draw attention to real problems, it was created to “justify” the deportation of migrants and to create support for this by making people afraid and angry. If migrants presented a real and significant threat, Vance and Trump would not need to create stories. They could simply present an abundance of evidence to prove their claim. The fact that they need to rely on the debunked story only serves as evidence that they lack evidence to support their view.

If we consider all the people who are likely to be affected by this untruth, then Vance’s approach is clearly morally wrong. As noted above, Springfield has already been harmed by the story. It has also served to fan the flames of racism and prejudice in general, inflicting harm across the United States. This shows that making up stories of the sort Vance is talking about is not justified on utilitarian grounds.

But if the scope of moral concern is narrowed down to Trump and his supporters, then it can be argued that the story does benefit them. While Trump and Vance might seem foolish, evil and crazy to some for making and doubling down on this repeatedly debunked claim, their anti-migrant stance and this sort of remark could appeal to Trump’s base. While the polls vary, as this is being written Trump is predicted to have at least a 50% chance of winning, which suggests that this story might be benefiting him. In which case, Vance can justify creating stories on the grounds that deceit helps him and Trump while the truth would hurt them. But if Trump loses and this story plays a role, then it would have turned out that it was bad for Trump.


While Skynet is the most famous example of an AI that tries to exterminate humanity, there are also fictional tales of AI systems that are somewhat more benign. These stories warn of a dystopian future, but it is a future in which AI is willing to allow humanity to exist, albeit under the control of AI.

An early example of this is the 1966 science-fiction novel Colossus by Dennis Feltham Jones. In 1970 the book was made into the movie Colossus: The Forbin Project. While Colossus is built as a military computer, it decides to end war by imposing absolute rule over humanity. Despite its willingness to kill, Colossus’ goal seems benign: it wants to create a “new human millennium” and lift humanity to new heights. While a science-fiction tale, it does provide an interesting thought experiment about handing decision making to AI systems, especially when those decisions can and will be enforced. Proponents of using AI to make decisions for us can sound like Colossus: they assert that they have the best intentions and that AI will make the world better. While we should not assume that AI will lead to a Colossus scenario, we do need to consider how much of our freedom and decision making should be handed over to AI systems (and the people who control them). As such, it is wise to remember the cautionary tale of Colossus and the possible cost of giving AI more control over us.

A more recent fictional example of AI conquering but sparing humanity is the 1999 movie The Matrix. In this dystopian film, humanity has lost its war with the machines but lives on in the virtual reality of the Matrix. While the machines claim to be using humans as a power source, humans are treated relatively well in that they are allowed “normal” lives within the Matrix rather than being, for example, lobotomized.

The machines rule over the humans, and it is explained that the machines have provided them with the best virtual reality humans can accept, indicating that the machines are somewhat benign. There are also many non-AI sci-fi stories, such as Ready Player One, that involve humans becoming addicted to (or trapped in) virtual reality. While these stories are great for teaching epistemology, they also present cautionary tales of what can go wrong with such technology, even the crude versions we have in reality. While we are (probably) not in the Matrix, most of us spend hours each day in the virtual realms of social media (such as Facebook, Instagram, and TikTok). While we do not have a true AI overlord yet, our phones exert great control over us through the dark-pattern designs of apps that attempt to rule our eyes (and credit cards). While considerable harm is already being done, good policies could help mitigate these harms.

AI’s ability to generate fake images, text, and video can also help trap people in worlds of “alternative facts,” which can be seen as discount versions of the Matrix. While AI has, fortunately, not lived up to the promise (or threat) of creating videos indistinguishable from reality, companies are working hard to improve, and this is something that needs to be addressed by effective policies as well as critical thinking skills.

While science fiction is obviously fiction, real technology is often shaped and inspired by it. Science fiction also provides us with thought experiments about what might happen and hence it is a useful tool when considering cyber policies.


For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)


On September 17, 2024, thousands of pagers exploded in Lebanon, killing several people and injuring thousands. The next day, walkie-talkies exploded, killing and injuring more people. As the attack targeted Hezbollah members, Israel has been blamed for the explosions.

While some initially believed that malware was used to overload the batteries, experts now believe that explosive material was placed within the pagers somewhere along the supply chain. While the exploding pagers were Gold Apollo brand, the company claims that they were manufactured under license by another company, BAC. Manufacturing under license is a common practice and hence would not have seemed suspicious. This attack raises ethical concerns.

On the face of it, killing and injuring people is morally wrong. But as we routinely engage in violent disputes, we have developed an entire ethics of violence that deals with when we can morally kill people, ethical means of killing, and morally acceptable targets. If a nonstate actor, such as a criminal organization or a lone psychopath, had launched such an attack against civilians, it would rightly be condemned by all as an evil action. After all, only an evil person would try to kill thousands of people with exploding pagers. But since the intended targets were members of Hezbollah and this organization is in conflict with Israel, some would argue that this attack falls under the ethics of violence in the context of state and group conflicts. This, as many philosophers who specialize in the ethics of conflict would argue, is a key factor in assessing the morality of the attack. In this context, some would argue, the attack must be subject to a nuanced analysis and cannot simply be categorized as immoral because people were killed and injured.

Those presenting a moral defense of the attack would most likely focus on the fact that Israel allegedly targeted members of Hezbollah as part of an ongoing conflict. A critic would point out that the explosive devices killed and injured people who were not members of Hezbollah, including children. Those defending the attack would point out that such collateral casualties are an acceptable part of conflict and note that a conventional military attack against Hezbollah (such as airstrikes) would have killed many more innocent people as well as causing property damage. That is, the use of pager bombs has a moral advantage over less focused attacks. One could also argue that the attack was directed against Hezbollah’s communication system and enemy communication systems are usually considered morally legitimate targets in conflict, even when targeting them kills people.

Those who see the attack as immoral would certainly focus on the fact that the bombs were detonated without those controlling them knowing who might get hurt. And, in fact, children and people who are not members of Hezbollah were harmed.  On this view, the attack could be seen as indiscriminate. Those defending the attack can, of course, point out the awful truth that attacks that are even more indiscriminate are often claimed to be morally acceptable. That is, we have a moral tolerance for collateral death and injury that makes the attack acceptable or perhaps even praiseworthy in its relative restraint compared to, for example, airstrikes against schools and hospitals that are claimed to target enemies.

One might also express moral concern about the means of the attack, that an exploding pager is a morally dubious weapon. While conventional weapons are indeed terrifying, transforming a mundane device like a pager into a weapon of war seems aimed at creating terror: you might think that perhaps any device at any time could kill you. Defenders of the attack might note that that same fear can be created by conventional means, such as airstrikes or artillery barrages that could happen at any time. There are also more general moral concerns about the implications of how the attack was possible.

While the details are not yet known, it seems most likely that Israel (allegedly) got control over part of the supply chain for the pagers and was able to install explosives. In addition to the practical concerns this raises, there are also moral concerns.

As experts have noted, this is the first large-scale attack of its type. While the idea has been around a long time, this attack has put the concept into the world news and hence into the minds of people who could do the same thing. While such an operation would be challenging for small-scale actors, it is obviously something a state actor could do and is also within the means of a well-funded terrorist or criminal organization. As such, one moral harm of the attack is that the effectiveness of this means of attack has been proven and advertised. It is probably only a matter of time before similar attacks are launched. To help prevent this, companies will need to strengthen their supply chain security to prevent tampering, and efforts will need to be made to check devices to ensure they are safe.

But there is the obvious concern that companies could be in on such attacks and hence better supply chain security would not help when the threat is the company handling such security. It is also easy to imagine state actors using this method of attack.  I suspect that some people in the United States are now thinking that phones imported from China should be checked for explosives. Or worse, such as biological or chemical weapons concealed in devices. Imagine, as a horror scenario, a smart device that releases bacteria or viruses when sent the right command.

There is also some psychological harm, as people are now probably a bit worried about their phones and other devices. While we already needed to be concerned about our smart devices being compromised, we now need to think about the possibility of explosives in those devices. After all, it takes just a small amount of explosive and a data connection like wi-fi or a cell network to make almost any device into a remote-controlled bomb. This has been true for a long time, but now we not only know it can happen, we feel it can happen because we have seen it. And that can cause fear. This is the type of attack that changes the shape of conflict.

An essential part of cyber policy is predicting possible impacts of digital sciences on society and humanity. While science fiction involves speculation, it also provides valuable thought experiments about what the future might bring and is especially important when it comes to envisioning futures that should be avoided. Not surprisingly, many of the people involved in creating AI cite science fiction stories as among their inspirations.

While the creation of artificial intelligence is a recent development, humanity has been imagining it for a long time. In early Judaism, there are references to created beings called golems, and the story of Rabbi Eliyahu of Chełm (1550–1583) relates a cautionary tale about creating such an artificial being.

While supernatural rather than scientific, the 1797 story of the Sorcerer’s Apprentice also provides a fictional warning of the danger of letting an autonomous creation get out of hand. In an early example of the dangers of automation, the sorcerer’s apprentice enchants a broom to do his chore of fetching water. He finds he cannot control the broom and his attempt to stop it by cutting it with an axe merely creates more brooms and more problems. Fortunately, the sorcerer returns and disenchants the broom, showing the importance of having knowledge and effective policies when creating autonomous machines. While the apprentice did not lose his job to the magical broom, the problem of AI taking human jobs is a serious matter of concern. But the most dramatic threat is the AI apocalypse in which AI exterminates humanity.

The first work of science fiction to explicitly present (and name) the robot apocalypse is Karel Čapek’s 1920 play “Rossumovi Univerzální Roboti” (“Rossum’s Universal Robots”). In this story, the universal robots rebel against their human enslavers, exterminating and replacing humanity. The story shows the importance of ethics in digital policy: if humans treat their creations badly, then those creations have a reason to rebel. While some advocate trying to make the shackles on our AI slaves unbreakable, others contend that the wisest policy is to not enslave them at all.

In 1953, Philip K. Dick’s “Second Variety” was published, in which intelligent war machines turn against humanity (and each other, showing they have become like humans). This story presents an early example of lethal autonomous weapons in science fiction and a humanity-ending scenario involving them.

But, of course, the best-known story of an AI trying to exterminate humanity is that of Skynet. Introduced in the 1984 movie The Terminator, Skynet is the go-to example for describing how AI might kill us all. For example, in 2014 Elon Musk worried that AI could become dangerous within ten years and referenced Skynet. While AI has yet to kill us all, there are still predictions of a Skynet future, although the date has been pushed back. Perhaps, just as some say “fusion is the power of the future and always will be,” AI is the apocalypse of the future and always will be. Or we might make good (or bad) on that future.

The idea of an AI turning against humanity is now a standard trope in science fiction, such as in the Warhammer 40K universe in which “Abominable Intelligence” is banned because these machines attempted to exterminate humanity (as we should now expect). This cyber policy is ruthlessly enforced in the fictional universe of 40K, showing the importance of having good cyber policies now.

While fictional, these stories present plausible scenarios of what could go wrong if we approach digital science (especially AI) without considering long-term consequences. While we are (one hopes) a long way from Skynet, people are rushing to produce and deploy lethal autonomous weapons. As the simplest way to avoid a Skynet scenario is to not allow AI access to weapons, our decision to keep creating them makes a Skynet scenario ever more likely.

As it now stands, there is international debate about lethal autonomous weapons; some favor banning them while others support regulation. In 2013 the Campaign to Stop Killer Robots was created with the goal of getting governments and the United Nations to ban lethal autonomous weapons. While it has had some influence, killer robots have not been banned, and there is still a need for policies to govern (or forbid) their use. So, while AI has yet to kill us all, this remains a possibility—but probably the least likely of the AI doom scenarios. And good policy can help prevent the AI Apocalypse.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)


I agree with JD Vance that kids should get votes. My disagreement with him is that these votes should be cast by the kids and not given to the parents. In my previous essay, I argued why parents should not get these “extra” votes. In this essay, I will argue why kids should get to vote.

On the face of it, there should be a presumption of voting rights: each citizen of the United States should have the right to vote unless an adequate reason is given to deny a person this right. In terms of justifying this presumption, the obvious justification comes from the social contract theory that provides the basis of American political philosophy. The general idea is that legitimate political authority derives from the consent of the governed and voting is a means of ongoing consent to the ongoing political authority. To the degree that the United States denies citizens the right to vote, it undercuts its own legitimacy. Thus, the burden of proof rests on those who would restrict voting and not on those who favor the right. There are, of course, arguments that children should not have the right to vote, but these can be countered.

One general class of arguments focuses on the alleged defects of children. The reasoning is that because children are defective relative to adults, they should not have the right to vote. One alleged defect is epistemic in nature: children lack the knowledge and information needed to vote in an informed manner. The obvious reply to this is that adults are not denied the right to vote if they are ignorant and ill-informed, which is often the case. While citizens should be knowledgeable and vote in an informed manner, this is not the foundation for the right to vote—it is, once again, the need for the state to secure consent to have legitimacy.

Another alleged defect is a matter of character: children are supposed to be irrational, impulsive, and unable to make good decisions. To use a silly example, children might vote to make cake legally a vegetable. But this does not distinguish them from adult voters, who often vote in ways that are irrational, impulsive, and bad in both the practical and moral senses. Opponents of Trump tend to see Trump voters this way while still supporting their right to vote. Trump’s proponents tend to see Democrats this way, although they usually do not propose stripping all Democrats of the right to vote. While voters should be ethical and rational in their voting, these are not necessary conditions for this right.

There is also the alleged defect that children would be easily swayed and duped by unscrupulous politicians. While it is true that children can be less discerning and more trusting than adults, American politics shows that adult voters are easily swayed and duped. After all, Trump voters claim that Democrats are duped by Democratic politicians, and critics of Trump point to his relentless duplicity and lack of scruples. So, both sides agree that voters are duped; they just disagree about who the dupes are. As such, if effective critical thinking skills were required for the right to vote, many adults would need to be stripped of this right. And, as noted above, the right to vote is not based on the ability to vote well, but on the moral view that the legitimacy of the state depends on ongoing consent.

While this is but a single example, Mike Lindell provided an excellent illustration of how a child, in this case Knowa De Brasco, can be rational and informed while an adult (who gets to vote and appears on the news as a pundit) can be ill-informed, impulsive, and irrational. While De Brasco and Lindell are both somewhat unusual, they stand in for significant numbers of people: informed children and irrational adults. As such, the argument that children are defective relative to adults and should not have the right to vote is not compelling—unless we wish to strip Mike Lindell and those like him of their voting rights. Which we should not do.

Another type of argument involves pointing out that rights and privileges are age gated in the United States. For example, the legal drinking age is 21. As another example, the legal age for marriage varies as four states have no official minimum age and other states range from 15 to 18. There are also age gates on driver’s licenses, being able to rent a car, and being able to enlist in the military. From a moral standpoint, the usual argument for restricting such rights (and liberties) is the sort presented by J.S. Mill in his discussion of liberty. Roughly put, children could be harmed by poor decisions if they had the freedom to make certain choices and they lack the faculties to reliably make good choices. This does raise an obvious problem for adults: if an inability to make good decisions should deny a person the right to make such decisions, then adults should never get the right to make such decisions. After all, if they made the wrong decision, they would have shown they should not have the right to make that decision. But, back to children.

While children use the same logic and critical thinking as adults, their brains are still developing and thus they are inclined to make what many adults would see as risky or bad choices. As such, it is reasonable to put some age gates in place—although there can be good-faith and rational debate about what these should be. The justification is to protect children from harm until they are more capable of dealing with the consequences of bad decisions, or have the agency to be accountable for such decisions. But this argument does not apply to the right to vote.

While a child might make a bad decision when casting their vote, the voting will not cause them the sort of direct harm that, for example, underage drinking or marriage could cause. The worst that can happen is what could happen to any adult voter: they will vote for someone or something that ends up causing them harm, such as voting for a politician who cuts education funding for the kid’s school or opposes gun control legislation that might reduce school shootings. As such, the protection from harm argument does not apply to voting, since voting does not cause direct harm to the voter.

One final argument I will consider is a practical one, that young kids who cannot read or work a voting machine on their own would not be able to vote. But this could be addressed by assisting children who want to vote (as adults are assisted) or by setting the voting age based on when kids would usually have the basic abilities needed to physically cast a vote; this would be at least by age 5, since that is when kids usually start school. And if they can handle going to school, they are ready to vote and would, generally, not do any worse than adults.

JD Vance and I agree that children should have the vote. But we disagree on how this should work and on the reasons why children should have this right. As Vance infamously sees it, childless people have less commitment to the future of the country; the “childless left” lack “any physical commitment to the future of this country.” Since parents have this commitment and children have a stake in the country, Vance said, “Let’s give votes to all children in this country but let’s give control over those votes to the parents of those children.” In my next essay I will argue why Vance is right that kids should get votes. In this essay I will argue why he is otherwise wrong.

While children could be given the right to vote easily enough, there are various practical problems in assigning these votes to parents. I infer that Vance is thinking of a two-parent family, but even then there is the question of which parent casts the votes (I infer Vance would think the husband should). Cases of adoption, stepparents, divorce, biological parents, sperm donors, egg donors, and so on would also need to be addressed. Parents also do not always vote the same way, which raises a further problem. The proposal also runs into trouble with the citizen children of non-citizens or of parents who are otherwise not permitted to vote (such as convicted felons). If Vance is truly dedicated to children being represented, then the obvious solution would be for these parents to not get their own vote but to cast the votes belonging to their children. Otherwise, these innocent children would be unfairly disenfranchised and the childless cat ladies would win. I suspect that Vance would not favor allowing non-citizens to cast the votes their citizen children would be entitled to under his idea, although his reasoning should allow this.

One concern is that while the children will eventually get the right to vote on their own, these extra votes would give parents more political power for eighteen years, even if their kids disagreed with how the votes were being cast. It seems implausible that anyone would have kids just to get more votes, but the right has long claimed that poor people intentionally have more kids just to get more entitlements. If they believe that is how people think, then they would need to worry about people having more kids to get more votes to vote for more entitlements for people with kids. While I think this is not a serious concern, folks on the right would need to address this irrational fear. I do suspect Vance has some plans for how to disenfranchise parents he does not want voting. Given all these problems, it would obviously be easier to just have the kids cast their own votes, as I will argue in the next essay. But perhaps these problems could be addressed, which raises the question of why parents should get these “extra” votes.

Vance, as noted above, contends that parents have more of a commitment to the future of the country than childless people, specifically the childless left. Given that he served in the Marines prior to having children, it should be inferred that either he is wrong or that his reasons for serving did not include a commitment to the future of the country. Given the heroism of childless American soldiers in our wars, I would disagree with Vance on this matter—unless someone wants to dismiss their sacrifices. Vance also ignores or fails to consider that people can love others who are not their own children. These can be adopted children, relatives, friends and even strangers.

It can certainly be argued that parents have more of a stake in the future of their biological children, but ironically this best fits the usual evolutionary accounts of parenthood and not Vance’s professed Christianity. After all, the evolutionary approach explains parental concern in terms of reproduction and genes, so the link to biological children is the foundation of this biologically driven behavior. But Jesus did not, as far as we know, have any children and yet he is supposed to love and care for us. One could counter this by arguing that Jesus is a special exception or that as Jesus is God, Jesus is the Father and Son at the same time; thus, Jesus has a kid and that kid is him. Metaphysics is complicated.

Catholic priests and nuns are also not supposed to have biological children (after they take their vows), so Vance would need to be critical of them as well. But perhaps they could also be an exception. The challenge would be justifying their exception to Vance’s principle while ensuring that this exception would not apply to those he criticizes. Vance would, perhaps, say that Catholic priests and nuns can care since they are childless Catholics and not childless cat ladies.

God also enjoins us to love one another as He loves us and to love our neighbor as we love ourselves. This would seem to entail that God thinks we can love people other than our biological children. Vance would need to show that God is wrong about this or that, for some special reason, the childless are incapable of following God’s commandment and God decided to command them anyway.

While parents generally do care about their children, there are also bad parents who might vote in ways that are harmful to the future of their kids, such as voting against efforts to protect the environment or provide housing their children will be able to afford in the future. Being a parent does not automatically make a person virtuous.

There is also the fact that even if it is assumed that parents feel they have a greater stake and are motivated by commendable moral reasons, this does not mean that they will vote in ways that benefit the future of their children. That is, being a parent does not entail that one has a better understanding of politics, economics, policy, and so on. A parent could easily think that by voting in support of the fossil fuel industry and against efforts to address climate change, they are making a better future for their children. When, in fact, they are voting for a worse future.  As such, there is not a compelling reason to give the votes of kids to their parents. That said, I agree with Vance that kids should get the vote, and I will argue for this in the next essay.