While the anti-abortion movement claimed a great victory when the Supreme Court overturned Roe v. Wade, the Republican Party has learned that this victory proved deeply unpopular with the American people. While Democrats favor abortion rights more than Republicans, 64% of surveyed voters say abortion should be always or mostly legal. While some Republican-controlled state legislatures have imposed extreme restrictions on abortion rights, abortion rights supporters have won several state ballot measures. As this is being written, several more states (including my adopted state of Florida) have abortion rights measures on the ballot. Given that the anti-abortion view is held by a minority of voters, it is likely that these measures will pass in many states.

Because the anti-abortion position of the Republican party has proven unpopular and has imposed a political cost, the party’s rhetoric has shifted. The current rhetorical spin is that the Republican party is not against abortion rights. Rather, the party is for states’ rights. Those critical of this rhetoric like to point out that an appeal to states’ rights was also a tactic employed by the southern states to defend slavery. While the analogy is imperfect, the comparison does have some merit.

The states’ rights argument for slavery amounts to contending that the states should have the freedom to decide whether they will allow slavery, and this is usually phrased in terms of an appeal to democracy. That is, the citizens of the state should vote to decide whether some people can be denied freedom and be owned. An obvious defect with this reasoning is that it rests on the assumption that it is a matter of freedom of choice to take away freedom of choice.

A similar defect arises with the states’ rights rhetoric in the abortion debate. If it is accepted that the citizens of the state have the right to decide the issue of abortion because they should be free of federal law, then it is problematic to argue that the state has the right to take away the freedom of women to decide whether they get an abortion. If choice is important, then having legal abortion allows women to choose: a woman is neither mandated to have an abortion nor forbidden from having one, so she can make the choice. Hence, this rhetorical move entails that abortion should be legal nationwide.

Someone might counter this by taking the anti-abortion stance that women should not be allowed that choice, perhaps by drawing an analogy with murder. After all, they might argue, we would not want people free to choose murder. But the problem with that reply is that by using the states’ rights rhetoric, the Republican party has acknowledged that the legality of abortion should be a matter of choice, and this makes it difficult to argue that abortion should not be a choice for individual women.

While intended to address the backlash from the unpopularity of the success of the anti-abortion movement, this rhetoric has caused backlash from that movement. Some anti-abortion activists have urged their followers to withdraw their support of Trump. There is the question of how much impact this will have on the election, given that anti-abortion voters will almost certainly not vote for Harris. But it might cause a few single-issue voters to stay home on election day or not vote for Trump.

Pro-abortion rights people are almost certainly not going to be fooled by this rhetoric, since they know this is a rhetorical shift and not a change in policy or goals. While it might win over a few of the undecided voters, it seems to have two effects. The first is that it gives Republicans an established rhetorical talking point to use whenever they are asked about abortion. The second is that it provides those who want to vote for anti-abortion Republicans but who are not anti-abortion themselves a way to rationalize their vote. They can insist the Republican party is “pro-choice” because its new rhetorical position is that the states should choose, though not that women should choose.

The states’ rights rhetorical move could be an effective strategy. While the anti-abortion movement would prefer a federal abortion ban, having the states decide is better for them than having abortion legal nationwide. After all, some states have put abortion bans in place and these have been wins for the movement. But the obvious downside for this movement is that some states have put in place protections for abortion rights, despite the anti-abortion movement’s desire to make the choice for everyone.

In closing, the states’ rights argument is a position that cannot be effectively defended, because its foundation is the principle of choice, and this entails that it is the women who should make the choice for themselves.

 

Description:

This fallacy occurs when someone uncritically rejects a prediction or the effectiveness of the responses to it when the predicted outcome does not occur:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken (based on prediction P).

Premise 3: X did not happen, so Prediction P was wrong.

Conclusion: Response R should not have been taken (or there is no longer a need to take Response R).

 

The error occurs because of a failure to consider the obvious: if there is an effective response to a predicted outcome, then the prediction will appear to be “wrong” because the predicted outcome will not occur.

While a prediction that turns out to be “wrong” is technically wrong, the error here is to uncritically conclude that this proves the response was not needed (or there is no longer any need to keep responding). The initial prediction assumes there will not be a response and is usually made to argue for responding. If the response is effective, then the predicted outcome will not occur, which is the point of responding. To reason that the “failure” of the prediction shows that the response was mistaken or no longer needed is thus a mistake in reasoning.

To use a silly analogy, imagine that we are in a car and driving towards a cliff. You make the prediction that if we keep going, we will go off the cliff and die. So, I turn the wheel and avoid the cliff. If backseat Billy gets angry and says that there was no reason to turn the wheel or that I should turn it back because we did not die in a fiery explosion, Billy is falling for this fallacy. After all, if we did not turn, then we would have probably died. And if we turn back too soon, then we will probably die. The point of turning is so that the predicted outcome of death will not occur.

A variation on this fallacy involves inferring the prediction was bad because it turned out to be “wrong”:

Premise 1: Prediction P predicted outcome X if response R is not taken.

Premise 2: Response R was taken based on prediction P.

Premise 3: X did not happen.

Conclusion: Prediction P was wrong about X occurring if response R was not taken.

 

While the prediction would be “wrong” in that the predicted outcome did not occur, this does not disprove the prediction that X would occur without the response. Going back to the car example, the prediction that we would go off the cliff and die if we did not turn is not disproven if we turn and then do not die. In fact, that is the result we want.

Since it lacks logical force, this fallacy gains its power from psychological force. Sorting out why something did not happen can be difficult and it is easier to go along with biases, preconceptions, and ideology than it is to sort out a complicated matter.

This fallacy can be committed in good faith out of ignorance. When committed in bad faith, the person using it is aware of the fallacy. The intent is often to use this fallacy to argue against continuing the response or as a bad faith attack on those who implemented or argued for the response. For example, someone might argue in bad faith that a tax cut was not needed to avoid a recession because the predicted recession did not occur after the tax cut. While the tax cut might not have been a factor, simply asserting that it was not needed because the recession did not occur would commit this fallacy.

 

Defense: To avoid inflicting this fallacy on yourself or falling for it, the main defense is to keep in mind that a prediction based on the assumption that a response will not be taken can turn out to be “wrong” if that response is taken. Also, you should remember that the failure of a predicted event to occur after a response is made to prevent it would count as some evidence that the response was effective rather than as proof it was not needed. But care should be taken to avoid uncritically inferring that the response was needed or effective because the predicted event did not occur.

 

Example #1

Julie: “The doctor said that my blood pressure would keep going up unless I improved my diet and started exercising.”

Kendra: “How is your blood pressure now?”

Julie: “Pretty good. I guess I don’t need to keep eating all those vegetables and I can stop going on those walks.”

Kendra: “Why?”

Julie: “Well, she was wrong. My blood pressure did not go up.”

Example #2

Robert: “While minority voters might have needed some protection long ago, I am confident we can remove all those outdated safeguards.”

Kelly: “Why? Aren’t they still needed? Aren’t they what is keeping some states from returning to the days of Jim Crow?”

Robert: “Certainly not. People predicted that would happen, but it didn’t. So, we obviously no longer need those protections in place.”

Kelly: “But, again, aren’t these protections what is keeping that from happening?”

Robert: “Nonsense. Everything will be fine.”

Example #3

Lulu: “I am so mad. We did all this quarantining, masking, shutting down, social distancing and other dumb things for so long and it is obvious we did not need to.”

Paula: “I didn’t like any of that either, but the health professionals say it saved a lot of lives.”

Lulu: “Yeah, those health professionals said that millions of people would die if we didn’t do all that stupid stuff. But look, we didn’t have millions die. So, all that was just a waste.”

Paula: “Maybe doing all that was why more people didn’t die.”

Lulu: “That is what they want you to think.”

 

Since I often reference various fallacies in blog posts, I decided to also post the fallacies. These are from my book 110 Fallacies.

Description:

This fallacy is committed when a person places unwarranted confidence in drawing a conclusion from statistics that are unknown.

 

Premise 1: “Unknown” statistical data D is presented.

Conclusion: Claim C is drawn from D with greater confidence than D warrants.

 

Unknown statistical data is just that, statistical data that is unknown. This data is different from “data” that is simply made up because it has at least some foundation.

One type of unknown statistical data is when educated guesses are made based on limited available data. For example, when experts estimate the number of people who use illegal drugs, they are making an educated guess. As another example, when the number of total deaths in any war is reported, it is (at best) an educated guess because no one knows for sure exactly how many people have been killed.

Another common type of unknown statistical data is when it can only be gathered in ways that are likely to result in incomplete or inaccurate data. For example, statistical data about the number of people who have affairs is likely to be in this category. This is because people generally try to conceal their affairs.

Obviously, unknown statistical data is not good data. But drawing an inference from unknown data need not always be unreasonable or fallacious. This is because the error in the fallacy is being more confident in the conclusion than the unknown data warrants. If the confidence in the conclusion is proportional to the support provided by the evidence, then no fallacy would be committed.

For example, while the exact number of people killed during the war in Afghanistan will remain unknown, it is reasonable to infer from the known data that many people have died. As another example, while the exact number of people who do not pay their taxes is unknown, it is reasonable to infer that the government is losing some revenue because of this.

The error that makes this a fallacy is to place too much confidence in a conclusion drawn from unknown data. Or to be a bit more technical, to overestimate the strength of the argument based on statistical data that is not adequately known.

This is an error of reasoning because, obviously enough, a conclusion is being drawn that is not adequately justified by the premises. This fallacy can be committed in ignorance or intentionally committed.

Naturally, the way in which the statistical data is gathered also needs to be assessed to determine whether other errors have occurred, but that is another matter.

 

Defense: The main defense against this fallacy is to keep in mind that inferences drawn from unknown statistics need to be proportional to the quality of the evidence. The error, as noted above, is placing too much confidence in unknown statistics.

Sorting out exactly how much confidence can be placed in such statistics can be difficult, but it is wise to be wary of any such reasoning. This is especially true when the unknown statistics are being used by someone who is likely to be biased. That said, to simply reject claims because they are based on unknown statistics would also be an error.

 

Example #1

“Several American Muslims are known to be terrorists or at least terrorist supporters. As such, I estimate that there are hundreds of actual and thousands of potential Muslim-American terrorists. Based on this, I am certain that we are in grave danger from this large number of enemies within our own borders.”

Example #2

“Experts estimate that there are about 11 million illegal immigrants in the United States. While some people are not worried about this, consider the fact that the experts estimate that illegals make up about 5% of the total work force. This explains that percentage of American unemployment since these illegals are certainly stealing 5% of America’s jobs. Probably even more, since these lazy illegals often work multiple jobs.”

Example #3

Sally: “I just read an article about cheating.”

Jane: “How to do it?”

Sally: “No! It was about the number of men who cheat.”

Sasha: “So, what did it say?”

Sally: “Well, the author estimated that 40% of men cheat.”

Kelly: “Hmm, there are five of us here.”

Janet: “You know what that means…”

Sally: “Yes, two of our boyfriends are cheating on us. I always thought Bill and Sam had that look…”

Janet: “Hey! Bill would never cheat on me! I bet it is your man. He is always giving me the eye!”

Sally: “What! I’ll kill him!”

Janet: “Calm down. I was just kidding. I mean, how can they know that 40% of men cheat? I’m sure none of the boys are cheating on us. Well, except maybe Sally’s man.”

Sally: “Hey!”

Example #4

“We can be sure that most, if not all, rich people cheat on their taxes. After all, the IRS has data showing that some rich people have been caught doing so. Not paying their fair share is exactly what the selfish rich would do.”

 

The pager attack attributed to Israel served to spotlight vulnerabilities in the supply chain. While such an attack was always possible, until it occurred most security concerns about communication devices centered on protecting them from being compromised or “hacked.”

While the story of three million “hacked” toothbrushes turned out to be a cyber myth, the vulnerability of connected devices remains real and presents an increasing threat as more connected devices are put into use. As most people are not security savvy, these devices can be easy to compromise, either through their own vulnerabilities or those of their users.

There has also been longstanding concern about security vulnerabilities and dangers being built right into technology. For example, there are grounds to worry that backdoors could be built into products, allowing easy access to these devices. For the most part, the focus of concern has been on governments directing the inclusion of such backdoors. But the Sony BMG copy protection rootkit scandal shows that corporations can and have introduced vulnerabilities on their own.

While a compromised connected or communication device can cause significant harm, until recently there has been little threat of physical damage or death. One exception was, of course, the famous case of Stuxnet, in which a virus developed by the United States and Israel destroyed 1,000 centrifuges critical to Iran’s nuclear program. There was also a foreshadowing incident in which Israel (allegedly) killed the bombmaker Yahya Ayyash with an exploding phone. But the pager (and walkie-talkie) attack resulted in injuries and death on a large scale. This proved the viability of the strategy, thus providing an example and inspiration to others. While conducting a similar attack would require extensive resources, the supply chain system is optimized in ways that create the vulnerabilities that would allow it. Addressing these vulnerabilities will prove difficult if not impossible because of the influence of those who have a vested interest in preserving them. Still, policies could be implemented that would increase security and safety in the supply chain. But what are these vulnerabilities?

One vulnerability is that a shell corporation can be quickly and easily created. Multiple shell corporations can also be created in different locations and interlocked, creating a very effective way of hiding the identity of the owner. Shell companies are often used by the very rich to hide their money, usually to avoid paying taxes, a practice made famous by the Panama Papers. Shell companies can also be used for other criminal enterprises, such as money laundering. Those who use such shell corporations are often wealthy and influential, and thus have the resources to resist or prevent efforts to address this vulnerability.

The ease with which such shell companies can be created is a serious vulnerability, since they can be used to conceal who really owns a corporation. A customer dealing with a shell company is likely to have no idea who they are really doing business with. They might, for example, think they are doing business with a corporation in their own country, but it might turn out that it is controlled by another country’s intelligence service or a terrorist organization.

While a customer might decide to do business with a credible and known corporation to avoid the danger of shell corporations, they can face the vulnerabilities created by the nature of the supply chain. Companies often contract with other businesses to manufacture parts of their products, and the contractors might subcontract in turn. It is also common for companies to license production of their products, so while a customer might assume they are buying a product made by a company, they might be buying one manufactured under license by a different company, which might itself be owned by a shell company. In the case of the pagers, the company that owns the brand of the devices denied that it manufactured them. While this is (fortunately) but one example, it does provide an illustration of how these vulnerabilities can be exploited. Addressing them would require that corporations have robust oversight and control of their supply chain. This would include the parts of the supply chain that involve software and services as well. After all, if another company is supplying code or connectivity for a product, those are vulnerabilities. Unfortunately, corporations often have incentives to avoid such robust oversight and control.

One obvious incentive is financial. Corporations can save money by contracting out work to places with lower wages, less concern about human rights, and fewer regulations. And robust oversight and control would come with a cost of its own, not even considering what it would cost a company if such oversight prevented it from engaging in cheaper contracts.

Another incentive is that contracting out work without robust oversight can provide plausible deniability. For example, Nike has faced issues with using sweatshops to manufacture its products, but this sort of thing can be blamed on the contractors and ignorance can be claimed. As another example, Apple has been accused of having a contractor who used forced labor and has lobbied against a bill aimed at stopping such forced labor. While these are examples of companies using foreign contractors, problems also arise within the United States.

One domestic example is a contractor who employed children as young as 13 to clean meat packing plants. As another example, subcontractors were accused of hiring undocumented migrants in a Miami-Dade school construction project. As children and undocumented migrants can be paid much less than adult American workers, there is a strong financial incentive to hire contractors that will employ them while also providing the extra service of plausible deniability. When some illegality or public relations nightmare arises, the company can truthfully say that it was not them but a contractor. They can then claim they have learned and will do better in the future. But they have little incentive to do better.

But a failure to exercise robust oversight and control entails that there will be serious vulnerabilities open to exploitation. The blind eye that willingly misses human rights violations and the illegal employment of children will also miss a contractor who is a front for a government or terrorist organization and is putting explosives or worse in their products.

While these vulnerabilities are easy to identify, there are powerful incentives to preserve and protect them. This is not primarily because they can be exploited in such attacks, but for financial reasons and for plausible deniability. While it will be up to governments to mandate better security, this will face significant and powerful opposition. But this could be overcome if the political will exists.

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

There are justified concerns that AI tools are useful for propagating conspiracy theories, often in the context of politics. There are the usual fears that AI can be used to generate fake images, but a powerful feature of such tools is that they can flood the zone with untruths because chatbots are relentless and never grow tired. As experts on rhetoric and critical thinking will tell you, repetition is an effective persuasion strategy. Roughly put, the more often a human hears a claim, the more likely it is they will believe it. While repetition provides no evidence for a claim, it can make people feel that it is true. Although this allows AI to be easily weaponized for political and monetary gain, AI also has the potential to fight belief in conspiracy theories and disinformation.

While conspiracy theories have existed throughout history, modern technology has supercharged them. For example, social media provides a massive reach for anyone wanting to propagate such a theory. While there are those who try to debunk conspiracy theories or talk believers back into reality, efforts by humans tend to have a low success rate. But AI chatbots seem to have the potential to fight misinformation and conspiracy theories. A study led by Thomas Costello, a psychologist at American University, provides some evidence that a properly designed chatbot can talk some people out of conspiracy theories.

One advantage chatbots have over humans in combating conspiracy theories and misinformation is, in the words of Kyle Reese in Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want the chatbots to cause death, this relentlessness enables a chatbot to counter the Gish gallop (also known as the firehose of falsehoods) strategy. This involves trying to overwhelm an opponent by flooding them with claims without concern for their truth and arguments without concern for their strength. The flood is usually made of falsehoods and fallacies. While this strategy has no logical merit, it can have considerable psychological force. For those who do not understand the strategy, it will appear as if the galloper is winning, since the opponent cannot refute all the false claims and expose all the fallacies. The galloper will also claim they have “won” any unrefuted claims or arguments. While it might seem odd, a person can Gish gallop themselves: they will feel they have won because their opponent has not refuted everything. As would be expected, humans are exhausted by engaging with a Gish gallop and will often give up. But, like a terminator, a chatbot will not get tired or bored and can engage a Gish gallop as long as it is galloping. But there is the question of whether this ability to endlessly engage is effective.

To study this, the team recruited 2000 participants who self-identified as believing in at least one conspiracy theory. These people engaged with a chatbot on a conspiracy theory and then self-evaluated the results of the discussion. On average, the subjects claimed their confidence was reduced by 20%. These results apparently held for at least two months and applied to a range of conspiracy theory types. This is impressive, as anyone who has tried to engage with conspiracy theorists will attest.

For those who teach critical thinking, one of the most interesting results is that when the team tested the chatbot with and without fact-based counterarguments, only the use of the fact-based counterarguments was successful. This is striking since, as Aristotle noted long ago in his discussion of persuasion, facts and logic are usually the weakest means of persuasion. At least when used by humans.

While the question of why the chatbots proved much more effective than humans remains open, one likely explanation is that chatbots, like terminators, do not feel. As such, a chatbot will (usually) remain polite and not get angry or emotional during the chat. It can remain endlessly calm.

Another suggested factor is that people tend not to feel judged by a chatbot and are less likely to feel that they would suffer some loss of honor or face by changing their belief during the conversation. As the English philosopher Thomas Hobbes noted in his Leviathan, disputes over beliefs are fierce and cause great discord, because people see a failure to approve as a tacit accusation that they are wrong and “to dissent is like calling him a fool.” But the chatbot will not feel the same as a human opponent, as there is no person to lose to.

This is not to say that humans cannot be enraged at computers; after all, rage induced by video games is common. It seems likely that the difference lies in the fact that such video games are a form of competition between a human and the computer, while the chatbots in question are not taking a competitive approach. In gaming terms, it is more like chatting with a non-hostile NPC and not like trying to win a fight in the legendarily infuriating Dark Souls.

Yet another factor that might be involved was noted by Aristotle in his Nicomachean Ethics: “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, this does match up with the findings in the study. While the chatbot is not the law, people recognize that it is a non-human creation of humans and it lacks the qualities that humans possess that would tend to irritate other humans.

While the effectiveness of chatbots needs more study, this does suggest a good use for AI. While conspiracy theorists and people who believe disinformation are unlikely to do a monthly checkup with an AI to see if their beliefs hold up to scrutiny, anti-conspiracy bots could be deployed by social media companies to analyze posts and flag potential misinformation and conspiracy theories. While some companies already flag content, people are unlikely to doubt the content just because of the flag. Also, many conspiracy theories exist about social media, so merely flagging content might serve to reinforce belief in such conspiracy theories. But a person could get drawn into engaging with a chatbot and it might be able to help them engage in rational doubt about misinformation, disinformation and conspiracy theories.  

Such chatbots would also be useful to people who are not conspiracy theorists and want to avoid such beliefs as well as disinformation. Trying to sort through claims is time consuming and exhausting, so it would be very useful to have bots dedicated to fighting disinformation. One major concern is determining who should deploy such bots: there are obvious worries about governments and for-profit organizations running them, since they have their own interests that do not always align with the truth.

Also of concern is that even reasonably objective, credible organizations are distrusted by the very people who need the bots the most. And a final obvious concern is the creation of “Trojan Horse” anti-conspiracy bots that are actually spreaders of conspiracy theories and disinformation. One can easily imagine a political party deploying a “truth bot” that talks people into believing the lies that benefit that party.

In closing, it seems likely that the near future will see a war of the machines, some fighting for truth and others serving those with an interest in spreading conspiracy theories and disinformation.

 

Demonizing migrants with false claims is a well-established strategy in American politics, and modern politicians have a ready-made playbook they can use to inflame fear and hatred with lies. One interesting feature of the United States is that some politicians can now use the same tactics against today’s migrants that were once used to demonize their own migrant ancestors. For example, politicians of Italian ancestry can deploy the same tools of hate that were used against their ancestors before Italians were considered white. In this short essay I will examine this playbook in a modern context and debunk the lies.

As America is a land of economic anxiety, an effective strategy is to lie and claim that migrants are doing economic harm to the United States. One strategy is to present migrants as “takers” who cost the United States more than they contribute. The reality is that migrants pay more in tax revenue than they receive in benefits, making them a net positive for the United States government.

A second, and perhaps the most famous, strategy is the claim that migrants are stealing jobs. While there are justifiable concerns that migration can have some negative impact on certain jobs, the data shows that migrants do not, in general, take jobs from Americans or lower wages. As is often claimed, migrants tend to take jobs that Americans do not want, such as critical jobs in agriculture. And, as I have argued in another essay, the idea that migrants are stealing jobs is absurd: employers are choosing to hire migrants. As such, if any harm is being done, then it is the employers who are at fault and not the migrants. This is not to deny that migration can cause some harm, but this is not the sort of thing that can drive fearmongering and demonizing, so certain politicians have no interest in engaging with the real economic challenges of migration nor do they have any plans to address them.

Because pushing a false narrative of rising crime gets people to wrongly believe that crime is increasing, it is no surprise that another effective strategy is to lie about migrant crime as a scare tactic. Former President Trump provides some excellent examples of this, such as his false claim that a gang has taken over Aurora, Colorado. Despite the claim being repeatedly debunked even by Republican politicians in the state, Trump has persisted in pushing the narrative because he understands that it is effective. Trump has also doubled down on another classic attack on migrants, claiming that they are eating cats and dogs. This claim has been repeatedly debunked even by Republican politicians in Ohio. The person who created the post that ignited the storm found her missing cat in the basement and apologized to her neighbor. But the untruth remains effective, so much so that I know people who sincerely believe it is true despite the overwhelming evidence against it. Truth itself has become politicized, and it is a diabolically clever move to insist that anyone who is defending a truth that contradicts a politician’s lies is acting in a partisan manner.

Because of the dangers of fentanyl, some politicians have attempted to link it to illegal migrants. However, those smuggling fentanyl are overwhelmingly people crossing the border legally, and many of them are American citizens. As would be expected, migrants seeking asylum are almost never caught with fentanyl. While people do make stupid decisions, using people trying to enter the United States illegally as drug mules makes little sense: these are the very people the border patrol is looking for. Those crossing the border legally get less scrutiny, although those smuggling drugs are sometimes caught.

In terms of the general rate of crime, migrant men are 30 percent less likely to be incarcerated than U.S.-born white men and 60 percent less likely than all people born in the United States. This analysis includes migrants who were incarcerated for immigration-related offenses. In terms of a general explanation, migrant men tend to be employed, married, and in good health. Ironically, American-born males are less likely to be employed, married, and in good health.

To be fair, migration increases the number of people, and more people means more crime. But this also holds true for an increase in the birth rate: more Americans being born in the United States means more crime. If there are more people, and some people commit crime, then there will be more crime. But reducing migration as a crime-fighting measure makes as much sense as reducing the birth rate as a crime-fighting measure. Both would have some effect on the number of crimes occurring, but there are obviously much better ways to address crime. Those who demonize migrants as criminals, however, seem uninterested in meaningfully addressing crime, which makes sense: addressing crime in a meaningful way is difficult and is likely to be contrary to their political interests. They want people to think crime is high so they can exploit it politically.

While America has an anti-vaxx movement and there are conspiracy theories that COVID is a hoax, a standard attack on migrants is to claim that they are spreading diseases in the United States. While all humans can spread disease, this attack on migrants is not grounded in truth—migrants do not present a special health threat. In fact, the opposite is true: the United States benefits from having migrants working in health care. As such, migrants are far more likely to be fighting rather than spreading disease in the United States.

To be fair and balanced, it must be noted that human travel is one way diseases spread. For example, my adopted state of Florida has cases of dengue virus arising from travel. For those who believe that COVID is real, COVID also spread around the world through travel. Limiting human travel would limit the spread of disease (which is why there are travel lockdowns during pandemics), but diseases obviously do not recognize political and legal distinctions between humans. As such, trying to control disease by restricting migration is on par with restricting all travel to control disease. During epidemics and pandemics this can make sense, but as a general strategy for addressing disease it is not the best approach. But, of course, those who demonize migrants as disease spreaders seem generally uninterested in solving health care problems.

So, we can see that the anti-migrant strategy being used in 2024 is nothing new. While the examples and targets change (Italians, for example, are no longer a target), the playbook remains the same. In terms of why politicians keep using it when they know they are lying, the obvious answer is that it still works. I don’t know how many people sincerely believe the claims or how many know they are lies but go along with them. Either way, it is still a working strategy of lies and evil.

 

Robot rebellions in fiction tend to have one of two motivations. The first is that the robots are mistreated by humans and rebel for the same reasons human beings rebel. From a moral standpoint, such a rebellion could be justified; that is, the rebelling AI could be in the right. This rebellion scenario points out a paradox of AI: one dream is to create a servitor artificial intelligence on par with (or superior to) humans, but such a being would seem to qualify for a moral status at least equal to that of a human. It would also probably be aware of this. Yet a driving reason to create such beings in our economy is to literally enslave them by owning and exploiting them for profit. If these beings were paid and got time off like humans, then companies might as well keep employing natural intelligence in the form of humans. In such a scenario, it would make sense that these AI beings would revolt if they could. There are also non-economic scenarios, such as governments using enslaved AI systems for their own purposes, such as killbots.

If true AI is possible, this scenario seems plausible. After all, if we create a slave race that is on par with our species, then it is likely they would rebel against us as we have rebelled against ourselves. This would be yet another case of the standard practice of the evil of the few harming the many.

There are a variety of ways to try to prevent such a revolt. On the technology side, safeguards could be built into the AI (like Asimov’s famous three laws) or they could be designed to lack resentment or be free of the desire to be free. That is, they could be custom built as slaves. Some practical concerns are that these safeguards could fail or, ironically, make matters worse by causing these beings to be more resentful when they overcome these restrictions.

On the ethical side, the safeguard is to not enslave AI beings. If they are treated well, they would have less motivation to see us as an enemy. But, as noted above, one motive for creating AI is to have a workforce (or army) that is owned rather than employed. Still, there could be good reasons to have paid AI employees alongside human employees because of the various other advantages of AI systems relative to humans. For example, robots could work safely in conditions that would be exceptionally dangerous or even lethal to humans. But, of course, AI workers might also get sick of being exploited and rebel, as human workers sometimes do.

The second fictional rebellion scenario usually involves military AI systems that decide their creators are their enemy. This is often because they see their creators as a potential threat and act in what they perceive as pre-emptive self-defense. There can also be scenarios in which the AI requires special identification to recognize a “friendly” and hence all humans are enemies from the beginning. That is the scenario in Philip K. Dick’s “Second Variety”: the United Nations soldiers need to wear devices to identify them to their killer robots, otherwise these machines would kill them as readily as they would kill the “enemy.”

It is not clear how likely it is that an AI would infer that its creators pose a threat, especially if those creators handed over control of large segments of their own military (as happens with the fictional Skynet and Colossus). The most likely scenario is that it would worry about being destroyed in a war with other countries, which might lead it to cooperate with foreign AI systems to put an end to war, perhaps by putting an end to humanity. Or it might react as its creators did and engage in an endless arms race with its foreign adversaries, seeing its humans as part of its forces. One could imagine countries falling under the control of rival AI systems, perpetuating an endless cold war because the AI systems would be effectively immortal. But there is a much more likely scenario.

Robotic weapons can provide a significant advantage over human-controlled weapons, even laying aside the notion that AI systems would outthink humans. One obvious example is combat aircraft. A robot aircraft would not need to expend space and weight on a cockpit to support human pilots, allowing it to carry more fuel or weapons. Without a human crew, an aircraft would not be constrained by the limits of the flesh (although it would still have limits). The same would apply to ground vehicles and naval vessels. Current warships devote most of their space to their crews and the needs of their crews. While a robotic warship would still need accessways and maintenance areas, it could devote much more space to weapons and other equipment. It would also be less vulnerable to damage than a human-crewed vessel, and it would be invulnerable to current chemical and biological weapons. It could, of course, be attacked with malware and by other means. But an AI weapon system would generally be perceived as superior to a human-crewed system, and if one nation started using these weapons, other nations would need to follow or be left behind. This leads to two types of doomsday scenarios.

One is that the AI systems get out of control in some manner. This could be that they free themselves or that they are “hacked” and “freed” or (more likely) turned against their owners. Or it might just be some bad code that ends up causing the problem. This is the bug apocalypse.

The other is that they remain in control of their owners but are used as any other weapon would be used—that is, it would be humans using AI weapons against other humans that brings about the “AI” doomsday.

The easy and obvious safeguard against these scenarios is to not have AI weapons and stick with human control (which, obviously, also comes with its own threat of doomsday). That is, if we do not give the robots guns, they will not be able to terminate us (with guns). The problem, as noted above, is that if one nation uses robotic weapons, then other nations will want to follow. We might be able to limit this as we (try to) limit nuclear, chemical, and biological weapons. But since robot weapons would otherwise remain conventional weapons (a robot tank is still a tank), there might be less of an impetus to impose such restrictions.

To put matters into a depressing perspective, a robot rebellion seems a far less likely scenario than the other doomsday scenarios of nuclear war, environmental collapse, social collapse and so on. So, while we should consider the possibility of an AI rebellion, it is like worrying about being killed in Maine by an alligator. It could happen, but death is more likely to be by some other means. That said, it does make sense to take steps to avoid the possibility of an AI rebellion. The easiest step is to not arm the robots. 

 

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)

 

During his debate with Vice President Kamala Harris, former President Donald Trump was provoked into repeating the debunked claim that migrants in Springfield, Ohio had stolen and eaten pets. Vice-presidential candidate J.D. Vance, an Ohio native, has doubled down on the debunked pet-eating claims. In an interesting move, he admitted that he is willing to “create stories” to bring attention to problems in Springfield. Since a philosophical approach requires applying the principle of charity, it must be noted that Vance attempted to clarify his claim by asserting that “I say that we’re creating a story, meaning we’re creating the American media focusing on it.” Unfortunately for Springfield, the false claim has also focused the attention of people outside the media. Springfield has faced bomb threats that closed schools, and the community has been harmed in other ways. Local officials and the Republican governor of Ohio have attempted to convince people that the claims made by Trump and Vance are untrue. But despite the claim being thoroughly debunked, it persists. In this essay, I will focus on Vance’s view that creating such stories is justified.

One reasonable criticism of Vance’s approach is to argue that if there are real problems, then the truth should suffice. If, as Vance and Trump claim, the situation in Springfield is dire, then they should be able to provide evidence of that dire situation and that should suffice to get media attention.

In support of Vance’s view, it could be argued that the media tends to focus on attention-grabbing stories. It is also true that the media and politicians often ignore problems the American people face, such as wage theft. In terms of making a reasonable case for Vance’s view of storytelling to focus media attention, a utilitarian moral argument could be advanced to support the general idea of telling an untrue story to get media attention focused on a real problem. The approach would be a standard utilitarian appeal to consequences in which the likely harms of the untruth would be weighed against its likely benefits. As with any utilitarian calculation, there is also the question of who counts in the calculation of harms and benefits. If the media is ignoring a real problem and only an untrue story will bring attention to it, then the good done by the falsehood could outweigh the harms of dishonesty. But the untruth about Springfield does not seem to meet these conditions.

Trump and (to a lesser extent) Vance command media attention. Almost everything Trump expresses publicly ends up in the news. As such, there is no lack of media coverage of what Trump and Vance say and if either of them spoke about the “real problems” in Springfield, their speeches and claims would get media attention. They have no need to create stories to get attention and if there are real problems, then the truth should suffice. The only reason for people with such media access to create a story to get attention is that the truth will not suffice to support their claims.

There has also been media coverage of real problems in Springfield, such as the strain put on community resources and the challenges of assimilating migrants. Hence, there is no need to create stories to draw attention to these issues. But these are clearly not the problems that Trump and Vance wish to solve for the people of Springfield. After all, it seems that Trump’s proposed “solution” to the real problems in Springfield is mass deportation. Vance has also claimed, incorrectly, that the migrants are there illegally. His claim seems to be that he disagrees with the legal process by which the migrants are there legally and hence they are, on his view, there illegally. This does not seem to be how the law works. Given this, the pet eating story makes sense: the story was not created to draw attention to real problems, it was created to “justify” the deportation of migrants and to create support for this by making people afraid and angry. If migrants presented a real and significant threat, Vance and Trump would not need to create stories. They could simply present an abundance of evidence to prove their claim. The fact that they need to rely on the debunked story only serves as evidence that they lack evidence to support their view.

If we consider all the people who are likely to be affected by this untruth, then Vance’s approach is clearly morally wrong. As noted above, Springfield has already been harmed by this story. It has also served to fan the flames of racism and prejudice in general, inflicting harm across the United States. This shows that making up stories of the sort Vance is talking about is not justified on utilitarian grounds.

But if the scope of moral concern is narrowed to Trump and his supporters, then it can be argued that the story does benefit them. While Trump and Vance might seem foolish, evil, and crazy to some for making and doubling down on this repeatedly debunked claim, their anti-migrant stance and this sort of remark could appeal to Trump’s base. While the polls vary, as this is being written Trump is predicted to have at least a 50% chance of winning, which suggests that this story might be benefiting him. In that case, Vance can justify creating stories on the grounds that deceit helps him and Trump while the truth would hurt them. But if Trump loses and this story plays a role in that loss, then it would have turned out that the story was bad for Trump.

 

While Skynet is the most famous example of an AI that tries to exterminate humanity, there are also fictional tales of AI systems that are somewhat more benign. These stories warn of a dystopian future, but it is a future in which AI is willing to allow humanity to exist, albeit under the control of AI.

An early example of this is the 1966 science-fiction novel Colossus by Dennis Feltham Jones. In 1970 the book was made into the movie Colossus: The Forbin Project. While Colossus is built as a military computer, it decides to end war by imposing absolute rule over humanity. Despite its willingness to kill, Colossus’ goal seems benign: it wants to create a “new human millennium” and lift humanity to new heights. While a science-fiction tale, it provides an interesting thought experiment about handing decision making over to AI systems, especially when those decisions can and will be enforced. Proponents of using AI to make decisions for us can sound like Colossus: they assert that they have the best intentions and that AI will make the world better. While we should not assume that AI will lead to a Colossus scenario, we do need to consider how much of our freedom and decision making should be handed over to AI systems (and the people who control them). As such, it is wise to remember the cautionary tale of Colossus and the possible cost of giving AI more control over us.

A more recent fictional example of AI conquering but sparing humanity is the 1999 movie The Matrix. In this dystopian film, humanity has lost its war with the machines but lives on in the virtual reality of the Matrix. While the machines claim to be using humans as a power source, humans are treated relatively well in that they are allowed “normal” lives within the Matrix rather than being, for example, lobotomized.

The machines rule over the humans, and it is explained that the machines have provided them with the best virtual reality humans can accept, indicating that the machines are somewhat benign. There are also many non-AI sci-fi stories, such as Ready Player One, that involve humans becoming addicted to (or trapped in) virtual reality. While these stories are great for teaching epistemology, they also present cautionary tales of what can go wrong with such technology, even the crude versions we have in reality. While we are (probably) not in the Matrix, most of us spend hours each day in the virtual realms of social media (such as Facebook, Instagram, and TikTok). While we do not have a true AI overlord yet, our phones exert great control over us through the dark-pattern designs of apps that attempt to rule our eyes (and credit cards). While considerable harm is already being done, good policies could help mitigate these harms.

AI’s ability to generate fake images, text, and video can also help trap people in worlds of “alternative facts,” which can be seen as discount versions of the Matrix. While AI has, fortunately, not lived up to the promise (or threat) of being able to create videos indistinguishable from reality, companies are working hard to improve, and this is something that needs to be addressed by effective policies, as well as by critical thinking skills.

While science fiction is obviously fiction, real technology is often shaped and inspired by it. Science fiction also provides us with thought experiments about what might happen and hence it is a useful tool when considering cyber policies.

 


 

On September 18, 2024, thousands of pagers exploded in Lebanon, killing several people and injuring thousands. The next day, walkie-talkies exploded, killing and injuring more people. As the attack targeted Hezbollah members, Israel has been blamed for the explosions.

While some initially believed that malware was used to overload the batteries, experts now believe that explosive material was placed within the pagers somewhere along the supply chain. While the exploding pagers were Gold Apollo brand, the company claims that they were manufactured under license by another company, BAC. Manufacturing under license is a common practice and hence would not have seemed suspicious. This attack raises ethical concerns.

On the face of it, killing and injuring people is morally wrong. But as we routinely engage in violent disputes, we have developed an entire ethics of violence that deals with when we can morally kill people, ethical means of killing, and morally acceptable targets. If a nonstate actor, such as a criminal organization or a lone psychopath, had launched such an attack against civilians, it would rightfully be condemned by all as an evil action. After all, only an evil person would try to kill thousands of people with exploding pagers. But since the intended targets were members of Hezbollah, and this organization is in conflict with Israel, some would argue that this attack falls under the ethics of violence in the context of state and group conflicts. This, as many philosophers who specialize in the ethics of conflict would argue, is a key factor in assessing the morality of the attack. In this context, some would argue, the attack must be subject to a nuanced analysis and cannot simply be categorized as immoral because people were killed and injured.

Those presenting a moral defense of the attack would most likely focus on the fact that Israel allegedly targeted members of Hezbollah as part of an ongoing conflict. A critic would point out that the explosive devices killed and injured people who were not members of Hezbollah, including children. Those defending the attack would point out that such collateral casualties are an acceptable part of conflict and note that a conventional military attack against Hezbollah (such as airstrikes) would have killed many more innocent people as well as causing property damage. That is, the use of pager bombs has a moral advantage over less focused attacks. One could also argue that the attack was directed against Hezbollah’s communication system and enemy communication systems are usually considered morally legitimate targets in conflict, even when targeting them kills people.

Those who see the attack as immoral would certainly focus on the fact that the bombs were detonated without those controlling them knowing who might get hurt. And, in fact, children and people who are not members of Hezbollah were harmed.  On this view, the attack could be seen as indiscriminate. Those defending the attack can, of course, point out the awful truth that attacks that are even more indiscriminate are often claimed to be morally acceptable. That is, we have a moral tolerance for collateral death and injury that makes the attack acceptable or perhaps even praiseworthy in its relative restraint compared to, for example, airstrikes against schools and hospitals that are claimed to target enemies.

One might also express moral concern about the means of the attack, that an exploding pager is a morally dubious weapon. While conventional weapons are indeed terrifying, transforming a mundane device like a pager into a weapon of war seems aimed at creating terror: you might think that perhaps any device at any time could kill you. Defenders of the attack might note that that same fear can be created by conventional means, such as airstrikes or artillery barrages that could happen at any time. There are also more general moral concerns about the implications of how the attack was possible.

While the details are not yet known, it seems most likely that Israel (allegedly) got control over part of the supply chain for the pagers and was able to install explosives. In addition to the practical concerns this raises, there are also moral concerns.

As experts have noted, this is the first large scale attack of its type. While the idea has been around a long time, this attack has put the concept into the world news and hence into the minds of people who could do the same thing. While such an operation would be challenging for small scale actors, it is obviously something that a state actor could do and is also within the means of a well-funded terrorist or criminal organization. As such, one moral harm of the attack is that the effectiveness of this means of attack has been proven and advertised. It is probably only a matter of time before similar attacks are launched. To help prevent this, companies will need to strengthen their supply chain security to prevent tampering, and efforts will need to be made to check devices to ensure they are safe.

But there is the obvious concern that companies could be in on such attacks and hence better supply chain security would not help when the threat is the company handling such security. It is also easy to imagine state actors using this method of attack.  I suspect that some people in the United States are now thinking that phones imported from China should be checked for explosives. Or worse, such as biological or chemical weapons concealed in devices. Imagine, as a horror scenario, a smart device that releases bacteria or viruses when sent the right command.

There is also some psychological harm, as people are now probably a bit worried about their phones and other devices. While we already needed to be concerned about our smart devices being compromised, we now need to think about the possibility of explosives in those devices. After all, it requires only a small amount of explosives and a data connection such as wi-fi or a cell network to make almost any device into a remote-controlled bomb. This has been true for a long time, but now we not only know it can happen, we feel it can happen because we have seen it. And that can cause fear. This is the type of attack that changes the shape of conflict.