There are justified concerns that AI tools are useful for propagating conspiracy theories, often in the context of politics. Beyond the usual fears that AI can be used to generate fake images, a powerful feature of such tools is that they can flood the zone with untruths, because chatbots are relentless and never grow tired. As experts on rhetoric and critical thinking will tell you, repetition is an effective persuasion strategy. Roughly put, the more often a human hears a claim, the more likely they are to believe it. While repetition provides no evidence for a claim, it can make people feel that it is true. Although this allows AI to be easily weaponized for political and monetary gain, AI also has the potential to fight belief in conspiracy theories and disinformation.

While conspiracy theories have existed throughout history, modern technology has supercharged them. For example, social media provides a massive reach for anyone wanting to propagate such a theory. While there are those who try to debunk conspiracy theories or talk believers back into reality, efforts by humans tend to have a low success rate. But AI chatbots seem to have the potential to fight misinformation and conspiracy theories. A study led by Thomas Costello, a psychologist at American University, provides some evidence that a properly designed chatbot can talk some people out of conspiracy theories.

One advantage chatbots have over humans in combating conspiracy theories and misinformation is, in the words of Kyle Reese in The Terminator, “It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” While we do not want chatbots to cause death, this relentlessness enables a chatbot to counter the Gish gallop (also known as the firehose of falsehood) strategy. This involves trying to overwhelm an opponent by flooding them with claims without concern for their truth and arguments without concern for their strength. The flood is usually made of falsehoods and fallacies. While this strategy has no logical merit, it can have considerable psychological force. For those who do not understand the strategy, it will appear that the galloper is winning, since the opponent cannot refute all the false claims and expose all the fallacies. The galloper will also claim to have “won” any unrefuted claims or arguments. While it might seem odd, a person can Gish gallop themselves: they will feel they have won because their opponent has not refuted everything. As would be expected, humans are exhausted by engaging with a Gish gallop and will often give up. But, like a terminator, a chatbot will not get tired or bored and can engage with a Gish gallop for as long as it keeps galloping. But there is the question of whether this ability to engage endlessly is effective.

To study this, the team recruited 2,000 participants who self-identified as believing in at least one conspiracy theory. These participants engaged with a chatbot about a conspiracy theory they believed and then self-evaluated the results of the discussion. On average, the subjects reported that their confidence in the theory was reduced by 20%. These results apparently held for at least two months and applied to a range of conspiracy theory types. This is impressive, as anyone who has tried to engage with conspiracy theorists will attest.

For those who teach critical thinking, one of the most interesting results is that when the chatbot was tested with and without fact-based counterarguments, only the fact-based counterarguments were successful. This is striking since, as Aristotle noted long ago in his discussion of persuasion, facts and logic are usually the weakest means of persuasion. At least when used by humans.

While the question of why chatbots proved so much more effective than humans remains open, one likely explanation is that chatbots, like terminators, do not feel. As such, a chatbot will (usually) remain polite and not get angry or emotional during the chat. It can remain endlessly calm.

Another suggested factor is that people tend not to feel judged by a chatbot and are less likely to feel that they would suffer some loss of honor or face by changing their belief during the conversation. As the English philosopher Thomas Hobbes noted in his Leviathan, disputes over beliefs are fierce and cause great discord, because people see a failure to approve as a tacit accusation that they are wrong, and “to dissent is like calling him a fool.” But losing to a chatbot does not feel the same as losing to a human opponent, as there is no person to lose to.

This is not to say that humans cannot be enraged at computers; after all, rage induced by video games is common. The difference likely lies in the fact that such video games are a form of competition between a human and the computer, while the chatbots in question are not taking a competitive approach. In gaming terms, it is more like chatting with a non-hostile NPC than trying to win a fight in the legendarily infuriating Dark Souls.

Yet another factor that might be involved was noted by Aristotle in his Nicomachean Ethics: “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While Aristotle’s claim can be disputed, it does match the findings of the study. While the chatbot is not the law, people recognize that it is a non-human creation of humans, and it lacks the human qualities that tend to irritate other humans.

While the effectiveness of chatbots needs more study, this does suggest a good use for AI. Conspiracy theorists and people who believe disinformation are unlikely to do a monthly checkup with an AI to see if their beliefs hold up to scrutiny, but anti-conspiracy bots could be deployed by social media companies to analyze posts and flag potential misinformation and conspiracy theories. While some companies already flag content, people are unlikely to doubt the content just because of the flag. Also, many conspiracy theories exist about social media companies themselves, so merely flagging content might serve to reinforce belief in such theories. But a person could get drawn into engaging with a chatbot, and it might be able to help them engage in rational doubt about misinformation, disinformation and conspiracy theories.

Such chatbots would also be useful to people who are not conspiracy theorists and want to avoid such beliefs as well as disinformation. Trying to sort through claims is time consuming and exhausting, so it would be very useful to have bots dedicated to fighting disinformation. One major concern is determining who should deploy such bots, since there are obvious worries about governments and for-profit organizations running them: they have their own interests, and those interests do not always align with the truth.

Also of concern is that even reasonably objective, credible organizations are distrusted by the very people who need the bots the most. And a final obvious concern is the creation of “Trojan horse” anti-conspiracy bots that are actually spreaders of conspiracy theories and disinformation. One can easily imagine a political party deploying a “truth bot” that talks people into believing the lies that benefit that party.

In closing, it seems likely that the near future will see a war of the machines, some fighting for truth and others serving those with an interest in spreading conspiracy theories and disinformation.
