One interesting phenomenon is that groups often adopt a set of stock views and arguments that members deploy almost mechanically in their defense. In many cases, the pattern of responses seems almost robotic: in many “discussions” I can predict which stock argument will be deployed next.
I have even found that if I can lure someone off their pre-established talking points, they are often quite at a loss as to what to say next. This, I suspect, is a sign that the person does not really have arguments of his/her own but is merely putting forth established dogmarguments (dogmatic arguments).
Apparently someone else noticed this phenomenon, specifically in the context of global warming arguments, and decided to create his own argubot. Nigel Leck created a script that searches Twitter for key phrases associated with stock arguments against the view that humans have caused global warming. When the argubot finds a foe, it engages by sending a response tweet containing a counter to the argument (and relevant links).
In some cases the target of the argubot does not realize that s/he is arguing with a script rather than a person. The argubot helps create that impression by responding with a variety of “prefabricated” arguments when the target repeats an argument. Its repertoire also goes beyond global warming: for example, it is stocked with arguments about religion, which further helps it maintain the impression that it is a person.
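To make the mechanism concrete, here is a minimal Python sketch of how such a bot might work. Leck's actual code is not public, so everything here is assumed for illustration: the trigger phrases, the canned counter-arguments, and the search_tweets and send_reply helpers, which stand in for whatever Twitter client the real script uses.

```python
import itertools
import time

# Illustrative stock responses keyed by trigger phrase; the real bot's
# data set is not public, so these entries are placeholders. Cycling
# through several replies lets the bot vary its answer when a target
# repeats the same argument.
COUNTERS = {
    "sun causes warming": itertools.cycle([
        "Solar output has been flat or falling for decades: <link>",
        "Satellite data rule out the sun as the recent driver: <link>",
    ]),
    "climate has always changed": itertools.cycle([
        "Past natural change does not preclude a human cause now: <link>",
        "It is the current rate of change that is anomalous: <link>",
    ]),
}

def find_counter(text):
    """Return the next canned counter-argument matching the tweet, if any."""
    lowered = text.lower()
    for phrase, replies in COUNTERS.items():
        if phrase in lowered:
            return next(replies)
    return None

def run_argubot(search_tweets, send_reply, poll_seconds=60):
    """Main loop. search_tweets and send_reply are hypothetical stand-ins
    for a real Twitter client; tweets are assumed to be dicts with
    'user' and 'text' fields."""
    while True:
        for tweet in search_tweets("global warming hoax"):
            reply = find_counter(tweet["text"])
            if reply:
                send_reply(tweet["user"], reply)
        time.sleep(poll_seconds)
```

The design point is simply pattern matching plus rotation over prefabricated replies: nothing in the loop understands the argument, which is exactly why such a bot cannot tell when people are joking.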
While the argubot is reasonably sophisticated, it is not quite up to the Turing test. For example, it cannot discern when people are joking. While it can fool people into thinking they are arguing with a person, it is important to note that the debate takes place in the context of Twitter. As such, each tweet is limited to 140 characters, which makes it much easier for an argubot to pass itself off as a person. Also worth considering is the fact that people tend to have rather low expectations for the contents of tweets, which likewise makes it easier for an argubot to masquerade as a person. However, it is probably just a matter of time before a bot passes the Tweeter Test (being able to properly pass itself off as a person in the context of Twitter).
What I find most interesting about the argubot is not that it can often pass as a human tweeter, but that the argumentative process with its targets can be automated in this manner. This inclines me to think that the people the argubot is arguing with are also, in effect, argubots. That is, they are also “running scripts” and presenting prefabricated arguments they have acquired from others. As such, it could be seen as a case of a computer-based argubot arguing against biological argubots, with both sides relying on scripts and data provided by others.
It would be interesting to see the results if someone wrote another argubot to engage the current argubot in debate. Perhaps in the future argumentation will be left to the argubots and the silicon tower will replace the ivory tower. Then again, this would probably put me out of work.
One final point worth considering is the ethics of the argubot at hand.
One concern is that it seems deceptive: it creates the impression that the target is engaged in a conversation with a person when s/he is actually just engaged with a script. Of course, the argubot does not state that it is a person, nor does it make use of deception to harm the target. Given its purpose, arguing about global warming, it seems irrelevant whether the arguing is done by a person or a script. This contrasts with cases in which it does matter, such as a chatbot designed to trick someone into thinking that another person is romantically interested in them or to otherwise engage with the intent to deceive. As such, the argubot does not seem to be unethical in regards to the fact that people might think it is a person.
Another concern is that the argubot seeks out targets and engages them (an argumentative Terminator or Berserker). This, some might claim, could be seen as a form of spamming or harassment.
As far as the spamming goes, the argubot does not deploy what would intuitively be considered spam in terms of its content. After all, it is not trying to sell a product, etc. However, it might be argued that it is sending out unsolicited bulk tweets, which could thus be regarded as spam. Spamming is rather well established as immoral (if an argument is wanted, read “Evil Spam” in my book What Don’t You Know?), and if the argubot is spamming, then it is acting unethically.
While the argubot might seem like a spambot, one way to defend it against this charge is to note that it provides mostly relevant responses comparable to what a human would legitimately send in reply to a tweet. Thus, while it is automated, it is arguing rather than spamming. This seems to be an important distinction. After all, the argubot does not try to sell male enhancement, scam people, or get people to download a virus. Rather, it responds to arguments that can be seen as inviting a response, be it from a person or a script.
In regards to the harassment charge, the argubot does not seem to be engaged in what could legitimately be considered harassment. First, the content does not seem to constitute harassment. Second, the context of the “debate” is a public forum (Twitter) that explicitly allows such interactions to take place, whether they involve just humans or humans and bots.
Obviously, an argubot could be written that would actually spam or engage in harassment. However, this argubot does not seem to cross the ethical line in regards to such behavior.
I suspect that we will see more argubots soon.