Peaceful protest is an integral part of America. As is murder. Back in 2016 the two collided in Dallas, Texas: after a peaceful protest, five police officers were murdered. While some might see it as ironic that police rushed to protect people protesting police violence, this reminds us of how police are supposed to function in a democratic society. This stands in stark contrast with the unnecessary deaths inflicted on citizens by bad officers, deaths that once caused the nation to briefly consider that such deaths might be worth preventing.

While violence and protests are worthy of in-depth discussion, my focus will be on the ethical questions raised by the use of a robot to deliver the explosive device that killed one of the attackers. While this matter was addressed by philosophers more famous than I, I thought it worthwhile to look back to 2016 to see if my thoughts have changed.

While the police robot is called a robot, it is more accurate to say it is a remotely operated vehicle. After all, the term "robot" implies some autonomy on the part of the machine. The police robot is remote controlled, like a sophisticated version of an RC toy. In fact, one could do the same thing by putting an explosive on a toy.

Since there is a human operator directly controlling the machine, the ethics of the matter are the same as if conventional machines of death (such as a gun) had been used to kill the shooter. On the face of it, the only difference is in perception: a killer robot delivering a bomb sounds more ominous and controversial than an officer using a firearm. The use of remote-controlled vehicles to kill targets is nothing new: the basic technology has been around since at least WWII, and the United States has killed many people with drones.

If this had been the first case of an autonomous police robot sent to kill (like an ED-209), then the issue would be different. However, it is a case that falls under the established ethics of killing, only with a slight twist with regard to the delivery system. That said, it can be argued that the use of a remote-controlled machine is a morally relevant change.

Keith Abney raised a very reasonable point: if a robot could be sent to kill a target, it could also be sent to use non-lethal force to subdue the target. In the case of human officers, the usual moral justification of lethal force is that it is the best option for protecting themselves and others from a threat. If the threat presented by a suspect can be effectively addressed in a non-lethal manner, then that is the option that should be used. The moral foundation for this is set by the role of police in society: they are supposed to protect the public and should take every legitimate effort to deliver suspects for trial. They are not supposed to function as soldiers sent to defeat enemies. There are, of course, cases in which suspects cannot be safely captured and lethal force can be justified. A robot (or, more accurately, a remote-controlled machine) can radically change the equation.

While a police robot is an expensive piece of hardware, it is not a human being (or even an artificial being). As such, it only has the moral status of property. In contrast, even the worst human criminal is a human being and thus has a moral status above that of an object. If a robot is sent to engage a human suspect, then in many circumstances there would be no moral justification for using lethal force. After all, the officer operating the machine is in no danger. This should change the ethics of the use of force to match other cases in which a suspect needs to be subdued but presents no danger to the officer attempting arrest. In such cases, the machine should be outfitted with less-than-lethal options.

While television and movies make subduing someone safely seem easy, it is difficult to do. For example, the classic rifle butt to the head is a fictional favorite for knocking someone out, when doing that in the real world would cause serious injury or even death. Tasers, gas weapons, and rubber bullets can also cause serious injury or death. However, less-than-lethal options are less likely to kill a suspect and thus allow them to be captured for trial, which is supposed to be the point of law enforcement. Robots could be designed to both withstand gunfire and securely grab a suspect. While this would likely result in injury (such as broken bones) and could kill, it would be less likely to kill than a bomb. An excellent example of a situation in which a robot would be ideal is capturing an armed suspect barricaded in a structure.

It must be noted that there will be cases in which the use of lethal force via a robot is justified. These would include cases in which the suspect presents a clear and present danger to officers or civilians and the best chance of ending the threat is the use of such force. An example of this might be a hostage situation in which the hostage taker is likely to kill hostages while the robot is trying to subdue them with less-than-lethal force.

While police robots have long been the stuff of science fiction, they do present a potential technological solution to the moral and practical problem of keeping officers and suspects alive. While an officer might be legitimately reluctant to stake her life on less-than-lethal options when directly engaged with a suspect, an officer operating a robot faces no such risk. As such, if the deployment of less-than-lethal options via a robot would not put the public at unnecessary risk, then it would be morally right to use such means.
