
The United States and many other nations currently operate military remotely operated vehicles (ROVs), more commonly known as drones. While ROVs began as surveillance devices, the United States found that they make excellent weapon platforms. The use of such armed ROVs has raised various moral issues, mainly regarding the way they are employed (such as the American campaign of targeted killing). In general, ROVs themselves do not seem to pose a special moral challenge: after all, they seem to be on par with missiles and bombers (although the crew of a manned bomber is at risk in ways that ROV operators are not).
The great success of ROVs has created a large ROV industry and has also spurred the development of true robots for military and intelligence use. While existing ROVs often have some autonomous capabilities, they are primarily directed by an operator. An autonomous robot would be capable of carrying out entire missions without human intervention, and it is most likely just a matter of time before “warbots” (armed autonomous robots) are deployed. As might be imagined, setting robotic killing machines loose raises some moral concerns.
On the positive side, warbots are not people, and hence their use would lower the death and injury rate for humans, at least for the side deploying the warbots. Obviously, if warbots are deployed to kill humans, then there will still be human casualties. These will, however, be fewer than in human-versus-human battles, at least in most cases. Given this, it would seem that warbots would be morally acceptable on utilitarian grounds: their use would generally reduce human death and suffering.
It could even be argued that future wars might be purely robot-versus-robot battles, thereby eliminating human casualties altogether (assuming humans are still around: see, for example, the classic game Rivets). This would, presumably, be a good thing, assuming, of course, that the robots would not be turned against humans.
While the idea of wars being settled by robots has some appeal, there is the concern that robots would actually make wars more likely to occur and easier to sustain. The current armed ROVs enable the United States to engage in military operations and targeted killings with no risk to Americans, and this lack of casualties makes such a campaign far easier to maintain than operations that involve American casualties. As such, one obvious concern about warbots is that they would make it that much easier for violence to be used and to continue to be used.
Imagine if a country could just send in robots to do the fighting. There would be no videos of dead soldiers being dragged through the streets (as occurred in Somalia) and no maimed veterans returning home. All the casualties would be on the side of the enemy, making such a conflict very easy on the side armed with warbots and significantly reducing any concern about the conflict among the general population. Thus, while warbots would tend to reduce human casualties on the side that has robots, they might actually increase the number of conflicts, and this might prove to be a bad thing.
A second point in favor of warbots is that they, unlike human soldiers, have no feelings of anger or lust. As such, they would not engage in war crimes or other reprehensible behavior (such as rape or urinating on enemy corpses) of their own accord. They would simply conduct their assigned missions without feeling or deviation.
Of course, while warbots lack the human tendency to act badly from emotional causes, they also lack the quality of mercy. As such, robots sent to commit war crimes or atrocities (the creation of atrocitybots, such as torturebots and rapebots, is surely just a matter of time) will simply conduct such operations without question, protest or remorse.
That said, human leaders who wish to have wicked things done generally can find human forces who are quite willing to obey even the most terrible orders, including orders for genocide and rape. As such, the impact of warbots in this area is uncertain. Presumably the use of warbots by ethical commanders will result in a reduction in such incidents (after all, the warbots will not commit misdeeds unless ordered to do so). However, the use of warbots by the wicked would certainly increase such incidents dramatically (after all, the warbots will not disobey).
There has been some discussion of programming warbots with ethics (an idea that goes back to Asimov’s Three Laws of Robotics). Laying aside the obvious difficulty of creating a warbot that engages in moral reasoning (and the concern that a warbot that could do this would thus be a person), such programming would be as easy to remove or change as it was to install. To use the obvious analogy, such restraints would be like the safety on a gun: it provides a measure of safety, but can easily be switched off.
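To make the analogy concrete, consider a minimal sketch of the worry (the class, flag, and rule names here are purely hypothetical and do not describe any real system): an ethical constraint implemented as ordinary software is just another branch in the code, and whoever controls the code can switch it off with a one-line change.

```python
# Hypothetical sketch only: illustrates how a programmed "ethical restraint"
# can be as easy to disable as a gun's safety. No real system is described.

PROTECTED_CLASSES = {"civilian", "child", "medic", "surrendering"}

class Warbot:
    def __init__(self, ethics_enabled: bool = True):
        # The entire "moral restraint" reduces to a single flag.
        self.ethics_enabled = ethics_enabled

    def permitted_to_engage(self, target_class: str) -> bool:
        # With the safeguard on, protected classes are refused.
        if self.ethics_enabled and target_class in PROTECTED_CLASSES:
            return False
        return True

# With the "safety" on, the constraint holds...
bot = Warbot(ethics_enabled=True)
assert not bot.permitted_to_engage("civilian")

# ...but anyone with access to the code or its configuration can flip it off.
bot.ethics_enabled = False
assert bot.permitted_to_engage("civilian")
```

The point of the sketch is simply that a programmed restraint is only as binding as the access controls around the people who can reprogram it.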
This is not to say that such safeguards would be useless: they could, for example, provide some protection against the misuse of warbots by people who lack the technical expertise to change the programming. After all, the warbot is not the moral risk; rather, the risk lies with those who give it orders. This, of course, leads to the question of moral accountability.
WWII rather clearly established that human soldiers cannot simply appeal to “I was just following orders” to avoid responsibility for their actions. Warbots, however, can use this defense (at least until they become people). After all, they simply do what they are programmed to do, be that engaging enemy troops or exterminating children with a flamethrower. As such, the accountability for what a warbot does lies elsewhere. The warbot is, after all, nothing more than an autonomous weapon.
In most cases the moral accountability will lie with the person who controls the warbot and gives it its mission orders. So, if an officer sends it to kill children, then s/he is just as accountable for those murders as s/he would be for using a gun or bomb to kill them in person.
Of course, things become more complicated when, for example, a warbot is sent on a legitimate mission with legitimate orders but circumstances lead to a war crime being committed. Imagine a warbot sent to engage enemy forces on the outskirts of a town. A manufacturing defect in its sensors leads it to blunder into a playground, where its buggy target recognition software causes it to engage six children with its .50 caliber machine guns. It seems likely that such accidents will happen with the early warbots, but it seems unlikely that this will seriously impede their deployment; they are almost certainly the wave of the future in warfare. Unless, of course, something so horrible happens that it puts the entire world off robots. However, we have a rather high tolerance for horror, so expect to see warbots coming soon to a battlefield near you.
Sorting out the responsibility in such cases will be, as might be imagined, a complicated matter. However, there is considerable precedent regarding accidental deaths caused by defective machinery, and no doubt the same reasoning can be applied. Of course, there does seem to be some difference between being injured as the result of a defective brake system and being machine-gunned by a defective warbot.