The assassination of Iranian scientist Mohsen Fakhrizadeh might have been conducted by a remote-controlled weapon. While this was still a conventional assassination, it does raise the specter of autonomous assassination machines, or assassin bots. In this context, an assassin bot is one that can conduct its mission autonomously once deployed. Simple machines of this kind already exist. Even a simple land mine can be considered an autonomous killing device, because once deployed it activates according to its triggering mechanism. But when one thinks of a proper assassin bot, one thinks of a far more complicated machine that can seek and kill its target in a sophisticated manner. It could also be argued that a mine is not an assassination machine: while it can be placed in the hope of killing a specific person, it does not seek a specific human target. As such, a proper assassin bot would need to be able to identify its target and attempt to kill that target. To the degree that the bot can handle this process without human intervention, it would be autonomous.
The idea of assassin bots roaming about killing people raises obvious moral concerns. But while the technology would be new, there would be no new moral problems here, with one possible exception. The ethics of assassination involve questions about whether assassination is morally acceptable and debates over specific targets, motivations, and consequences. But unless the means of assassination are especially horrific or indiscriminate, the means are not of special moral concern. What matters morally is that some means is used to kill, be it a punch, a poniard, a pistol, or poison. To illustrate, it would be odd to say that killing Mohsen Fakhrizadeh with a pistol would be acceptable but killing him just as quickly and painlessly with a knife would be wrong. Again, methods can matter in terms of being worse or better ways to kill, but the ethics of whether it is acceptable to assassinate a person are distinct from the ethics of what means are acceptable. Because of this, the use of assassin bots would be covered by established ethics: if assassination is wrong, then using robots would not change this. If assassination can be morally acceptable, then the use of robots would also not change this, unless the robots killed in horrific or indiscriminate ways.
There seem to be two general ways to look at using assassin bots to replace human assassins. The first is that their use would remove the human assassin from the equation. To illustrate, a robot might be sent to poison a dissident rather than sending a human. As such, the moral accountability of the assassin would be absent, although the moral blame or praise would remain for the rest of the chain of assassination. Whether, for example, Vlad sent a human or a robot to poison a dissident, Vlad would be acting the same from a moral standpoint.
The second is that the assassin bot does not remove the assassin from the moral equation, but it does change how the assassin does the killing. To use an analogy, if assassins kill targets with their hands, they are directly engaged in the assassination without the intermediary of a weapon. If an assassin uses a sniper rifle and kills the target from hundreds of yards away, they are still the assassin, as they directed the bullet to the target. If the assassin sends an assassin bot to do the killing, then they have likewise directed the weapon to the target and are the assassin. The exception would be if the assassin bot were a moral agent and could be accountable in ways that a human can be and a sniper rifle cannot. Either way, the basic ethics do not change. But what if humans are removed from the loop?
Imagine, if you will, algorithms of assassination encoded into an autonomous AI. This AI uses machine learning, or whatever is currently in vogue, to develop its own algorithms to select targets, plan their assassinations, and deploy autonomous assassin bots. That is, once humans set up the system and give it basic goals, the system operates on its own.
The easy and obvious moral assessment is that the people who set up the system would be accountable for what it does. Going back to the land mines, this system would be analogous to a very complicated land mine. While it would not be directly activated by a human, the humans involved in planning how to use it and in placing it would be accountable for the death and suffering it causes. Saying that the mine went off when it was triggered would not get them off the moral hook, as the mine has no agency. The same holds for the assassination AI: it would trigger based on its operating parameters, but humans would be accountable for what it does to the degree they were involved. Saying they are not responsible would be like an officer who ordered land mines placed on a road claiming that they are not accountable for the deaths of the civilians killed by those mines. While it could be argued that this accountability is different from that which would arise from killing the civilians in person with a gun or knife, it would be difficult to absolve the officer of moral responsibility. The same holds for those involved in creating the assassin AI.
If the assassin AI developed moral agency, then this would have an impact on the matter, because it would be an active agent and not merely a tool. That is, it would change from being like a mine to being like the humans in charge of deciding when and where to use mines. Current ethics can, of course, handle this situation: the AI would be good or bad in the same way a human would be in the same situation. Likewise, if the assassin bots had moral agency, they would be analogous to human assassins.
It is an interesting, if terrifying, thought. The morality issue would be of no importance to the killing machine, insofar as that machine would simply follow orders and directions. This is no different from other machines that replace humans in doing work of various kinds. Such machines, including our AI algorithms, have neither autonomy nor responsibility, although some minds are currently discussing how we might effectively endow them with personality. So matters of autonomy, responsibility, ethics, and morality would continue to reside with humans. I can't see where ordering an assassination by death bot differs from ordering one by a "hitman." Or woman. Responsibility for the killing still resides with the human contractor(s). The assassin is merely following orders.
No money changes hands or is lost, unless the death bot is vaporized by something or someone. Now we are entering the realm of science fiction. It was sure to get there…