Small. Silent. Deadly. The perfect assassin or security system for the budget conscious. Send a few after your enemy. Have a few lurking about in security areas. Make your enemies afraid. Why drop a bundle on a bug, when you can have a Tarantula?
-Adrek Robotics Mini-Cyberform Model A-2 “Tarantula” sales blurb, Chromebook Volume 3.
The idea of remote-controlled (or autonomous) mechanical assassins is an old one in science fiction. The first time I read about such a device was in Frank Herbert’s Dune: he came up with the idea for a lethal, remote-operated drone known as a hunter-seeker. This nasty machine would be guided to a target and kill her with a poison needle. The idea stuck with me and, when I was making Ramen-noodle money writing game material, I came up with (and sold) the idea for three remote-controlled killers produced by my rather evil, but imaginary, company called Adrek Robotics. These included the spider-like Tarantula, the aptly named Centipede and the rather unpleasant Beetle. These killers were refined versions of machines I had deployed, much to the horror of my players, in various Traveller campaigns in the 1980s (to this day, one player carefully checks his toilet before using it).
These machines, in my fictional worlds, work in a fairly straightforward manner. They are relatively small robots that are armed with compact, but lethal and vicious, weapon systems (such as poison injecting needles). These machines can operate autonomously, or as the description in Chromebook Volume 3 notes, for particularly important missions they can be remotely controlled by a human or AI. Their small size allows them to infiltrate and kill (or spy). Not surprisingly, various clever ways were developed to get them close to targets, ranging from mailing them concealed among parts to hiding them in baked goods.
While, as far as I know, no real company is cranking out actual Tarantulas, the technology does exist to create a basic model of my beloved killer spider. As might be imagined, this sort of little assassin raises a wide variety of concerns.
Some of these concerns are practical matters relating to law enforcement, safety and military operations. Such little assassins would presumably be easy to deploy against specific targets (or random targets when used as weapons of terror—imagine knowing that a killer machine could pop out of your donut or be waiting in your toilet) and they could be difficult or impossible to trace. Presumably governments, criminals and terrorists would not include serial numbers or other identifying marks on their killers (unless they wanted to take credit).
Obviously enough, people can already kill each other very easily. What these machines would change is that they would allow anonymous killing from a distance and, as the technology developed, at very low cost. It is the anonymous and low-cost aspects that are the most worrisome in regards to maintaining safety. After all, what often deters individuals and groups from engaging in bad behavior is fear of being caught and being subject to punishment. What also deters people is the cost of engaging in the misdeed. Using a terrorism example, sending human agents to the United States to commit terrorist acts could be costly and risky. Secreting some little assassins, perhaps equipped to distribute a highly infectious disease, in a shipping container could be rather cheap and without much risk.
There are also moral concerns. In general, the ethics of using little assassins to murder people is fairly clear—it falls under the ethics of murder and assassination. That is, it is generally wrong. There are, of course, the stock moral arguments for assassination. Or, as some prefer to call it, targeted killing.
One moral argument in favor of states employing little assassins is based on their potential precision. Currently, the United States engages in targeted killing (or assassination) using missiles fired from drones. While this is morally superior to area bombing (since it reduces the number of civilians slaughtered and the collateral damage to property), a little assassin would be even better. After all, a properly targeted or guided little assassin would kill only the target, thus avoiding all collateral damage and the slaughter of civilians. Of course, there is still the broader ethical concern about states engaging in what can be justly described as assassination. But this issue is distinct from the specific ethics of little assassins.
Somewhat oddly, the same argument can be advanced in favor of criminal activities—while such activities would be wrong, a precise kill would be morally preferable to, for example, bullets being sprayed into a crowd from a passing car.
In addition to the ethics of using such machines, there is also the ethics of producing them. It is easy enough to imagine harmless drones being modified for lethal purposes (for example, a hobby drone with a homemade bomb attached). In such cases, the manufacturer would be no more morally culpable than a car manufacturer whose car was used to run someone over. It is also easy to imagine lethal drones being manufactured—since that is already being done.
While civilians can buy a variety of weapons, it seems likely that it will be hard to justify civilian sales of lethal drones. After all, they do not seem to be needed for legitimate self-defense, for hunting or for legitimate recreational activity (although piloting a drone in a recreational dogfight would probably be awesome). However, being a science fiction writer, I can easily imagine the NRA pushing hard against laws restricting the ownership of lethal drones. After all, the only thing that can stop an evil guy with a lethal drone is a good guy with a lethal drone. Or so it might be claimed.
Although I do dearly love my little assassins, I would prefer that they remain in the realm of fiction. However, if they are not already being deployed, it is but a matter of time. So, check your toilet.
Michael LaBossiere says
“Erich Fromm states that “there is good clinical evidence for the assumption that destructive aggression occurs, at least to a large degree, in conjunction with a momentary or chronic emotional withdrawal.” The situations described above represent a breakdown in the psychological distance that is a key method of removing one’s sense of empathy and achieving this “emotional withdrawal.” Again, some of the mechanisms that facilitate this process include:
• Cultural distance, such as racial and ethnic differences, which permit the killer to dehumanize the victim
• Moral distance, which takes into consideration the kind of intense belief in moral superiority and vengeful/vigilante actions associated with many civil wars
• Social distance, which considers the impact of a lifetime of practice in thinking of a particular class as less than human in a socially stratified environment
• Mechanical distance, which includes the sterile Nintendo-game unreality of killing through a TV screen, a thermal sight, a sniper sight, or some other kind of mechanical buffer that permits the killer to deny the humanity of his victim”
(Grossman, D. (2009). On Killing: The Psychological Cost of Learning to Kill in War and Society, emphasis added)
Michael LaBossiere says
One of the first discussions I saw about killing at a distance was about the use of bombers and artillery. But those methods do put the soldier in the action, at least potentially.
Interestingly, I heard a piece about drones on NPR, and a military drone operator noted that operators did suffer from something like PTSD.
I would guess that simply sending off a kill bot would have even less impact on a person, since she would not even be guiding the machine on its kill.
Distance makes a difference when it comes to killing. I read Grossman’s book years ago and thought it was important reading. If you’re far enough away that you don’t get blood and brains splattered on you, it makes a difference psychologically.