An episode of the Fox sci-fi buddy cop show Almost Human featuring sexbots inspired me to revisit the ethics of sexbots. While the advanced, human-like models of the show are still things of fiction, there is already considerable research and development devoted to creating sexbots. As such, it seems well worth considering the ethical issues involving sexbots, both real and fictional.
At this time, sexbots are clearly mere objects—while often made to look like humans, they do not have the qualities that would make them even person-like. As such, ethical concerns about these sexbots would not be concerns about wrongs done to the objects themselves—presumably they cannot be wronged. One potentially interesting way to approach the matter of sexbots is to make use of Kant’s discussion of ethics and animals.
In his ethical theory Kant makes it quite clear that animals are means rather than ends. They are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims that we have no direct duties to animals. They are classified with the other “objects of our inclinations” that derive value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”
Interestingly enough, Kant argues that we should treat animals well. However, he does so while also trying to avoid ascribing animals themselves any moral status. Here is how he does it (or tries to do it).
While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X would obligate us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in his old age.
While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot the dog?
Kant’s answer seems to be rather consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.
Interestingly enough, Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.
Kant goes beyond merely enjoining us not to be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being rather gentle with a worm he found. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are essentially practice for us: how we treat them is training for how we will treat human beings.
In the case of the current sexbots, they obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might happen to be made to look like a human. As such, they lack all the qualities that might give them a moral status of their own.
Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Perhaps this would also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key matter to settle is whether sexbots are more like animals or more like stones—at least in regards to the matter at hand.
If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person engaging in it. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.
Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.