Many years ago, an episode of the sci-fi buddy cop show Almost Human about sexbots inspired me to revisit the ethics of sexbots. While the show's advanced, human-like models are still fictional, the technological foundations for sexbots do exist, and companies are already manufacturing humanoid robots. As such, it seems well worth considering, once again, the ethical issues involving sexbots, real and fictional.
At this time, sexbots are mere objects: while usually made to look like humans, they lack the qualities that would make them even person-like. As such, ethical concerns about these sexbots do not involve wrongs done to the objects themselves, since presumably they cannot be wronged. But by drawing on Kant's discussion of ethics and animals, it is possible to build a moral view of even basic sexbots that are indisputably objects.
In his ethical theory, Kant is clear that animals are means rather than ends and are mere objects. Rational beings, in contrast, are ends. For Kant, this distinction rests on the fact that rational beings can (as he sees it) choose to follow the moral law. Animals, lacking reason, cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to them. Though they are living beings, they are just among the “objects of our inclinations” that derive their value from the value we give them. Sexbots would, obviously, qualify as paradigm “objects of our inclinations.”
While it might seem odd, Kant argues that we should treat animals well. However, he does so while also trying to avoid giving animals any moral status of their own. Here is how he does it (or tries to do it).
While Kant is not willing to accept that we have direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case, he employs an argument from analogy: if a human doing something would create an obligation to that human, then an animal doing a similar thing would create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog that has served faithfully and well should not be cast aside in its old age.
While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (because, according to Kant, the dog is not rational), so, as Kant sees it, the dog cannot be wronged. Why, then, would it be wrong to shoot an old dog that has become a burden?
Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will probably be damaged. Since, as Kant sees it, humans have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.
Kant discusses how people develop cruelty—they often begin with animals and then work up to harming human beings. As I point out to my students, Kant seems to have anticipated the psychological devolution of serial killers.
Kant goes beyond merely enjoining us not to be cruel to animals; he also encourages us to be kind to them. He even praises Leibniz for the gentleness with which he handled a worm he found. Of course, he encourages this kindness because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are practice for us: how we treat them habituates us in how we treat human beings.
Current sexbots obviously lack any meaningful moral status of their own. They do not feel or think—they are mere machines that might be made to look like humans. As such, they lack all qualities that might give them a moral status of their own.
Oddly enough, sexbots could be taken as being comparable to animals, at least as Kant sees them. After all, for him animals are mere objects and have no moral status of their own. Likewise for sexbots. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well, and not just because he is very dead. This might also apply to sexbots. That is, perhaps it makes no sense to talk about good or bad relative to such objects. Thus, a key issue is whether sexbots are more like animals or more like stones—at least in terms of the matter at hand.
If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how such treatment affects the behavior of the person involved. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also extend to sexbots. For example, if engaging in certain activities with a sexbot would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with a sexbot would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior. It is also worth considering that perhaps people should not engage in any behavior with sexbots—that having sex of any kind with a bot would be damaging to the person’s humanity.
Interestingly enough (or boringly enough), this sort of argument is often employed to argue against people watching pornography. The gist of such arguments is that viewing pornography can condition people (typically men) to behave badly in real life or at least have a negative impact on their character. If pornography can have this effect, then it seems reasonable to be concerned about the potential impact of sexbots on people. After all, pornography casts a person in a passive role viewing other people acting as sexual objects, while a sexbot allows a person to have sex with an actual sexual object.
