In my previous essays on sexbots I focused on versions that are mere objects. If a sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status needed to be wronged. The sexbots of the near future will, barring any sudden and unexpected breakthroughs in AI, still be objects. However, science fiction includes intelligent, human-like robots (androids). Intelligent beings, even artificial ones, would seem likely to be people. In terms of sorting out when a robot should be treated as a person, one test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the test.
Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between what would be mere automated responses and actual talking:
How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.
While Descartes does not deeply explore the moral distinctions between beings that talk (which have minds on his view) and those that merely make noises, it does seem reasonable to take a being that talks as a person and grant it the appropriate moral status. This provides a means to judge whether an advanced sexbot is a person: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would lack moral status.
Having sex with a sexbot that can pass the Cartesian test would seem morally equivalent to having sex with a human person. As such, whether the sexbot freely consented would be morally important. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is done). If such sexbots were mistreated, this would be morally on par with mistreating a human person.
It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition. That is, it is not whether one is made of silicon or carbon that matters.
It might be argued that passing the Cartesian/Turing Test would not prove that a robot is self-aware, so it would still be reasonable to hold that it is not a person. On this view, such a robot would merely seem to be a person: it would be acting like a person without actually being one. While this is worth considering, the same sort of argument can be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine if another human is actually self-aware. This is the problem of other minds: I can see your behavior but must infer that you are self-aware based on an analogy to myself. Hence, I do not know that you are aware since I am not you. And, unlike Bill Clinton, I cannot feel your pain. From your perspective, the same is true about me: unless you are Bill Clinton, you cannot feel my pain. As such, if a robot acted in an intelligent manner, it would have to be classified as a person on these grounds. To fail to do so would be a mere prejudice in favor of the organic over the electronic.
In reply, it might be pointed out that some people believe other people should be used as objects. Those who would use a human as a thing would see nothing wrong about using an intelligent robot as a mere thing.
The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using others in that manner. The other obvious reply is that such people are evil.
Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we will have as much evidence that robots have souls as we now do for humans having souls. This is to say, no evidence at all.
One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as human as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligence: such beings are intended to be owned by people to do usually onerous tasks, but to the degree they are intelligent, they would be slaves. And enslavement is wrong.
It could be countered that it is better that evil humans abuse sexbots rather than other humans. However, it is not clear that would be a lesser evil—it would just be an evil against a synthetic person rather than an organic person.

“… a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots lack the moral status ….”
It seems to me that this is just an excuse to free myself of MY own moral status: for that doesn’t depend on anything external to me, but my own self. If I am committing a depraved action, it is still a depraved action. If I commit it against a human, it harms them too and the damage is much greater in that it involves not only me, but someone else.
I think this is what Socrates meant about people doing bad things and harming their own selves. But of course, only a highly moral person would think like that.
I would not see a problem having sex with a ‘sexbot’, provided that it’s done in moderation, to free oneself of sexual urges. Not much different than watching porn while masturbating, and getting on with the rest of the day. It’s done just to get personal relief. But I don’t watch illegal porn. In fact, I don’t even watch hard core porn, but mostly just beautiful women. And I am not obsessed with that.
So a sexbot would be a ‘sophisticated sex toy’, I guess. However, it is unnecessary: there are already ways for me to get rid of my sexual urges harmlessly, and what’s even better, it saves me money, time, delusions, etc. I have virtually no interest in running after a woman; to me it’s not worth it at all. Even the ‘opportunities’ I had that I have ‘lost’, I don’t regret at all—quite the contrary, I probably saved myself from adding to the bad memories of other people.
But to conclude, to me all this AI, sexbot, etc. crap is unnecessary. And whatever is unnecessary is actually bad. As Epictetus puts it: ‘The philosopher doesn’t move a finger unless necessary.’ I really believe that, and I think this guideline actually solves a lot of problems that should not even exist at all.
For all the junk we create merely adds to the complexity of this world and our lives. We never needed about 80 per cent of all the junk we create in order to sell it and make money. We didn’t even need that money in the first place, but as always, greed and ‘boundless egoism’ live and fester in us. And that’s the real problem.
The story about Socrates arriving at the market and, looking around, shaking his head and saying ‘How much stuff I don’t need!’, always comes to my mind. Thank you for your essay!
Sorry about the grammatical errors: I start to edit my thoughts and then get tired and leave stuff halfway, so it becomes a mess in places.
I could not access your contretemps question: are sexbots people? It is 9:18 a.m. on Saturday and internet traffic must be buzzing busily, as usual. The question, to me, gets to the crux of this matter. The way I view it is: if it is not flesh and blood of some sort, it is neither human nor animal—nor vegetable, for that matter. Living tissue distinctly distinguishes itself in a manner which mechanics, robotics and, now, AI cannot. This is a grave new world, Professor, and I have never been keen on moral relativism. Even plants, for example, exhibit rudimentary consciousness. When we were quite young, my brother and I had a pet duck…no dogs or cats, see, just a duck, who would wait until he saw a large, orange vehicle bringing us home from school. He would then waddle down our driveway to greet us. Schoolmates on the bus made fun of that. To those who had dogs or cats as pets/companions, this was amusing and curious. Living tissue distinguishes itself. Even when it is only a duck, masquerading (?) as a pet and companion. And, no, we did not molest the duck. That was never an issue. As far as we knew, he died of old age. Unusual for a domestic duck. My family was always unusual—a little beyond mainstream. There it is.
It seems to me your conclusion to this post is, uh, conclusive. The gulf between AI and human consciousness is, in my view, wide. In my primitive, anthropocentric way, I hope I am right. Selfish of me. My fifth-generation tablet—a pretty useful machine—did not have “anthropocentric” in its dictionary.
Some “anthro” must have thought about that, while another anthro might have said: let’s not go there. Just so. Here is where cynicism hits fourth, fifth or sixth gear. I have spent about ten thousand dollars, out-of-pocket, on healthcare this year, because I have not put money into a government-run plan or something else. Veiled assurances do not encourage or impress me. It is a cleverly crafted and sanctioned scam. I still pay attention. Too many folks don’t. They feel helpless—trapped in a system. They are. Thinkers I have known left the USA because they rejected advancing government clampdowns. Some made good decisions; others, not so much. As I have claimed: not only do we not always get what we want; we don’t always want what we get. Exactly.