In my previous essays on sexbots I focused on versions that are clearly mere objects. If the sexbot is merely an object, then the morality of having sex with it is the same as having sex with any other object (such as a vibrator or sex doll). As such, a human could do anything to such a sexbot without the sexbot being wronged. This is because such sexbots would lack the moral status needed to be wronged. Obviously enough, the sexbots of the near future will fall into the class of objects.

However, science fiction has routinely featured intelligent, human-like robots (commonly known as androids). Intelligent beings, even artificial ones, would seem to have an excellent claim on being persons. In terms of sorting out when a robot should be treated as a person, the reasonable test is the Cartesian test. Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind is the ability to use true language. This notion was explicitly applied to machines by Alan Turing in his famous Turing test. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer passes the test.
Crudely put, the idea is that if something talks, then it is reasonable to regard it as a person. Descartes was careful to distinguish between mere automated responses and actual talking:
How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.
While Descartes does not deeply explore the moral distinctions between beings that talk (that have minds) and those that merely make noises, it does seem reasonable to regard a being that talks as a person and to thus grant it the moral status that goes along with personhood. This, then, provides a means to judge whether an advanced sexbot is a person or not: if the sexbot talks, it is a person. If it is a mere automaton of the sort Descartes envisioned, then it is a thing and would presumably lack moral status.
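To make the contrast concrete, here is a minimal illustrative sketch in Python (my own construction, not anything offered by Descartes or Turing; the trigger phrases and function names are purely hypothetical). It models Descartes's imagined automaton as a fixed table of trigger and response pairs, and shows why such a machine fails any probe it was not pre-arranged to answer, which is precisely the open-ended flexibility the Cartesian/Turing test demands.

```python
# Hypothetical sketch: Descartes's automaton as a fixed table of canned responses.
# The triggers and replies below are invented purely for illustration.
CANNED_RESPONSES = {
    "touch arm": "What do you wish to say to me?",
    "touch leg": "Ouch! That hurts.",
}

def automaton_reply(stimulus: str) -> str:
    """Descartes's machine: a fixed lookup that is silent outside its programmed triggers."""
    return CANNED_RESPONSES.get(stimulus, "")

def replies_to_everything(reply_fn, probes) -> bool:
    """Crude stand-in for the Cartesian test: does the thing produce *some* reply to
    arbitrary probes, rather than only to pre-arranged triggers? (A real test would
    also have to judge whether the replies are appropriate.)"""
    return all(reply_fn(probe) != "" for probe in probes)

# The automaton handles its two triggers but fails any novel probe, so on this
# criterion it remains a mere thing rather than a person.
print(replies_to_everything(automaton_reply, ["touch arm", "touch leg"]))          # True
print(replies_to_everything(automaton_reply, ["touch arm", "what is justice?"]))   # False
```

The point of the sketch is only to mark the gap Descartes identifies: canned responses are easy, while replying appropriately "to everything that may be said in its presence" is the hard part that the test is meant to detect.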
Having sex with a sexbot that can pass the Cartesian test would certainly seem to be morally equivalent to having sex with a human person. As such, whether the sexbot freely consented or not would be a morally important matter. If intelligent robots were constructed as sex toys, this would be the moral equivalent of enslaving humans for the sex trade (which is, of course, actually done). If such sexbots were mistreated, this would also be morally on par with mistreating a human person.
It might be argued that an intelligent robot would not be morally on par with a human since it would still be a thing. However, aside from the fact that the robot would be a manufactured being and a human is (at least for now) a natural being, there would seem to be no relevant difference between them. The intelligence of the robot would seem to be what is important, not its physical composition.
It might also be argued that passing the Cartesian/Turing test would not prove that a robot is self-aware, and hence it would still be reasonable to hold that it is not a person. It would merely be acting like a person without actually being one. While this is a point well worth considering, the same sort of argument could be made about humans. Humans (sometimes) behave in an intelligent manner, but there is no way to determine whether another human is actually self-aware. This is the classic problem of other minds: all I can do is observe your behavior and infer that you are self-aware by analogy to my own case. Hence, I do not know that you are self-aware, since I cannot be you. From your perspective, the same is true of me. As such, if a robot acted in an intelligent manner, it would seem that it would have to be regarded as a person on those same grounds. To fail to do so would be a mere prejudice in favor of the organic.
In reply, it could be pointed out that some people believe that other people can be used as they see fit. Those who would use a human as a mere thing would see nothing wrong with using an intelligent robot as a mere thing.
The obvious response to this is to reverse the situation: no sane person would wish to be treated as a mere thing, and hence such people cannot consistently accept using others in that manner. The other obvious reply is that such people are simply evil.
Those with religious inclinations would probably bring up the matter of the soul. But the easy reply is that we would have as much evidence that robots have souls as we do that humans have souls. Which is to say, no evidence at all.
One of the ironies of sexbots (or companionbots) is that the ideal is to make a product that is as human-like as possible. As such, to the degree that the ideal is reached, the “product” would be immoral to sell or own. This is a general problem for artificial intelligences: they are intended to be owned by people and to do onerous tasks, but to the degree that they are intelligent, they would be slaves.
It could be countered that it would be better for evil humans to abuse sexbots rather than other humans. However, it is not clear that this would actually be a lesser evil; it would simply be an evil committed against a synthetic person rather than an organic one.