Thanks to improvements in medicine, humans are living longer and can be kept alive well beyond the point at which they would once have died. On the plus side, longer life is generally good. On the downside, this longer lifespan and the medical interventions that sustain it mean that people often need extensive care in old age, care that can be a heavy burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.

While current technology is crude, it has potential, and there are real advantages to robot caregivers. The most obvious are that robots do not get tired, do not get depressed, do not get angry, and do not have other responsibilities. As such, they can provide care 24/7/365. This makes them superior, in these respects, to human caregivers, who do get tired, depressed, and angry, and who have many other responsibilities.

There are, of course, concerns about using robot caregivers, such as concerns about their safety and effectiveness. In the case of caregiving robots intended to provide companionship and not just medical and housekeeping services, there are both practical and moral concerns.

There are at least two practical concerns regarding the companion aspect of such robots. The first is whether a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether the AI software will be advanced enough to read a human’s emotions and behavior and generate a proper emotional response. These responses might or might not include conversation; after all, many people find non-talking pets to be good companions. While a talking companion would presumably need, eventually, to be able to pass the Turing Test, it would also need to pass an emotion test: it would need to read and respond correctly to human emotions. Since we humans often fail this test ourselves, there is a broad margin of error. These practical concerns can be addressed technologically, as they are matters of software and hardware. Building a truly effective companion robot might require making it very much like a living thing. The comfort of companionship might be improved by such things as smell, warmth, and texture. That is, the companion should be made reassuring to all the senses.
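To make the “reading” half of this concrete, here is a minimal sketch in Python of what such software amounts to at its simplest. The cues, emotion labels, and rules are all invented for illustration; a real system would extract cues from cameras and microphones and use trained models rather than hand-written rules, but the logical shape is the same: observed behavior in, emotion label out.

```python
from dataclasses import dataclass

# Hypothetical observable cues; the names here are illustrative assumptions.
@dataclass
class Cues:
    smiling: bool
    tears: bool
    raised_voice: bool

def infer_emotion(cues: Cues) -> str:
    """Crude rule-based reading of emotion from outward behavior."""
    if cues.tears:
        return "sad"
    if cues.raised_voice:
        return "agitated"
    if cues.smiling:
        return "happy"
    return "neutral"

# The robot never accesses the person's feelings directly; like us, it
# can only infer them from what is outwardly observable.
print(infer_emotion(Cues(smiling=False, tears=True, raised_voice=False)))  # sad
```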

While the practical problems can be solved with the right technology, there are moral concerns about the use of robot caregiver companions. One is that people might hand off their moral duty to care for family members, but this concern is not specific to robots. After all, a person can hand off their duties to another person, and that raises the same issue.

As for concerns specific to companion robots, one moral concern is about the effectiveness of the care: are robots good enough at their jobs that trusting human lives to them would be morally responsible? While that question is vitally important, a rather intriguing moral concern is that robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate human emotions via cleverly written algorithms that respond to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems morally wrong.
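A deliberately crude sketch (continuing the hypothetical example above, with invented phrasing and labels) shows why the deceit worry has bite: the “caring” reply can be nothing more than a table lookup keyed on the detected emotion label.

```python
# A toy illustration of purely mechanical "caring": the reply is a
# dictionary lookup on the detected emotion. Nothing in this program
# corresponds to feeling anything.
SCRIPTED_REPLIES = {
    "sad": "I'm so sorry. I'm here for you.",
    "agitated": "That sounds stressful. Take your time.",
    "happy": "That's wonderful! Tell me more.",
    "neutral": "How are you feeling today?",
}

def comforting_reply(detected_emotion: str) -> str:
    # Says the right thing at the right time without feeling anything.
    return SCRIPTED_REPLIES.get(detected_emotion, SCRIPTED_REPLIES["neutral"])

print(comforting_reply("sad"))  # "I'm so sorry. I'm here for you."
```

Real companion robots would use far more sophisticated software, but the philosophical point is about the kind of process, not its complexity.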

One obvious response is that even if people know the robot does not really experience emotions, they can still gain value from its “fake” companionship. People often find stuffed animals emotionally reassuring even though they know they are just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect. If someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that makes them what people deserve. One reasonable approach builds on the idea that people can actually feel the emotions they display and can understand the emotions displayed to them. In philosophical terms, humans have (or are) minds, and the robots in question do not. They merely create the illusion of having a mind.

Philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know whether another being has a mind (thoughts, feelings, beliefs, and such)? Some thinkers, the behaviorists (though “thinkers” is surely the wrong term given their view), claimed that there is no mind, just observable behavior. Very roughly put, on this view being in pain is not a mental state but a matter of expressed behavior (pain behavior).

The usual “solution” to the problem is to embrace what seems obvious: I believe other people have minds on the basis of an argument from analogy. I am aware of my own mental states and behavior, and I reason analogically that those who act as I do have similar mental states. For example, I know how I behave when I am in pain, so when I see similar behavior in others, I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. This creates the problem of deception: a person can engage in various forms of deceit. For example, a person can fake being in pain or make an untrue claim about being in love. Piercing these deceptions can be difficult, since humans can be skilled deceivers. However, it is (generally) believed that even a deceitful human is still thinking and feeling, just not in the way they want others to believe.

In contrast, a companion robot is not thinking or feeling what its behavior purports to display, because it does not think or feel at all. Or so it is believed. One reason we think robots do not think or feel is that we can examine a robot and find no emotions or thoughts inside. The robot, however complicated, is just a material machine and is taken to be incapable of thought or feeling.

Long before robots, there were thinkers who claimed that we humans are purely material beings and that a suitable understanding of our mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes our thoughts and emotions and understand the hardware it “runs” on.

Should this goal be achieved, humans and suitably complex robots would seem to be on par: both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot engages in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can be, and has been, argued that there is more to a human person than the material body, that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established, and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, robot companions might still be a useful deceit. Going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person believes it does, for this belief could yield all the benefits of having a human companion.
