An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
Whether or not there are jobs that simply cannot be automated depends on the limits of technology. But these limits keep expanding, and past predictions can turn out to be wrong. For example, early attempts to create software that would grade college-level papers were not very good. But as this is being written, my university sees using AI in this role (with due caution and supervision) as a good idea. Cynical professors suspect the goal is to replace faculty with AI.
One day, perhaps, the pinnacle of automation will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.
Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human, or it might do the job in a different way that is less desirable. However, there could be reasonable grounds for accepting lesser quality or a different approach. For example, machine-made items usually lack the individuality of human-crafted items, but most people see the gain in lowered costs and increased productivity as well worth it. Returning to teaching, AI might be inferior to a good human teacher, but the economy, efficiency, and consistency of the AI could make it worth using from an economic standpoint. One could even argue that such AI educators would make education more widely available.
There might, however, be cases in which a machine could do certain aspects of the job adequately yet still be rejected because it does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.
As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human or merely be able to perform their tasks.
An argument against having machine caregivers and companions is one I considered in the previous essay, namely a moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories. As another example, a child deserves someone who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether this is necessary for these jobs.
One way to look at this is to consider the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money but has real feelings for them.
On the one hand, it could be argued that caregivers and companions who really do care and feel genuine emotional attachments do a better job, and that this connection is something people deserve. On the other hand, what is expected of paid professionals is that they complete their tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can perform a role without feeling the emotions portrayed, a professional could do the job without caring about the people they serve. That is, a caregiver need not actually care; they just need to perform their tasks.
While it could be argued that a lack of feeling would show in a caregiver's performance, this need not be the case. A professional merely needs to be committed to doing the job well. That is, one need only care about the tasks, regardless of what one feels about the person. Conversely, a person could care a great deal about the person she is caring for yet be awful at the job.
If machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions they are portraying; they need only create a believable appearance of feeling them.
As such, an inability to care would not be a disqualification for a caregiving (or escort) job, whether the candidate is a robot or a human. Provided that the human or machine can perform the observable tasks, his, her, or its internal life (or lack thereof) is irrelevant.
