One obvious consequence of technological advance is the automation of jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of many jobs on the automobile assembly line with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
Whether or not there are jobs that simply cannot be automated does depend on the limits of technology and engineering. That is, whether or not a job can be automated depends on what sort of hardware and software it is possible to create. As an illustration, while there have been numerous attempts to create grading software that can properly evaluate and give meaningful feedback on college-level papers, these do not yet seem ready for prime time. However, there seems to be no a priori reason why such software could not be created. As such, perhaps one day the administrator’s dream will come true: a university consisting only of highly paid administrators and customers (formerly known as students) who are trained and graded by software. One day, perhaps, the ultimate ideal will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.
Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human, or it might do the job in a different way that is seen as somewhat less desirable. However, there can be reasonable grounds for accepting the lower quality or the difference. For example, machine-made items generally lack the individuality of human-crafted items, but the gains in lowered costs and increased productivity are regarded as more than offsetting these concerns. Going back to the teaching example, a software educator and grader might be somewhat inferior to a good human teacher and grader, but the economy, efficiency, and consistency of the robo-professor could make it well worthwhile.
There might, however, be cases in which a machine could do the job adequately in terms of completing specific tasks and meeting certain objectives, yet still be regarded as problematic because the machine does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.
As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human—or merely be able to perform their tasks.
An argument against having machine caregivers and companions is one I considered in an earlier essay, namely the moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories; a child deserves a nanny who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether or not this is really necessary for the job.
One way to look at this is to consider the current paid human professionals who perform caregiving and companion tasks: people working in elder-care facilities, nannies, baby-sitters, escorts, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care about the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money, but has real feelings for the person.
On the one hand, it could be argued that caregivers and companions who really do care and form genuine emotional attachments do a better job, and that this connection is something people deserve. On the other hand, what is expected of paid professionals is that they complete the observable tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can expertly perform a role without actually feeling the emotions portrayed, a professional could presumably do the job very well without actually caring about the people she cares for or escorts. That is, a caregiver need not actually care; she just needs to perform the tasks.
While it could be argued that a lack of caring about the person would show in the performance of the tasks, this need not be the case. A professional merely needs to be committed to doing the job well; that is, she needs to care about the tasks, regardless of what she feels about the person. Conversely, a person could care a great deal about the one she is caring for, yet be awful at the job.
Assuming that machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotional state of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions he is performing; he just needs to create a believable appearance that he is feeling what he is showing.
As such, an inability to care would not be a disqualification for a caregiving (or escort) job—whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.