Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Keeping things simple, an autonomous system is one capable of operating independently of direct human control. Autonomy comes in degrees of independence and complexity. It is this capacity for independent operation that distinguishes autonomous systems from those controlled externally.
Toys provide useful examples of this distinction. A wind-up mouse toy has some autonomy: once wound up and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy, since a puppeteer must control it.
Robots provide examples of more complex autonomous systems. Google’s driverless car is an example of an advanced autonomous machine. Once programmed and deployed, it might be able to drive itself to its destination. A normal car is a non-autonomous system, since the driver controls it directly. Some machines allow both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target, after which an operator can take direct control.
Autonomy, at least in this context, is distinct from agency. Autonomy is the capacity to operate (to some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is a connection between autonomy and moral agency in that moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. For example, a puppet is not accountable for what the puppeteer makes it do. Likewise for remote-controlled drones used to assassinate people.
While autonomy is necessary for agency, it is not sufficient. While all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy but no agency. A modern robot drone following a pre-programmed flight plan has a degree of autonomy but lacks agency. If it collided with a plane, it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.
One obvious problem with basing agency on freedom (especially metaphysical free will) is that there is endless debate over this subject. There is also the epistemic problem of how one would know whether an entity had such freedom: the presence of free will seems epistemically indistinguishable from its absence.
As a practical matter, it is often just assumed that people have the freedom needed to be agents. Kant famously took this approach. What he saw as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality, so it should be accepted on that basis.
While humans are generally willing to attribute freedom and agency to other humans, there are allegedly good reasons not to attribute freedom and agency to autonomous machines, even those that might be as complex as (or more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans, they do what they do because they are what they were made to be. This is in contrast with the agency of humans: humans, it is alleged, do what they do because they choose to do so.
This distinction between humans and suitably complex machines seems a mere prejudice favoring organic machines over mechanical machines. If a human were in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot were made to look and act just like a human, people would be inclined to grant it agency, at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. An excellent fictional example of this is Harlan Ellison’s “Demon with a Glass Hand.”
In either case, it would not be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom). The German philosopher Leibniz held that what each person will do is pre-established by their inner nature. On the face of it, this seems to entail that there is no freedom: each person does what they do because of what they are, and they cannot do otherwise. Interestingly, Leibniz holds that people are nonetheless free. However, he does not accept the commonly held view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.
For Leibniz, being metaphysically unfree would involve being controlled from the outside, like a puppet controlled by a puppeteer or a vehicle operated by remote control. In contrast, freedom is acting from one’s own values and character, what Leibniz and the Taoists call “inner nature.” If a person is acting from this inner nature rather than from external coercion, so that the action arises from character, then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this view, humans have agency because they have the required degree of freedom and autonomy.
If this model works for humans, it could apply to autonomous machines as well. To the degree that a machine operates in accord with its “inner nature” and is not under the control of outside factors, it would have agency.
An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then a machine could also be a moral agent.
From a moral standpoint, I would suggest a Moral Descartes’ Test (or a Moral Turing Test). Descartes argued that the sure proof of having a mind is the capacity to use true language. Turing later proposed a similar test involving the ability of a computer to pass as human via text communication. In the moral version, the judgment would concern moral agency: can the machine be as convincing as a human in its apparent possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed to prevent prejudice from affecting the judgment. The movie Blade Runner featured something similar: the Voight-Kampff test, which aimed to determine whether the subject was a replicant or a human. That test was based on differences between humans and replicants in regard to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A non-human moral agent might differ greatly from a human, and it should not be assumed that an agent must be human or that non-humans cannot be moral agents. The challenge is developing a test for moral agency. It would be interesting if humans could not pass it.
