Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Put simply, an autonomous system is one capable of operating independently of direct human control. Autonomy comes in degrees of independence and complexity, and it is this capacity for independent operation that distinguishes autonomous systems from those controlled externally.
Toys provide useful examples of this distinction. A wind-up mouse toy has some autonomy: once wound up and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy, as a puppeteer must control it.
Robots provide examples of more complex autonomous systems. Google’s driverless car is an example of an advanced autonomous machine: once programmed and deployed, it might be able to drive itself to its destination. A normal car is a non-autonomous system, as the driver controls it directly. Some machines allow both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target, after which an operator can take direct control.
Autonomy, at least in this context, is distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is a connection between autonomy and moral agency, as moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. For example, a puppet is not accountable for what the puppeteer makes it do. Likewise for remote-controlled drones used to assassinate people.
While autonomy is necessary for agency, it is not sufficient: all agents have some autonomy, but not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy but no agency. A modern robot drone following a pre-programmed flight plan has a degree of autonomy but lacks agency. If it collided with a plane, it would not be morally responsible. The usual reason such a machine would not be an agent is that it lacks the capacity to decide. Put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.
One obvious problem with basing agency on freedom (especially metaphysical free will) is that there is endless debate over the subject. There is also the epistemic problem of how one would know whether an entity had such freedom: free will seems epistemically indistinguishable from its absence.
As a practical matter, it is often just assumed that people have the freedom needed to be agents. Kant famously took this approach. What he saw as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality and so should be accepted on that basis.
While humans are willing (generally) to attribute freedom and agency to other humans, there are, it is claimed, good reasons not to attribute freedom and agency to autonomous machines, even those that might be as complex as (or more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans, they would do what they do because they are what they were made to be. This is in contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do it.
This distinction between humans and suitably complex machines seems a mere prejudice favoring organic machines over mechanical ones. If a human were in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. Conversely, if a robot were made to look and act just like a human, people would be inclined to grant it agency, at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. An excellent fictional example of this is Harlan Ellison’s “Demon with a Glass Hand.”
But it would not be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom). The German philosopher Leibniz held that what each person will do is pre-established by their inner nature. On the face of it, this seems to entail that there is no freedom: each person does what they do because of what they are, and they cannot do otherwise. Interestingly, Leibniz nonetheless holds that people are free. However, he does not accept the commonly held view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.
For Leibniz, being metaphysically unfree would involve being controlled from the outside, like a puppet controlled by a puppeteer or a vehicle operated by remote control. In contrast, freedom is acting from one’s values and character, what Leibniz and Taoists call “inner nature.” If a person acts from this inner nature rather than external coercion, so that the action results from character, then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this view, humans have agency because they have the required degree of freedom and autonomy.
If this model works for humans, it could apply to autonomous machines as well. To the degree that a machine operates in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.
An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then a machine could also be a moral agent.
From a moral standpoint, I would suggest a Moral Descartes’ Test (or a Moral Turing Test). Descartes argued that the sure proof of having a mind is the capacity to use true language. Turing later proposed a similar test involving the ability of a computer to pass as human via text communication. In the moral version, the test would be a judgment of moral agency: can the machine be as convincing as a human in its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed to prevent prejudice from affecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test, aimed at determining whether the subject was a replicant or a human. That test was based on the differences between humans and replicants in regard to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A non-human moral agent might differ greatly from a human, and it should not be assumed that an agent must be human or that non-humans cannot be moral agents. The challenge is developing a test for moral agency. It would be interesting if humans could not pass it.
The bookshelves of the world abound with self-help tomes. Many profess to help with emotional woes, such as sadness, and make vague promises about happiness. Philosophers have long been in the business of offering advice on how to be happy, or at least how not to be too sad.
It is July 16, 2214. I am at Popham Beach in what I still think of as Maine. I am standing in the sand, watching the waves strike the shore. Sand pipers run in the surf, looking for lunch. I have a two-hundred-year-old memory of another visit to this beach. In that memory, the water is cold on the skin and there is a mild ache in the left knee, a relic of a quadriceps tendon repair. Today there is no ache. What serves as my knee is a biomechanical system free of all aches and pains. I can, if I wish, feel the cold by adjusting my sensors. I do so, and what was once data about temperature becomes a feeling in what I still call my mind. I downgrade my vision to that of a human, then tweak it so it perfectly matches the imperfect eyesight of the memory. I do the same for my hearing and turn off my other sensors until I am, as far as I can tell, merely human. I walk into the water, enjoying the feeling of the cold. My companion asks me if I have ever been here before. I pause and consider this question. I have a memory from a man who was here in 2014. But I do not know if I am him or if I am but a child of his memories. But it is a lovely day…too lovely for metaphysics. I say “yes, long ago”, and wait patiently for the setting of the sun.
When I was young, I had my first out-of-body experience (OBE for short). While I did not know about them at the time, I later learned that my experience matched the usual description: I felt as if the center of my awareness and perception had left my body. It seemed as if I could perceive from this out-of-body location, albeit with greater vividness (retrospectively, it seemed like high definition). After that, I had OBEs from time to time, especially when I was under great stress, such as in graduate school.
The classic problem of the external world presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real for real? Early skeptics claimed that what seems real might be a dream. Descartes upgraded the problem with his evil demon, which used its powers to befuddle its victim. As technology progressed, philosophers presented brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem was made famous by
While the classic werewolf is a human with the ability to shift into the shape of a wolf, movies usually show a transformation into a wolf-human hybrid. The standard werewolf has a taste for human flesh, a vulnerability to silver, and a serious shedding problem. Some werewolves have impressive basketball skills, but that is not a standard werewolf ability.
On an episode of the Late Show, host Stephen Colbert and Jane Lynch had an interesting discussion of guardian angels. Lynch, who starred as a guardian angel in “Angel from Hell,” related a story of how her guardian angel held her in a protective embrace during a low point of her life. Colbert, ever the rational Catholic, noted that he believed in guardian angels despite knowing they do not exist. The question of the existence of guardian angels is yet another way to consider the classic problem of evil.
In my previous essay I introduced the idea of using essential properties to address the question of whether James Bond must be a white man. I ran through this rather quickly and want to expand on it here.