
Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had already journeyed successfully across Canada and through Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his violent end in Philadelphia.
The experiment was innovative and raised questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to HitchBOT. People are killed every day in the United States, vandalism occurs regularly, and the theft of technology is routine. Given all this, HitchBOT’s bad end was predictable; in some ways, it was impressive that he made it as far as he did.
While HitchBOT met his untimely doom at the hands of someone awful, it is also worth remembering how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported by random people.
One reason HitchBOT was well treated is that he fit into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT was an elaborate version of the travelling gnome and, obviously, differs from the classic version in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game: the gnome is supposed to roam and make its way safely back home.
A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT could also be a way to gain some fame.
A third reason, which is more debatable, is that HitchBOT had a human shape, a cute name and a non-threatening appearance, all of which inclined people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some biological hardwiring for liking cute things, which causes humans to generally like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably led people to feel that it had a personality of its own. Seeing a busted-up HitchBOT, which has an anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.
While some people were upset by the destruction of HitchBOT, others claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this vandalism because HitchBOT was just an iPhone in a cheap shell. While it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it was unreasonable to see it as being important. After all, there were and always are more horrible things to be concerned about, such as the regular murder of humans.
My view is that the moderate position is reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction was not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument about the ethics of treating entities that lack moral status of their own. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.
Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it.
While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X obligates us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.
While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the dog?
Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.
Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings. As I point out to my students when I teach his theory, Kant seems to have anticipated the psychological devolution of serial killers.
Kant goes beyond merely enjoining us not to be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle with a worm he found on a leaf. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are moral practice for us: how we treat them trains us for how we will treat human beings.
Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. It did not feel or think, and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.
Given the manufactured pseudo-personhood of HitchBOT, it could be taken as comparable to an animal, at least in Kant’s view. After all, for him animals are mere objects and have no moral status of their own. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone.
If Kant’s argument has merit, then the key concern about the treatment of non-rational beings is how it affects the person engaging in that treatment. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog. This should also be extended to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.
It makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.
While HitchBOT presented a physical virtual person, current AI presents digital virtual people, albeit ones vastly more complex than HitchBOT. The lessons of HitchBOT should apply to AI as well.

Donald gazed down upon the gleaming city of Newer York and the equally gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.
His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.
In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.
An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human-inhabited planets is that it has a strictly regulated population of 20,000 humans, with 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.
Thanks to improvements in medicine, humans are living longer and can be kept alive beyond the point at which they would naturally die. On the plus side, longer life is generally good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age, which can be a burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.
My friend Ron claims that I do not drive. This is not true. I drive. But I drive as little as possible. Part of it is my being frugal: I don’t want to spend more than I need to on gas and maintenance. But most of it is that I hate to drive. Some of this is that driving time is mostly wasted time and I would rather be doing something else. Some of it is that I find driving an awful blend of boredom and stress. The stress comes from the fact that driving creates a risk of harming other people and causing property damage, so I am as hypervigilant when driving as I am when target shooting at the range. If I am distracted or act rashly, I could kill someone by accident. Or they could kill me. As such, I am completely in favor of effective driverless cars. That said, it is certainly worth considering the implications of their widespread adoption. The first version of this essay appeared back in 2015, and ever since, certain people have been promising that driverless cars are just around the corner. The corner remains far away.
While Aristotle was writing centuries before wearables, his view of moral education provides a foundation for the theory behind the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.
“The unquantified life is not worth living.”