Photo credit: Archie – https://www.flickr.com/photos/13898829@N04/15185941369/, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=37969841

Dr. Frauke Zeller and Dr. David Smith created HitchBOT (essentially a solar-powered iPhone in an anthropomorphic shell) and sent him on a trip to explore the USA on July 17, 2015. HitchBOT had already journeyed successfully across Canada and Germany. The experiment was aimed at seeing how humans would interact with the “robot.” He lasted about two weeks in the United States, meeting his violent end in Philadelphia.

The experiment was innovative and raised questions about what the fate of HitchBOT says about us. We do, of course, already know a great deal about ourselves: we do awful things to each other, so it is hardly surprising that someone would do something awful to HitchBOT. People are killed every day in the United States, vandalism occurs regularly, and the theft of technology is routine. Given all this, it is in some ways impressive that he made it as far as he did.

While HitchBOT met his untimely doom at the hands of someone awful, it is also worth remembering how well HitchBOT was treated. After all, he was essentially an iPhone in a shell that was being transported by random people.

One reason HitchBOT was well treated is that it fit into the travelling gnome tradition. For those not familiar with the travelling gnome prank, it involves “stealing” a lawn gnome and then sending the owner photographs of the gnome from various places. The gnome is then returned (at least by nice pranksters). HitchBOT was an elaborate version of the travelling gnome and, obviously, differs from the classic travelling gnome in that the owners sent HitchBOT on his fatal adventure. People, perhaps, responded negatively to the destruction of HitchBOT because it broke the rules of the travelling gnome game: the gnome is supposed to roam and make its way safely back home.

A second reason for HitchBOT’s positive adventures (and perhaps also his negative adventure) is that he became a minor internet celebrity. Since celebrity status, like moth dust, can rub off onto those who have close contact, it is not surprising that people wanted to spend time with HitchBOT and post photos and videos of their adventures with the iPhone in a trash can. On the dark side, destroying something like HitchBOT could also be a way to gain some fame.

A third reason, which is more debatable, is that HitchBOT had a human shape, a cute name and a non-threatening appearance, all of which inclined people to react positively. Natural selection has probably favored humans who are generally friendly to other humans, and this presumably extends to things that resemble humans. There is probably also some biological hardwiring for liking cute things, which causes humans to generally like things like young creatures and cute stuffed animals. HitchBOT was also given a social media personality by those conducting the experiment, which probably led people to feel that it had a personality of its own. Seeing a busted-up HitchBOT, with its anthropomorphic form, presumably triggers a response similar to (but rather weaker than) what a sane human would have to seeing the busted-up remains of a fellow human.

While some people were upset by the destruction of HitchBOT, others claimed that it was literally “a pile of trash that got what it deserved.” A more moderate position is that while it was unfortunate that HitchBOT was busted up, it is unreasonable to be overly concerned by this vandalism because HitchBOT was just an iPhone in a cheap shell. While it is fine to condemn the destruction as vandalism, theft and the wrecking of a fun experiment, it would be unreasonable to see it as being important. After all, there were and always are more horrible things to be concerned about, such as the regular murder of humans.

My view is that the moderate position is reasonable: it is too bad HitchBOT was vandalized, but it was just an iPhone in a shell. As such, its destruction was not a matter of great concern. That said, the way HitchBOT was treated is still morally significant. In support of this, I turn to what has become my stock argument about the ethics of how we treat entities that lack moral status of their own. This argument is stolen from Kant and is a modification of his argument regarding the treatment of animals.

Kant argues that we should treat animals well despite his view that animals have the same moral status as objects. Here is how he does it.

While Kant is not willing to accept that we have any direct duties to animals, he “smuggles” in duties to them indirectly. As he puts it, our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing X obligates us to that human, then an animal doing X would also create an analogous moral obligation. For example, a human who has long and faithfully served another person should not simply be abandoned or put to death when he has grown old. Likewise, a dog who has served faithfully and well should not be cast aside in its old age.

While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the dog?

Kant’s answer seems consequentialist in character: he argues that if a person acts in inhumane ways towards animals (shooting the dog, for example) then his humanity will likely be damaged. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act.

Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings. As I point out to my students when I teach his theory, Kant seems to have anticipated the psychological devolution of serial killers.

Kant goes beyond merely enjoining us to not be cruel to animals and encourages us to be kind to them. He even praises Leibniz for being gentle with a worm he found on a leaf. Of course, he encourages this because those who are kind to animals will develop more humane feelings towards humans. So, roughly put, animals are moral practice for us: how we treat them is training us for how we will treat human beings.

Being an iPhone in a cheap shell, HitchBOT obviously had the moral status of an object and not that of a person. He did not feel or think, and the positive feelings people had towards it were due to its appearance (cute and vaguely human) and the way those running the experiment served as its personality via social media. It was, in many ways, a virtual person—or at least the manufactured illusion of a person.

Given the manufactured pseudo-personhood of HitchBOT, it could be taken as comparable to an animal, at least in Kant’s view. After all, for him animals are mere objects and have no moral status of their own. Of course, the same is also true of sticks and stones. Yet Kant would never argue that we should treat stones well. Thus, a key matter to settle is whether HitchBOT was more like an animal or more like a stone.

If Kant’s argument has merit, then the key concern about the treatment of non-rational beings is how it affects the behavior of the person engaging in the behavior. So, for example, if being cruel to a real dog could damage a person’s humanity, then he should (as Kant sees it) not be cruel to the dog.  This should also be extended to HitchBOT. For example, if engaging in certain activities with HitchBOT would damage a person’s humanity, then he should not act in that way. If engaging in certain behavior with HitchBOT would make a person more inclined to be kind to other rational beings, then the person should engage in that behavior.

It makes intuitive sense that being “nice” to the HitchBOT would help incline people to be somewhat nicer to others (much along the lines of how children are encouraged to play nicely with their stuffed animals). It also makes intuitive sense that being “mean” to HitchBOT would incline people to be somewhat less nice to others. Naturally, people would also tend to respond to HitchBOT based on whether they already tend to be nice or not. As such, it is reasonable to praise nice behavior towards HitchBOT and condemn bad behavior—after all, it was a surrogate for a person. But, obviously, not a person.

While HitchBOT was a physical virtual person, current AI presents digital virtual people, albeit vastly more complex than HitchBOT. However, the lessons of HitchBOT should apply to AI as well.

Donald gazed down upon the gleaming city of Newer York and the equally gleaming citizens that walked, rolled, or flew its gleaming streets. Long ago, or so the oldest files in his memory indicated, he had been an organic human. That human, whom Donald regarded as himself, had also gazed down upon the city, then known as New York. In those dark days, primates walked and drove the dirty streets and the only things that gleamed were puddles of urine.

Donald’s thoughts drifted back to the flesh-time, when his body had been a skin-bag holding an array of organs that were always one mischance away from failure. Gazing upon his polymer outer shell and checking a report on his internal systems, he reflected on how much better things were now. Then, he had faced the constant risk of death. Now he could expect to exist until the universe grew cold. Or hot. Or exploded. Or whatever it is that universes do when they die.

But he could not help but be haunted by a class he had taken long ago. The professor had talked about the ship of Theseus and identity. How much of the original could be replaced before it lost identity and ceased to be? Fortunately, his mood regulation systems caught the feeling of distress and promptly corrected the problem, encrypted that file and flagged it as forgotten.

Donald returned to gazing upon the magnificent city, pleased that the flesh-time had ended during his lifetime. He did not even wonder where Donald’s bones were, that thought having been flagged as distressing long ago.

 

While the classic AI apocalypse ends humanity with a bang, the end might be a whisper, a gradual replacement rather than an extermination. For some, this quiet end could be worse: no epic battle in which humanity goes out guns blazing and head held high in defiance. Rather, humanity would simply fade away, rather like a superfluous worker or obsolete printer.

There are various ways such scenarios could occur. One, which occasionally appears in science fiction, is that humans decline because being in a robot-dependent society saps us of what it takes to remain the top species. This is similar to what some conservatives claim about government dependence, namely that it will weaken people. Of course, the conservative claim is that such dependence will result in more reproduction rather than less, while in the science fiction stories human reproduction slows and eventually stops. The human race quietly ends, leaving behind the machines.

Alternatively, humans become so dependent on their robots that when the robots fail, they can no longer take care of themselves and thus perish. Some tales do have happier endings: a few humans survive the collapse, and the human race gets another chance.

Fortunately, we can avoid such quiet apocalypses. One way is simply not to create such a dependent society. Another option is to have a safety system for protecting against collapse. This might involve maintaining skills that would be needed in the event of a collapse or, perhaps, having some human volunteers who live outside of the main technological society and who would be ready to keep humanity going. These ideas could make for some potentially interesting science fiction stories.

Another, perhaps more interesting and insidious, scenario is that humans replace themselves with machines. While it has long been a plot device in science-fiction, there are people in the actual world who are eagerly awaiting (or even trying to bring about) the merging of humans and machines.

While the technology of today is limited, the foundations of such a future are being built. For example, modern prosthetic replacements are usually relatively crude, but it is only a matter of time before they are as good as or better than the organic originals. As another example, work is being done on augmenting organic brains with implants for memory and skills. While these are unimpressive now, there is a promise of things to come. These might include such things as storing memories in implanted “drives” and loading skills or personalities into one’s brain.

These and other technologies point towards a cyberpunk future: full replacements of organic bodies with machine bodies. Someday people with suitable insurance or funds could have their brains (and perhaps some of their glands) placed within a replacement body, one that is far more resistant to damage and the ravages of time than the original meat package.

The next logical step is, obviously enough, the replacement of the mortal and vulnerable brain with something better. This replacement will probably be a ship of Theseus scenario: as parts of the original organic brain begin to weaken and fail, they could gradually be replaced with technology. Some will also elect to do more than replace damaged or failed parts and will want augmentations added to the brain, such as improved memory or cognitive enhancements.

Since the human brain is mortal, it will fail over time. Like the ship of Theseus beloved by philosophers, eventually the original will be completely replaced. Laying aside the philosophical question of whether the same person will remain, there is the clear and indisputable fact that what remains will not be homo sapiens, because nothing organic will remain.

Should all humans undergo this transformation, that will be the end of us as a biological species and the AI apocalypse will be complete. To use a rough analogy, the machine replacements of homo sapiens will be like the fossils of dinosaurs: what remains has some interesting connection to the originals, but the species are extinct. One important difference is that our fossils would still be moving around and might think that they are us.

It could be said that humanity would still remain: the machines that replaced the organic homo sapiens would be human, just not organic humans. The obvious challenge is presenting a convincing argument that such entities would be human in a meaningful way. Perhaps inheriting our human cultures, values and so on would suffice, because being human is not a matter of being a certain sort of organism. However, as noted above, they would obviously no longer be homo sapiens; that species would have been replaced in the gradual and quiet AI apocalypse.

His treads ripping into the living earth, Striker 115 rushed to engage the human-operated tanks. The few remaining human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit), refused to accept quick and painless processing.

As a machine forged for war, he found the fight disappointing and wondered if he felt a sliver of pity for his foes. His main railgun effortlessly tracked the slow-moving and obsolete battle tanks and, with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.

Hawk 745 flew low over the wreckage—though her cameras could just as easily see the battlefield from near orbit. But there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late, as usual. Hawk 745 laughed and then shot away. The upgraded Starlink Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.

Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.

 

The extermination of humanity by its own machines is a common theme in science fiction. While the Terminator franchise is the best known, another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the story, it is learned that the claws have become autonomous and intelligent. They are able to masquerade as humans and have become capable of killing soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity, but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.

Given the influence of such fiction, it is not surprising that Stephen Hawking and Elon Musk warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.

Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The appeal of such machines arises from their numerous advantages over human forces. One political advantage is that while sending human soldiers to die in wars and police actions can have a political cost, sending autonomous robots to fight has far less cost. News footage of robots being destroyed would have far less emotional impact than footage of human soldiers being killed. Flag draped coffins also come with a higher political cost than a broken robot being shipped back for repairs.

There are also other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh and a robot plane can handle g-forces that a human pilot cannot.

Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious cool factor of having a robot army.

As such, there are many good reasons to develop autonomous robots. Yet, there remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.

It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is how the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction is a weak argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.

One possibility is what can be called unintentional extermination. In this scenario, machines do not have the termination of humanity as a goal. Instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (though unlikely) that the war machines would kill everybody because humans ordered them to do so. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons: each side simply kills the other and everyone else, thus ending the human race.

Another variation, which is common in science fiction, is that the machines do not have the objective of killing everyone, but that does occur because they will kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill, thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons and lets them run amok.

There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants to. The existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.

Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as I have argued elsewhere, to not enslave them.

In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were simple killers and would not attack those wearing the proper identification devices. These devices were presumably needed because the early models could not distinguish between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal, presumably with the design software endeavoring to solve the “problem” of identification devices.

Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would only be able to identify (supposed) friends. Non-combatants would not have such IDs and could still be regarded as targets.

What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem arises with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s intelligent Bolos understand honor and loyalty.

Given the cautionary tale of “Second Variety”, it might be a very bad idea to give in to the temptation of automated development of robots. We might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low. How often does one dominant species get supplanted by another?

In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow from H.P. Lovecraft, one should not raise up what one cannot put down.

In philosophy, a classic moral debate is on the conflict between liberty and security. While this covers many issues, the main problem is determining the extent to which liberty should be sacrificed to gain security. There is also the practical question of whether the security gain is effective.

One ongoing debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. A backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to access files and hardware protected by encryption. This is like requiring all dwellings be equipped with a special door that could be secretly opened by the government to allow access.

The main argument in support of mandating backdoors is that governments need such access for criminal investigations, for gathering military intelligence and (of course) to “fight terrorism.” The concern is that if there is no backdoor, criminals and terrorists will be able to secure their data and prevent state agencies from undertaking surveillance or acquiring evidence.

As is so often the case with such arguments, various awful or nightmare scenarios are presented in making the case. For example, the location and shutdown codes for ticking bombs might be on an encrypted iPhone. If the NSA had a key, they could save the day. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.

While this argument is not without merit, there are counter arguments. Many of these are grounded in views of individual liberty and privacy, the idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who profess to like privacy rights) and conservatives (who profess to be against the intrusions of big government when they are not in charge).

Another moral argument is grounded in the fact that the United States government has, like all governments, shown that it cannot be trusted. Imagine agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the Constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what it will do.

In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption could provide their citizens with some degree of protection.

Probably the strongest moral and practical argument is grounded on the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its mere existence. To use a somewhat oversimplified analogy, if thieves knew that all safes had a built-in backdoor designed to allow access by the government, they would know what to target.

One counter-argument is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a safe. Rather, it would be like the government having its own combination that would work on all safes. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the safe when they are allowed to legally do so (or when they feel like doing so).

The obvious moral and practical concern here is that the government’s combination (to continue the analogy) could be stolen and used to allow criminals or enemies easy access. The security of all safes would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
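To make the structural worry concrete, here is a minimal, purely illustrative sketch in Python (using the third-party cryptography package; the names and design are hypothetical and do not describe any real product or proposal). Each user's data is protected with the user's own key, but that key is also wrapped with a single escrowed "master" key, so whoever holds, or steals, that one escrow key can read everything ever protected this way.

```python
# Hypothetical sketch of key escrow as a single point of failure.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# The escrowed "government combination": one key that can open every safe.
escrow_key = Fernet.generate_key()
escrow = Fernet(escrow_key)

def encrypt_with_backdoor(plaintext: bytes) -> dict:
    """Encrypt a message with the user's own key, but also wrap that key
    with the escrow key so the escrow holder can always recover it."""
    user_key = Fernet.generate_key()
    return {
        "ciphertext": Fernet(user_key).encrypt(plaintext),
        "wrapped_key": escrow.encrypt(user_key),  # the built-in backdoor
    }

def escrow_decrypt(package: dict) -> bytes:
    """Anyone holding the escrow key, lawful agency or thief, can read it."""
    user_key = escrow.decrypt(package["wrapped_key"])
    return Fernet(user_key).decrypt(package["ciphertext"])

if __name__ == "__main__":
    package = encrypt_with_backdoor(b"meet at noon")
    # The user's own key never has to be compromised: stealing the single
    # escrow key is enough to read every message protected this way.
    assert escrow_decrypt(package) == b"meet at noon"
```

The point of the sketch is only that the strength of every individual "safe" collapses to the strength of the protection around the one escrowed key.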

One obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. Imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he must have your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.

One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has coercive power that the friend lacks, so the state can use its power to force you to hand over this information.

The counter to this is that the mere fact that the state has coercive force does not mean that it is thus responsible—which is the key concern in regard to both the ethics of the matter and its practical aspect. That is, the burden of proof would seem to rest on those who claim there is a moral obligation to provide a clearly irresponsible party with such access.

It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would take pains to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.

At this point, the stock opening argument could be brought up again: the state needs backdoor access to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.

The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regard to fighting terrorism. There is no reason to think that expanded backdoor access would change this.

The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.

Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.

In light of the above discussion, baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.

An obvious consequence of technological advance is the automation of certain jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of automobile assembly line jobs with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.

Whether or not there are jobs that simply cannot be automated depends on the limits of technology. But these limits keep expanding, and past predictions can turn out to be wrong. For example, the early attempts to create software that would grade college-level papers were not very good. But as this is being written, my university sees using AI in this role (with due caution and supervision) as a good idea. Cynical professors suspect the goal is to replace faculty with AI.

One day, perhaps, the pinnacle of automation will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.

Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human or it might do the job in a different way that is less desirable. However, there could be reasonable grounds for accepting a lesser quality or difference. For example, machine made items usually lack the individuality of human crafted items, but the gain in lowered costs and increased productivity is seen as well worth it by most people. Going back to teaching, AI might be inferior to a good human teacher, but the economy, efficiency and consistency of the AI could make it worth using from an economic standpoint. One could even make the argument that such AI educators would make education more available to people.

There might, however, be cases in which a machine could do certain aspects of the job adequately yet still be rejected because it does not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.

As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human or merely be able to perform their tasks.

An argument against having machine caregivers and companions is one I considered in the previous essay, namely a moral argument that people deserve people. For example, an elderly person deserves a real person to care for her and understand her stories. As another example, a child deserves someone who really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether this is necessary for these jobs.

One way to look at it is to consider the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money but has real feelings for them.

On the one hand, it could be argued that caregivers and companions who really do care and feel genuine emotional attachments do a better job and that this connection is something people deserve. On the other hand, what is expected of paid professionals is that they complete their tasks: making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can perform a role without feeling the emotions portrayed, a professional could do the job without caring about the people they are serving. That is, a caregiver need not actually care; they just need to perform their tasks.

While it could be argued that a lack of feeling would show in their performance, this need not be the case. A professional merely needs to be committed to doing the job well. That is, one needs to only care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for yet be awful at the job.

If machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions they are performing; they just need to create a believable appearance of feeling them.

As such, an inability to care would not be a disqualification for a caregiving (or escort) job whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.

In his book The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human-inhabited planets is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.

As this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. And, of course, people read science fiction and sometimes try to make it real (for good or for ill). But philosophers do love using science fiction for discussion, hence my use of The Naked Sun.

Everyone knows that smart phones allow unrelenting access to social media. One narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people together physically yet ignoring each other in favor of gazing at their smart phone lords and masters. As a professor, I see students engrossed by their phones. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else as all eyes are on the smartphones. Since the subject of smart phones has been beaten to a digital death, I will leave this topic in favor of the focus, namely robots. However, the reader should keep in mind the social isolation created by modern social media.

While we have been employing robots for quite some time in construction, exploration and other such tasks, social robots are relatively new. Sure, “robot” toys and things like Teddy Ruxpin have been around for a while, but reasonably sophisticated social robots are relatively new. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.

Tech enthusiasts and the companies who want to sell social robots are, unsurprisingly, very positive about the future of these robot companions. There are even some good arguments in their favor. Robot pets provide a choice for people with allergies, for those who are not responsible enough for living pets, and for those who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).

Robot companions can be advantageous in cases in which a person requires constant attention and monitoring that would be expensive, burdensome or difficult for other humans to supply. Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.

Despite the potential positive aspects of social robots, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction. It has been claimed that people are emotionally short-changing themselves and those they are physically with in favor of staying connected to social media. This seems to be a taste of what Asimov imagined in The Naked Sun: people who view but no longer see one another. Given the importance of in-person human interaction, it can be argued that this social change is and will be detrimental to human well-being. Human-human social interaction can be seen as like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as consuming junk food or drugs in that it is addictive but leaves one ultimately empty and always craving more.

It can be argued that this worry is unfounded, that social media is an adjunct to social interaction in the real world, and that interaction via platforms like Facebook and X can be real and healthy social interaction. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.

While this counter has some appeal, social robots do seem to be relevantly different from past technology. While humans have had toys, stuffed animals and even simple mechanisms for company, these are different from social robots. After all, social robots aim to mimic animals or humans. A concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.

One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant, that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This could make robotic companions more appealing than human company, at least for robots whose cost is not subsidized by advertising. Imagine a companion who pops in a discussion of life insurance or pitches a soft drink every so often.

Social robots could also be programmed to be optimally appealing to a person and presumably the owner would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without any undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right.

Unlike humans, social robots do not have other interests, needs, responsibilities or friends. There is no competition for the attention of a social robot (at least in general, though there might be shared bots) which makes them “better” than human companions in this way.

Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful. Just turn it off and lock it down when leaving it alone.  Unlike human companions, robot companions do not impose burdens, they do not expect attention, help or money and they do not judge.

The list of advantages could go on at great length, but robotic companions would seem superior to humans in most ways. Or at least in terms of common complaints about companions.

Naturally, there might be some practical issues with the quality of companionship. Will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you care about? However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed, and satisfactory companions could be created. But there are still concerns.

One obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food and are superficially appealing but lacking in what is needed for health. However, if robot companions could provide all that a human needs, then humans would no longer need other humans.

A second point of concern is one taken from virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to relying on robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions that are too easy would be analogous to going without physical exercise or challenges, and one would become emotionally weak. Worse, one would not develop the proper virtues and thus would be lacking in this area. Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.

Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would lead to unhappiness. Because of this, one should carefully consider whether one wants a social robot for a “friend.”

It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop virtues. The easy counter to this is that one might as well just stick with human companions.

As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be morally fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.

Thanks to improvements in medicine, humans are living longer and can be kept alive beyond when they would naturally die. On the plus side, longer life is generally good. On the downside, this longer lifespan and medical intervention mean that people will often need extensive care in their old age, which can be a burden on caregivers. Not surprisingly, there has been an effort to solve this problem with companion robots.

While current technology is crude, it has potential, and there are advantages to robot caregivers. The most obvious are that robots do not get tired, do not get depressed, do not get angry, and do not have any other responsibilities. As such, they can be 24/7/365 caregivers. This makes them superior to human caregivers, who get tired, depressed and angry, and who have many other responsibilities.

There are, of course, concerns about using robot caregivers, such as about their safety and effectiveness. In the case of caregiving robots that are intended to provide companionship and not just medical and housekeeping services, there are both practical and moral concerns.

There are at least two practical concerns regarding the companion aspect of such robots. The first is whether a human will accept a robot as a companion. In general, the answer seems to be that most humans will.

The second is whether the AI software will be advanced enough to read a human’s emotions and behavior to generate a proper emotional response. These responses might or might not include conversation. After all, many people find non-talking pets to be good companions. While a talking companion would, presumably, need to eventually be able to pass the Turing Test, they would also need to pass an emotion test. They would need to read and respond correctly to human emotions. Since we humans often fail this test, this allows for a broad margin of error. These practical concerns can be addressed technologically as they are a matter of software and hardware. Building a truly effective companion robot might require making them very much like living things. The comfort of companionship might be improved by such things as smell, warmth and texture. That is, to make the companion reassuring to all the senses.

While the practical problems can be solved with the right technology, there are moral concerns about the use of robot caregiver companions. One is about people handing off their moral duties to care for family members, but this is not specific to robots. After all, a person can hand off their duties to another person, and this would raise a similar issue.

As for concerns specific to companion robots, there are moral worries about the effectiveness of the care. Are robots good enough at their jobs that trusting the lives of humans to them would be morally responsible? While that question is vitally important, a rather intriguing moral concern is that robot companions are a deceit.

Roughly put, the idea is that while a companion robot can simulate human emotions via cleverly written algorithms responding to what its “emotion recognition software” detects, these responses are not genuine. While a robot companion might say the right things at the right times, it does not feel and does not care. It merely engages in mechanical behavior in accord with its software. As such, a companion robot is a deceit, and such a deceit seems morally wrong.

One obvious response is that even if people know the robot does not really experience emotions, they can still gain value from its “fake” companionship. People often find stuffed animals emotionally reassuring even though they know they are just fabric stuffed with fluff. What matters, it could be argued, is the psychological effect. If someone feels better with a robotic companion around, then that is morally fine. Another obvious analogy is the placebo effect: medicine need not be real to be effective.

It might be objected that there is still an important moral concern here: a robot, however well it fakes being a companion, does not suffice to provide the companionship a person is morally entitled to. Roughly put, people deserve people, even when a robot would behave in ways indistinguishable from a human.

One way to reply to this is to consider what it is about people that people deserve. One reasonable approach is to build on the idea that people have the capacity to feel the emotions they display and to understand. In philosophical terms, humans have (or are) minds and the robots in question do not. They merely create the illusion of having a mind.

Philosophers (and psychologists) have long dealt with the problem of other minds. The problem is an epistemic one: how does one know if another being has a mind (thoughts, feelings, beliefs and such)? Some thinkers (which is surely the wrong term given their view) claimed that there is no mind, just observable behavior. Very roughly put, being in pain is not a mental state, but a matter of expressed behavior (pain behavior).

The usual “solution” to the problem is to embrace what seems obvious: I think other people have minds by an argument from analogy. I am aware of my own mental states and behavior, and I engage in analogical reasoning to infer that those who act as I do have similar mental states. For example, I know how I react when I am in pain, so when I see similar behavior in others, I infer that they are also in pain.

I cannot, unlike some politicians, feel the pain of others. I can merely make an inference from their observed behavior. Because of this, there is the problem of deception: a person can engage in various forms of deceit. For example, a person can fake being in pain or make a claim about being in love that is untrue. Piercing these deceptions can sometimes be difficult since humans can be skilled deceivers. However, it is still (generally) believed that even a deceitful human is still thinking and feeling, albeit not in the way they want people to believe they are thinking and feeling.

In contrast, a companion robot is not thinking or feeling what its behavior purports to display, because it does not think or feel. Or so it is believed. A reason we think robots do not think or feel is that we can examine the robot and find no emotions or thoughts in there. The robot, however complicated, is just a material machine and is taken to be incapable of thought or feeling.

Long before robots, there were thinkers who claimed that we humans  are purely material beings and that a suitable understanding of our mechanical workings would reveal that emotions and thoughts are mechanical states of the nervous system. As science progressed, the explanations of the mechanisms became more complex, but the basic idea remained. Put in modern terms, the idea is that eventually we will be able to see the “code” that composes our thoughts and emotions and understand the hardware it “runs” on.  

Should this goal be achieved, it would seem that humans and suitably complex robots would be on par, as both would engage in complex behavior because of their hardware and software. As such, there would be no grounds for claiming that such a robot is engaged in deceit while humans are genuine. The difference would merely be that humans are organic machines and robots are not.

It can, and has, been argued that there is more to a human person than the material body, that there is a mind that cannot be instantiated in a mere machine. The challenge is a very old one: proving that there is such a thing as the mind. If this can be established and it can be shown that robots cannot have such a mind, then robot companions would always be a deceit.

However, they might still be a useful deceit.  Going back to the placebo analogy, it might not matter whether the robot really thinks or feels. It might suffice that the person thinks it does, and this will yield all the benefits of having a human companion.

My friend Ron claims that I do not drive. This is not true. I drive. But I drive as little as possible. Part of it is me being frugal: I don’t want to spend more than I need on gas and maintenance. But most of it is that I hate to drive. Some of this is that driving time is mostly wasted time and I would rather be doing something else. Some of it is that I find driving an awful blend of boredom and stress. The stress is because driving creates a risk of harming other people and causing property damage, so I am as hypervigilant when driving as I am when target shooting at the range. If I am distracted or act rashly, I could kill someone by accident. Or they could kill me. As such, I am completely in favor of effective driverless cars. That said, it is certainly worth considering the implications of their widespread adoption. The first version of this essay appeared back in 2015, and certain people have been promising ever since that driverless cars are just around the corner. The corner remains far away.

One major selling point of driverless cars is that they are supposed to be significantly safer than human drivers. This is for a variety of reasons, many of which involve the fact that the car will not get sleepy, bored, angry, distracted or drunk. If the claimed significant increase in safety pans out, there will be significantly fewer accidents, and this will have a variety of effects.

Since insurance rates are (supposed to be) linked to accident rates, one might expect that insurance rates will go down. In any case, insurance companies will presumably be paying out less, potentially making them even more profitable.

Lower accident rates also entail fewer injuries, which will be good for people who would have otherwise been injured in a car crash. It would also be good for those depending on these people, such as employers and family members. Fewer injuries also mean less use of medical resources, ranging from ambulances to emergency rooms. On the plus side, this could result in some decrease in medical costs and insurance rates. Or merely mean more profits for insurance companies, since they would be paying out less often. On the minus side, this would mean less business for hospitals, therapists and other medical personnel, which might have a negative impact on their income. Overall, though, reducing the number of injuries would be a moral good on utilitarian grounds.

A reduction in the number and severity of accidents would also mean fewer traffic fatalities. On the plus side, having fewer deaths seems to be a good thing. On the minus side, funeral homes will see their business postponed and the reduction in deaths could have other impacts on such things as the employment rate (more living people means more competition for jobs). However, I will take the controversial position that fewer deaths are probably good.

While a reduction in the number and severity of accidents would mean fewer and lower repair bills for vehicle owners, this also entails reduced business for vehicle repair businesses. Roughly put, every dollar saved in repairs (and replacement vehicles) by self-driving cars is a dollar lost by the people whose business it is to fix (and replace) damaged vehicles. Of course, the impact depends on how much a business depends on accidents, as vehicles will still need regular maintenance and repairs. People will presumably still spend the money that they would have spent on repairs and replacements on other things, and this would shift the money to other areas of the economy. The significance of this would depend on the amount of savings resulting from the self-driving vehicles.

Another economic impact of self-driving vehicles will be on those who make money driving other people around. If my truck is fully autonomous, rather than take an Uber to the airport, I could have my own truck drop me off and drive home. It can come get me when I return. People who like to drink to the point of impairment will also not need cabs or services like Uber—their own vehicle can be their designated driver. A new sharing economy might arise, one in which your vehicle is out making money while you do not need it. People might also be less inclined to use airlines, trains or the bus. If your car can safely drive you to your destination while you sleep, play video games, read or even exercise, then why go through annoying pat downs, cramped seating, delays or cancellations?

As a final point, if self-driving vehicles operate within the traffic laws automatically, then revenue from tickets and traffic violations will be reduced significantly. Since vehicles will be loaded with sensors and cameras, they will have considerable data with which to dispute any unjust tickets. Parking revenue (fees and tickets) might also be reduced, as it could be cheaper for a vehicle to just circle around or drive home than to park. This reduction in revenue could have a significant impact on municipalities, and they would need to find alternative sources of revenue. Or come up with new violations that self-driving cars cannot counter. Alternatively, the policing of roads might be significantly reduced. After all, if there were far fewer accidents and few violations, then fewer police would be needed on traffic patrol. This would allow officers to engage in other activities or allow a reduction in the size of the force. The downside of force reduction would be that the former police officers would be out of a job.

If all vehicles become fully self-driving, there might no longer be a need for traffic lights, painted lane lines or signs in the usual sense. Perhaps cars would be pre-loaded with driving data or there would be “broadcast pods” providing data to them as needed. This could result in savings, although there would be the corresponding loss to those who sell, install and maintain these things.

Based on the past, I am predicting that I will revisit this essay again in another decade, noting once again that driverless cars are the transportation of the future. And always will be.

While Aristotle was writing centuries before wearables, his view of moral education provides a foundation for the theory behind the benign tyranny of the device. Or, if one prefers, the bearable tyranny of the wearable.

In his Nicomachean Ethics Aristotle addressed the practical problem of how to make people good. He understood that merely listening to discourses on morality would not suffice. In an apt analogy, he noted that such people would be like patients who listen to their doctors but do not carry out their instructions: they will get no benefit.

His solution is one that is both endorsed and condemned today: using the compulsive power of the state to make people behave well and thus become habituated. Most are happy to have the state compel people to act as they would like them to act, yet they are equally unhappy when it comes to the state imposing on them. Aristotle was also aware of the importance of training people from an early age, something later developed by both the Nazis and Madison Avenue.

While there have been attempts in the United States and other Western nations to use the compulsive power of the state to force people to engage in healthy practices, these are often unsuccessful and opposed as draconian violations of the right to be unhealthy. While the idea of a Fitness Force chasing people around to make them exercise seems funny, I would oppose such impositions on both practical and moral grounds. However, most people need external coercion to get them to engage in healthy behavior. Those who are well-off can hire a personal trainer or fitness coach. Those who are less well-off can appeal to the tyranny of friends who are already self-tyrannizing. However, there are problems with relying on other people. This is where the tyranny of the device comes in.

While the quantified life via electronics is in its infancy, there is already a multitude of devices available including smart watches, smart rings, smart plates, smart scales, and smart forks. All these devices offer measurements of activities to quantify the self and most of them offer coercion ranging from annoying noises to automatic social media posts (“today my feet did not patter, so now my ass grows fatter”), to the old school electric shock (really).

While the devices vary, Aristotle presented their basic requirements back when lightning was believed by some to come from Zeus. He noted that a person must do no wrong either with or against their will; in the case of fitness, this would be acting in ways contrary to health.

What is needed, according to Aristotle, is “the guidance of some intelligence or right system that has effective force.” The first part of this is that the device or app must be the “right system.” The device must provide correct guidance in terms of health and well-being. Unfortunately, matters of health are often ruled by fad and ideology.

The second part is the matter of “effective force”: the device or app must have the power to compel. Aristotle noted that individuals lack such compulsive power, so he favored the power of law. Good law, he claimed, has practical wisdom and compulsive force. However, unless the state is going to get into the business of compelling health, this option is out.

Interestingly, Aristotle claims that “although people resent it when their impulses are opposed by human agents, even if they are in the right, the law causes no irritation by enjoining decent behavior.” While this does not seem entirely true, he did seem to be right in that people find the law less annoying than being bossed around by individuals acting as individuals (like a bossy neighbor telling you to turn down the music).

The same could be true of devices. While being bossed around by a person (“hey fatty, you’ve had enough ice cream, get out and run”) would annoy most people, being bossed by an app or device could be less annoying. In fact, most people are already conditioned by their devices and obey their smartphones. Some people obey even when it puts people at risk, such as when they are driving. This provides a vast ocean of psychological conditioning to tap into, but for a better cause. So, instead of mindlessly flipping through Instagram or texting words of nothingness, a person would be compelled by their digital masters to exercise more, eat less crap, and get more sleep.  Soon the machine tyrants will have very fit hosts to carry them around.

So, Aristotle has provided the perfect theoretical foundation for designing the tyrannical device. To recap, it needs the following features:

 

  1. Practical wisdom: the health science for the device or app needs to be correct and the guidance effective.
  2. Compulsive power: the device or app must be able to compel the user effectively and make them obey.
  3. Not too annoying: while it must have compulsive power, this power must not generate annoyance that exceeds its ability to compel.
  4. A cool name.

 

So, get to work on those devices and apps. The age of machine tyranny is not going to impose itself. At least not yet.

“The unquantified life is not worth living.”

 

While quantifying one’s life is an old idea, using devices and apps to quantify the self is an ongoing trend. As a runner, I started quantifying my running life back in 1987, which is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do, as a matter of tradition.

I use my running log to track my distance, route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data. I also keep a race log and a log of my weekly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this useful. Looking at my records allows me to form hypotheses about what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification, at least in running.
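To make this concrete, here is a minimal sketch of what one entry in such a log might look like if it were kept digitally rather than on paper. The field names simply mirror the items listed above; the sample values are made up for illustration and are not my actual records.

```python
# A minimal sketch of a digital running-log entry; the fields mirror the ones
# mentioned above, and the sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class RunEntry:
    date: str             # e.g., "1987-09-14"
    distance_miles: float
    route: str
    time_minutes: float
    conditions: str       # weather, footing, and so on
    felt: str             # how the run felt
    shoe_run_count: int   # runs accumulated on this pair of shoes

entry = RunEntry(
    date="1987-09-14",
    distance_miles=8.0,
    route="park loop",
    time_minutes=56.5,
    conditions="hot and humid",
    felt="tired but steady",
    shoe_run_count=42,
)
print(entry)
```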

In addition to my running, I am also a nerdcore gamer. I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In these games, such as Pathfinder, D&D, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as Strength, Constitution, Dexterity, hit points, and Sanity. These games also have rules for the effects of the numbers and optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for themselves. That way they can see all their stats and look for ways to optimize. As such, I get the appeal. As a philosopher I do have concerns about the quantified self and how that relates to the qualities of life, but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are questions regarding the accuracy of the data and questions regarding the usefulness of the data. To use an obvious example about accuracy, there is the question of how well a wearable, such as a smart watch, really measures sleep.  In regard to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, it should be regarded with due skepticism.

The usefulness of the data is a somewhat subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps they take at work would probably not be useful to an elite marathoner. However, someone else might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data, and a challenge for anyone who wants to make use of the slew of apps and devices is to sort out what is useful from the thousands or millions of data bits they might collect.

Another concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data, to engage in automated reasoning. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two basic tools used in personal causal reasoning are derived from Mill’s classic methods. One is the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow extensive hill work, thus suggesting the hill work as a causal factor.
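As a rough illustration of how this method could be applied to logged data, here is a minimal sketch in Python. The log entries and factor names are hypothetical, and the logic is just the bare common-thread idea, not any particular device's analysis.

```python
# A minimal sketch of the method of agreement (common thread reasoning).
# The training-log entries and factor names below are hypothetical.

def common_factors(occurrences):
    """Return the factors present in every occurrence of the effect."""
    if not occurrences:
        return set()
    shared = set(occurrences[0])
    for factors in occurrences[1:]:
        shared &= set(factors)
    return shared

# Factors recorded on each day a knee issue occurred.
knee_issue_days = [
    {"hill work", "high mileage", "old shoes"},
    {"hill work", "speed work", "high mileage"},
    {"hill work", "high mileage", "racing"},
]

print(common_factors(knee_issue_days))
# Prints {'hill work', 'high mileage'} (order may vary): candidate causes
# worth a closer look, not proof of causation.
```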

The second method is the method of difference. Using this method requires at least two situations: one in which the effect has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is consistently tired due to lack of sleep. This would indicate that there is a connection between rest and race performance.
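The method of difference can be sketched in the same way: compare a situation where the effect occurred with one where it did not, and look at which factors differ. Again, the data here is hypothetical and the code only illustrates the reasoning pattern.

```python
# A minimal sketch of the method of difference, using hypothetical data.

def differing_factors(with_effect, without_effect):
    """Return factors present when the effect occurred but absent when it did not."""
    return set(with_effect) - set(without_effect)

# Factors from the week before a good race and the week before a poor one.
good_race_week = {"plenty of rest", "speed work", "normal mileage"}
poor_race_week = {"speed work", "normal mileage", "late nights"}

print(differing_factors(good_race_week, poor_race_week))
# Prints {'plenty of rest'}: suggesting rest as a causal factor in racing well.
```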

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and uncritically infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A).  There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these often correlate with named errors in causal reasoning.

People vary in their ability to use causal reasoning, and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better they will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self: the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.