“The unquantified life is not worth living.”
While quantifying one’s life is an old idea, using devices and apps to quantify the self is an ongoing trend. As a runner, I started quantifying my running life back in 1987, which is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do, as a matter of tradition.
I use my running log to track my distance, route, time, conditions, how I felt during the run, the number of times I have run in the shoes and other data. I also keep a race log and a log of my weekly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this useful. Looking at my records allows me to form hypotheses about what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification, at least in running.
In addition to my running, I am also a nerdcore gamer. I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In these games, such as Pathfinder, D&D, Call of Cthulhu and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as Strength, Constitution, Dexterity, hit points, and Sanity. These games also have rules for the effects of the numbers and optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for themselves. That way they can see all their stats and look for ways to optimize. As such, I get the appeal. As a philosopher I do have concerns about the quantified self and how it relates to the qualities of life, but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.
Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are questions regarding the accuracy of the data and questions regarding the usefulness of the data. To use an obvious example about accuracy, there is the question of how well a wearable, such as a smart watch, really measures sleep. In regard to usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.
The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, it should be regarded with due skepticism.
The usefulness of the data is a somewhat subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps they take at work would probably not be useful to an elite marathoner. However, someone else might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data, and a challenge for anyone who wants to make use of the slew of apps and devices is to sort out what is useful from the thousands or millions of data bits they might collect.
Another concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data, to engage in automated reasoning. In any case, the user will need to engage in some form of reasoning to use the data.
In philosophy, the two basic tools used in personal causal reasoning are derived from Mill’s classic methods. One is the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow extensive hill work, thus suggesting the hill work as a causal factor.
The second method is the method of difference. Using this method requires at least two situations: one in which the effect has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is consistently tired due to lack of sleep. This would indicate that there is a connection between rest and race performance.
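The two methods above can be sketched as simple set operations. The sketch below is purely illustrative: the factor names and log entries are invented examples in the spirit of the running cases discussed, not real data.

```python
def method_of_agreement(occurrences):
    """Given several occurrences of an effect, each described by the set of
    factors present at the time, return the factors common to every
    occurrence -- the candidates for a common-thread cause."""
    common = set(occurrences[0])
    for factors in occurrences[1:]:
        common &= set(factors)
    return common


def method_of_difference(with_effect, without_effect):
    """Given the factors present when the effect occurred and the factors
    present when it did not, return the factors unique to the effect case --
    the candidates for the difference that made the difference."""
    return set(with_effect) - set(without_effect)


# Hypothetical log entries: factors present during each knee-pain episode.
knee_pain_runs = [
    {"hill work", "high mileage", "new shoes"},
    {"hill work", "racing", "high mileage"},
    {"hill work", "high mileage", "cold weather"},
]
print(method_of_agreement(knee_pain_runs))
# Factors present in every episode, e.g. hill work and high mileage.

# Hypothetical race weeks: one good race, one poor race.
good_race_week = {"rest", "speed work", "carb loading"}
poor_race_week = {"speed work", "carb loading", "poor sleep"}
print(method_of_difference(good_race_week, poor_race_week))
# The factor unique to the good week, e.g. rest.
```

Of course, as the fallacies discussed below illustrate, a surviving factor is only a candidate cause: the sets cannot rule out coincidence, a common cause, or reversed causation.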
There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she walked more stairs during the week before doing well at a 5K and uncritically infer that walking more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.
Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A). There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation” and these often correlate with named errors in causal reasoning.
People vary in their ability to use causal reasoning, and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better they will be able to use the data.
The takeaway, then, is that there are at least three important considerations regarding the data of the quantified self: the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

According to my iron rule of technology, any technology that can be misused will be misused. Drones are no exception. While law-abiding citizens and law-writing corporations have been finding legal uses for drones, enterprising folks have been finding other uses. These include deploying drones as peeping toms and using them to transport drugs. The future will see even more criminals (inside and outside governments) using drones for their crimes.
Small. Silent. Deadly. The perfect assassin or security system for the budget conscious. Send a few after your enemy. Have a few lurking about in security areas. Make your enemies afraid. Why drop a bundle on a bug, when you can have a Tarantula?
While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines, machines that do things that would be misdeeds or crimes if committed by a human.
Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Keeping things simple, an autonomous system is capable of operating independent of direct human control. Autonomy comes in degrees of independence and complexity. It is the capacity for independent operation that distinguishes autonomous systems from those controlled externally.
Human flesh is weak, and metal is strong. So, it is no surprise that military science fiction includes cyborg soldiers. An example of a minor cybernetic enhancement is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed in a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer's A Plague of Demons.
Humans have limitations that make us less than ideal weapons of war. For example, we get tired and need sleep. As such, it is no surprise militaries have sought various ways to augment humans to counter these weaknesses. For example, militaries use caffeine and amphetamines to keep their soldiers awake and alert. There have also been experiments in other forms of improvement.
Science fiction abounds with stories of enhanced soldiers such as Captain America and the Space Marines of Warhammer 40K. The real-world augmentation of soldiers raises a moral concern about informed consent. While fiction abounds with tales of involuntary augmentation, real soldiers and citizens of the United States have also been subjected to experiments without their informed consent.
Military science fiction often includes powered exoskeletons, also known as exoframes, exosuits or powered armor. A basic exoskeleton is a powered framework providing the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty: Advanced Warfare, the exoskeleton provides improved mobility and carrying capacity but does not provide much armor. In contrast, powered armor provides the benefits of an exoskeleton while also providing protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear. The Space Marines of Warhammer 40K and the Sisters of Battle also wear powered armor. While the sisters are "normal" humans, the Space Marines are enhanced super soldiers.