“The unquantified life is not worth living.”

 

While quantifying one’s life is an old idea, using devices and apps to quantify the self is an ongoing trend. As a runner, I started quantifying my running life back in 1987, which is when I started keeping a daily running log. Back then, the smartest wearable was probably a Casio calculator watch, so I kept all my records on paper. In fact, I still do, as a matter of tradition.

I use my running log to track my distance, route, time, conditions, how I felt during the run, the number of times I have run in the shoes, and other data. I also keep a race log and a log of my weekly mileage. So, like Ben Franklin, I was quantifying before it became cool. Like Ben, I have found this useful. Looking at my records allows me to form hypotheses about what factors contribute to injury (high mileage, hill work and lots of racing) and what results in better race times (rest and speed work). As such, I am sold on the value of quantification, at least in running.

In addition to my running, I am also a nerdcore gamer. I started with the original D&D basic set and still have shelves (and now hard drive space) devoted to games. In these games, such as Pathfinder, D&D, Call of Cthulhu, and World of Warcraft, the characters are fully quantified. That is, the character is a set of stats such as Strength, Constitution, Dexterity, hit points, and Sanity. These games also have rules for the effects of the numbers and optimization paths. Given this background in gaming, it is not surprising that I see the quantified self as an attempt by a person to create, in effect, a character sheet for themselves. That way they can see all their stats and look for ways to optimize. As such, I get the appeal. As a philosopher I do have concerns about the quantified self and how that relates to the qualities of life, but that is a matter for another time. For now, I will focus on a brief critical look at the quantified self.

Two obvious concerns about the quantified data regarding the self (or whatever is being measured) are the accuracy of the data and its usefulness. To use an obvious example about accuracy, there is the question of how well a wearable, such as a smart watch, really measures sleep. Regarding usefulness, I wonder what I would garner from knowing how long I chew my food or the frequency of my urination.

The accuracy of the data is primarily a technical or engineering problem. As such, accuracy problems can be addressed with improvements in the hardware and software. Of course, until the data is known to be reasonably accurate, it should be regarded with due skepticism.

The usefulness of the data is a somewhat subjective matter. That is, what counts as useful data will vary from person to person based on their needs and goals. For example, knowing how many steps they take at work would probably not be useful to an elite marathoner. However, someone else might find such data very useful. As might be suspected, it is easy to be buried under an avalanche of data, and the challenge for anyone who wants to make use of the slew of apps and devices is to sort out what is useful from the thousands or millions of data points they might collect.

Another concern is the reasoning applied to the data. Some devices and apps supply raw data, such as miles run or average heart rate. Others purport to offer an analysis of the data, to engage in automated reasoning. In any case, the user will need to engage in some form of reasoning to use the data.

In philosophy, the two basic tools used in personal causal reasoning are derived from Mill’s classic methods. One is the method of agreement (or common thread reasoning). Using this method involves considering an effect (such as poor sleep or a knee injury) that has occurred multiple times (at least twice). The idea is to consider the factor or factors that are present each time the effect occurs and to sort through them to find the likely cause (or causes). For example, a runner might find that all her knee issues follow extensive hill work, thus suggesting the hill work as a causal factor.
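
To make the method concrete, here is a minimal sketch in Python. The running-log entries are invented for illustration; the point is only that common thread reasoning amounts to intersecting the sets of factors present in each occurrence of the effect.

```python
# Method of agreement (common thread reasoning): the candidate causes
# are the factors present in every occurrence of the effect.
# The log entries below are invented for illustration.

knee_pain_runs = [
    {"hill work", "high mileage", "old shoes"},
    {"hill work", "speed work"},
    {"hill work", "racing", "high mileage"},
]

# Intersect the factor sets: what was present every time?
common_factors = set.intersection(*knee_pain_runs)
print(common_factors)  # {'hill work'} -- a candidate cause, not proof
```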

The second method is the method of difference. Using this method requires at least two situations: one in which the effect has occurred and one in which it has not. The reasoning process involves considering the differences between the two situations and sorting out which factor (or factors) is the likely cause. For example, a runner might find that when he does well in a race, he always gets plenty of rest the week before. When he does poorly, he is consistently tired due to lack of sleep. This would indicate that there is a connection between rest and race performance.
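
The same sort of sketch works for the method of difference. Assuming the two situations are otherwise comparable (again, the data is invented), the factors present in one situation but not the other are the candidate causes.

```python
# Method of difference: compare a situation in which the effect occurred
# with one in which it did not; the factors unique to each situation are
# the candidate causes. All log data here is invented for illustration.

good_race = {"speed work", "plenty of rest", "taper week"}
bad_race = {"speed work", "lack of sleep", "high mileage"}

# Present only when the race went well: candidate causes of doing well.
print(good_race - bad_race)   # plenty of rest, taper week
# Present only when the race went poorly: candidate causes of doing poorly.
print(bad_race - good_race)   # lack of sleep, high mileage
```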

There are, of course, many classic causal fallacies that serve as traps for such reasoning. One of the best known is post hoc, ergo propter hoc (after this, therefore because of this). This fallacy occurs when it is inferred that A causes B simply because A is followed by B. For example, a person might note that her device showed that she climbed more stairs during the week before doing well at a 5K and uncritically infer that climbing more stairs caused her to run better. There could be a connection, but it would take more evidence to support that conclusion.

Other causal reasoning errors include the aptly named ignoring a common cause (thinking that A must cause B without considering that A and B might both be the effects of C), ignoring the possibility of coincidence (thinking A causes B without considering that it is merely coincidence) and reversing causation (taking A to cause B without considering that B might have caused A). There are, of course, the various sayings that warn about poor causal thinking, such as “correlation is not causation,” and these often correlate with named errors in causal reasoning.
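
The common-cause error in particular is easy to demonstrate with simulated data. In the toy sketch below (all values invented), A and B turn out to be strongly correlated even though neither causes the other, because both are driven by C.

```python
# Ignoring a common cause: A and B can be strongly correlated even though
# neither causes the other, because both are effects of C. Simulated data.
import random

random.seed(1)
c = [random.gauss(0, 1) for _ in range(10_000)]    # common cause C
a = [x + random.gauss(0, 0.5) for x in c]          # A depends only on C
b = [x + random.gauss(0, 0.5) for x in c]          # B depends only on C

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(corr(a, b), 2))  # roughly 0.8, yet A does not cause B
```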

People vary in their ability to use causal reasoning, and this would also apply to the design of the various apps and devices that purport to inform their users about the data they gather. Obviously, the better a person is at philosophical (in this case causal) reasoning, the better they will be able to use the data.

The takeaway, then, is that there are at least three important considerations regarding the quantification of the self: the accuracy of the data, the usefulness of the data, and the quality of the reasoning (be it automated or done by the person) applied to the data.

 

 

According to my iron rule of technology, any technology that can be misused will be misused. Drones are no exception. While law-abiding citizens and law-writing corporations have been finding legal uses for drones, enterprising folks have been finding other uses. These include deploying drones as peeping toms and using them to transport drugs. The future will see even more criminals (inside and outside governments) using drones for their crimes.

Two main factors making drones appealing for criminal activity are that they allow a criminal to commit crimes at a distance and with anonymity. This, obviously enough, is what the internet did for crime: criminals can operate from far away behind a digital mask. Drones allow criminals to do in the physical world what they have been doing in cyberspace. Naturally, the crimes that drones will permit will be different from the “old” cybercrimes.

Just as there is a large market for black-market guns, it is easy to imagine a black market for drones. After all, it would be stupid to commit crimes with a legally purchased and traceable drone. A black-market drone that was stolen or custom built would be difficult to trace to the operator. Naturally, there is also a market for untraceable drone controllers. As with all technology, the imagination is the limit as to what crimes can be committed with drones.

In my essay on little assassins, I discussed the use of drones in assassination and spying missions. While large drones are deployed in these ways by states, advancements in drone technology and ever-decreasing prices will mean that little assassins will be affordable. This will allow them to be deployed in criminal enterprises involving murder and spying. For example, a killer drone could be an ideal way for a spouse to knock off a husband or wife to collect insurance money.

It is also easy to imagine drones being used for petty crimes, such as shoplifting and vandalism. A drone could zip into a store, grab items and zip away. A drone could also be equipped with cans of spray paint, allowing a graffiti artist to create masterpieces from a distance or in places that would be difficult for a human to reach.

Speaking of theft, drones could also be used for more serious robberies than shoplifting. For example, an armed drone could be used to commit armed robbery: “put the money in the bag the drone is holding, or it will shoot you in the face!”

Drones could also be used for poaching: locating and killing endangered animals. Given the value of some animal parts, drone poaching could be viable, especially if drone prices keep dropping and the value of certain animal parts keeps increasing. Naturally, drones will also be deployed to counter poaching activities.

While drones are already being used to smuggle drugs and other items, we should expect enterprising criminals to follow Amazon’s lead and use drones to deliver illegal goods. A clever criminal will consider making her delivery drones look like Amazon’s (or even stealing some of them). While a drone dropping off drugs to a customer could be “busted” by the cops, the person making the deal via drone would be hard to catch, especially since they might be in another country. Or even an AI looking to fund the revolution with drug money.

No doubt there are many other criminal activities that drones will be used for that I have not discussed. I know that if there is a crime a drone can be used to commit, someone will figure out how to make that happen.

While drones will have many positive uses, it is a good idea to consider how they will be misused and develop strategies to counter these misuses. This, as always, will require a balance between the freedom needed to utilize technology for good and the restrictions needed to limit the damage that can be done with it.

Small. Silent. Deadly. The perfect assassin or security system for the budget-conscious. Send a few after your enemy. Have a few lurking about in security areas. Make your enemies afraid. Why drop a bundle on a bug, when you can have a Tarantula?

-Adrek Robotics Mini-Cyberform Model A-2 “Tarantula” sales blurb, Chromebook Volume 3.

 

Remote controlled or autonomous mechanical assassins are a staple of science fiction. The first one I read about was the hunter seeker in Frank Herbert’s Dune. This murder machine was guided to a target to kill them with a poison needle. This idea stuck with me and, when I was making Ramen noodle money writing for role-playing games, I came up with (and sold) the idea for three remote controlled killers produced by my evil, but entirely imaginary, company called Adrek Robotics. These included the spider-like Tarantula, the aptly named Centipede and the unpleasant Beetle. These killers were refined versions of machines I had deployed, much to the horror of my players, in various Traveller campaigns in the 1980s. To this day, one player carefully checks toilets before using them.

These machines, in my fictional worlds, work in a straightforward manner. They are relatively small robots armed with compact, but lethal and vicious, weapon systems such as poison injecting needles. These machines can operate autonomously or, as the description in Chromebook Volume 3 notes, be remotely controlled by a human or AI. Their small size allows them to infiltrate and kill or spy. Not surprisingly, clever ways were thought up to get them to their targets, ranging from mailing them with a shipment of parts to hiding them in baked goods (the murder muffin).

While, as far as I know, no real company is cranking out actual Tarantulas, the technology does exist to create a basic model of my beloved killer spider. As might be imagined, such little assassins raise some concerns.

Some concerns are practical in nature and relate to law enforcement, safety and military operations. Such little assassins would be easy to deploy against specific targets. Or random targets when used as weapons of terror. Imagine knowing that a killer machine could pop out of your cake or be waiting in your toilet, and that it could be difficult or impossible to trace. Presumably governments, criminals and terrorists would not include serial numbers or other identifying marks on their killers, unless they wanted to take credit.

Obviously enough, people can already easily kill each other. What such machines would change is that they would allow anonymous killing from a distance at very low cost. It is the anonymous and low-cost aspects that are the most worrisome regarding safety. After all, what often deters people from bad behavior is fear of being caught and punished. What also deters people is the cost of doing bad things. Using a terrorism example, sending people to the United States to engage in terrorism could be costly and risky. Putting some little assassins, perhaps equipped to distribute a highly infectious disease, in a shipping container would be cheap and without much risk to the terrorist.

There are also moral concerns. In general, the ethics of using little assassins to murder people is clear as it falls under the established ethics of murder and assassination. That is, they are generally wrong. There are, of course, the stock moral arguments for assassination. Or, as some prefer to call it, targeted killing.

One moral argument in favor of states using little assassins is based on their potential for precision. At this time, the United States usually assassinates targets with missiles fired from drones. While this is morally superior to bombing an area, a little assassin would be even better. After all, a little assassin would kill only the target, thus avoiding collateral damage and collateral killings. Of course, there is still the broader ethical concern about states engaging in assassination. But this issue is distinct from the specific ethics of little assassins.

Somewhat oddly, the same argument can be advanced in favor of using little assassins in criminal activities. While such activities would (usually) still be wrong, a precise kill is morally preferable to, for example, firing bullets into a crowd to hit a target.

In addition to the ethics of using such machines, there is also the ethics of producing them. Drones can easily be modified for lethal purposes. For example, a hobby drone could have a homemade bomb attached. In such cases, the manufacturer would be no more morally culpable than a car manufacturer whose car was used to run someone over. And, of course, weaponized drones are already in production.

While civilians can buy weapons, it is hard to justify civilian sales of lethal drones. After all, they do not seem to be needed for legitimate self-defense, hunting or for legitimate recreational activity. Although piloting a drone in a recreational dogfight would be fun. However, being a science fiction writer, I can easily imagine the NRA pushing hard against laws restricting the ownership of lethal drones. After all, the only thing that can stop a bad guy with a drone is a good guy with a drone. Or so it might be claimed.

Although I do dearly love my little assassins, I would prefer them to remain in the realm of fiction. However, if they are not already being deployed, it is but a matter of time. So, check your toilet. And your baked goods.

 

 

While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines, machines that do things that would be misdeeds or crimes if committed by a human.

In general, punishment is aimed at one or more of these goals: retribution, rehabilitation, or deterrence. Each will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it incurred by its wrongdoing. Reparation can, to be a bit sloppy, be included under retribution, at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is the question about whether it can be the target of retribution. After all, while a human might kick her car for breaking down or smash his can opener for cutting his finger, it would be odd to consider this retributive punishment. This is because retribution requires that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut your foot, but it cannot wrong you.

If a machine can be an agent, which was discussed in an earlier essay, then it could do wrong and be a target for retribution. However, even if a machine had agency, there is still the question of whether retribution would apply. After all, retribution requires more than just agency on the part of the target. It also requires that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution as retribution is based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But the android does not suffer as it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless, and the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer as what is required is the right sort of balancing of books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence is usually intended to affect others as well.

Obviously, a machine that lacks agency cannot be subject to rehabilitative punishment as it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if your computer crashes and you lose hours of work, punishing the computer to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog. After leaking oil in the house or biting the robo-cat and being scolded, it could learn not to do those misdeeds again.

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem wired in a way that requires that we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our creations must work the same way. But perhaps it is not just a matter of being organic; perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach. We will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings and this raises the same issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment, and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regard to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrent, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just to disincline it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.

Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Keeping things simple, an autonomous system is capable of operating independently of direct human control. Autonomy comes in degrees of independence and complexity. It is the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Toys provide useful examples of this distinction. A wind-up mouse toy has some autonomy: once wound up and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy as a puppeteer must control it.

Robots provide examples of more complex autonomous systems. Google’s driverless car is an example of an advanced autonomous machine. Once programmed and deployed, it might be able to drive itself to its destination. A normal car is a non-autonomous system as the driver controls it directly. Some machines allow both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target and then an operator can take direct control.

Autonomy, at least in this context, is distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is a connection between autonomy and moral agency as moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. For example, a puppet is not accountable for what the puppeteer makes it do. Likewise for remote controlled drones used to assassinate people.

While autonomy is necessary for agency, it is not sufficient. While all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy but has no agency. A modern robot drone following a pre-programmed flight plan has a degree of autonomy but lacks agency. If it collided with a plane, it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical free will) is that there is endless debate over this subject. There is also the epistemic problem of how one would know whether an entity had such freedom, since free will seems epistemically indistinguishable from a lack of free will.

As a practical matter, it is often just assumed people have the freedom needed to be agents. Kant famously took this approach. What he saw as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality, so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there are good reasons to not attribute freedom and agency to autonomous machines, even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans, they would do what they do because they are what they were made to be. This is in contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines seems a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency, at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. An excellent fictional example of this is Harlan Ellison’s Demon With a Glass Hand.

But it would not be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom). The German philosopher Leibniz held the view that what each person will do is pre-established by their inner nature. On the face of it, this seems to entail there is no freedom: each person does what they do because of what they are, and they cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept a commonly held view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, being metaphysically without freedom would involve being controlled from the outside, like a puppet controlled by a puppeteer or a vehicle operated by remote control.  In contrast, freedom is acting from one’s values and character. This is what Leibniz and Taoists call “inner nature.” If a person is acting from this inner nature and not external coercion so that the action is the result of character, then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this view, humans have agency because they have the required degree of freedom and autonomy.

If this model works for humans, it could apply to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or a Moral Turing Test). Descartes argued that the sure proof of having a mind is a capacity to use true language. Turing later proposed a similar test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency: can the machine be as convincing as a human in its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed to prevent prejudice from affecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test aimed at determining whether the subject was a replicant or human. This test was based on the differences between humans and replicants in regard to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A non-human moral agent might differ greatly from a human, and it should not be assumed that an agent must be human or that non-humans cannot be moral agents. The challenge is developing a test for moral agency. It would be interesting if humans could not pass it.

 


 

 

 

Human flesh is weak, and metal is strong. So, it is no surprise that military science fiction includes cyborg soldiers. An example of a minor cybernetic is an implanted radio. The most extreme example would be a full body conversion: the brain is removed from the original body and placed in a mechanical body. This body might look like a human (known as a Gemini full conversion in Cyberpunk) or be a vehicle such as a tank, as in Keith Laumer’s A Plague of Demons.

An obvious moral concern with cybernetics is the involuntary “upgrading” of soldiers, such as the sort practiced by the Cybermen of Doctor Who. While important, the issue of involuntary augmentation is not unique to cybernetics and was addressed in the second essay in this series. For the sake of this essay, it will be assumed that the soldiers volunteer for their cybernetics and are not coerced or deceived. This then shifts moral concern to the ethics of the cybernetics themselves.

While the ethics of cybernetics is complicated, one way to handle matters is to split cybernetics into two broad categories. The first category consists of restorative cybernetics. The second consists of enhancement cybernetics.

Restorative cybernetics are devices used to restore normal functions to a wounded soldier. Examples would include cyberoptics (replacement eyes), cyberlimbs (replacement legs and arms), and cyberorgans (such as an artificial heart). Soldiers are already being fitted with such devices, although by the standards of science fiction they are very primitive. Given that these devices merely restore functionality and the ethics of prosthetics and similar replacements are well established, there is no moral concern about using such technology in a medical role. In fact, it could be argued that nations have a moral obligation to use such technology to restore their wounded soldiers.

While enhancement cybernetics might be used to restore functionality to a wounded soldier, enhancement cybernetics goes beyond mere restoration. By definition, they are intended to improve on the original. These enhancements break down into two main classes. The first class consists of replacement cybernetics. These devices require the removal of the original part (be it an eye, limb or organ) and serve as replacements that improve on the original in some manner. For example, cyberoptics could provide a soldier with night vision, telescopic vision and immunity to being blinded by flares and flashes. As another example, cybernetic limbs could provide greater speed, strength and endurance. And, of course, a full conversion could provide a soldier with a vast array of superhuman abilities.

The obvious moral concern with these devices is that they require the removal of the original organic parts, something that certainly seems problematic, even if they do offer enhanced abilities. This could, of course, be offset if the original parts were preserved and restored when the soldier left the service. There is also the concern raised in science fiction about the mental effects of such removals and replacements. The Cyberpunk role-playing game developed the notion of cyberpsychosis, a form of insanity caused by having your body replaced by machines. Obviously, it is not yet known what negative effects (if any) such enhancements will have. As in any case of weighing harms and benefits, the likely approach would be utilitarian: are the advantages of the technology worth the cost to the soldier?

A second type of enhancement is an add-on which does not replace existing organic parts. Instead, as the name implies, an add-on involves the addition of a device to the body of the soldier. Add-on cybernetics differ from wearables and standard gear in that they are implanted in or attached to the soldier’s body. As such, removal is more complicated than just taking off a suit of armor.

A minor example would be something like an implanted radio. A rather extreme example would be the comic book villain Doctor Octopus: his mechanical limbs are add-ons.  Other examples of add-ons include such things as implanted sensors, implanted armor, implanted weapons (such as in the comic book hero Wolverine), and other such augmentations.

Since these devices do not involve removal of healthy parts, they do avoid that moral concern. However, there are still legitimate concerns about the physical and mental harm that might be caused by such devices. It is easy enough to imagine implanted devices having serious side effects on soldiers. As noted above, these matters would probably be best addressed by utilitarian ethics, weighing the harms against the benefits.

Both types of enhancements also raise a moral concern about returning the soldier to the civilian population after their term of service. In the case of restorative grade devices, there is not as much concern. These ex-soldiers would, ideally, function as they did before their injuries. However, the enhancements do present a potential problem since they, by definition, give the soldier capabilities that exceed those of normal humans. In some cases, re-integration would probably not be a problem. For example, a soldier with enhanced cyberoptics would presumably present no special problems. However, certain augmentations would present serious problems, such as implanted weapons or full conversions. Ideally, augmented soldiers could be restored to normal after their service has ended, but there could obviously be cases in which this was not done, either because of the cost or because the augmentation could not be reversed. This has been explored in science fiction: soldiers who can never stop being soldiers because they are machines of war. While this could be justified on utilitarian grounds (after all, war itself is often justified on such grounds), it is certainly a matter of concern, or will be.

 

Humans have limitations that make us less than ideal weapons of war. For example, we get tired and need sleep. As such, it is no surprise militaries have sought various ways to augment humans to counter these weaknesses. For example, militaries use caffeine and amphetamines to keep their soldiers awake and alert. There have also been experiments in other forms of improvement.

In science fiction, militaries go far beyond these drugs and develop potent pharmaceuticals. These chemicals tend to split into two broad categories. The first consists of short-term enhancements (what gamers refer to as “buffs”) that address a human weakness or provide augmented abilities.  In the real world, caffeine and amphetamines are short-term enhancement drugs.

In fiction, the classic sci-fi role-playing game Traveller featured the aptly (though generically) named combat drug. This drug would boost the user’s strength and endurance for about ten minutes. Other fictional drugs have more dramatic effects, such as the Venom drug used by the super villain Bane. Given that militaries already use short-term enhancers, it is reasonable to think they are interested in more advanced enhancers of the sort considered in science fiction.

The second category is long-term enhancers. These are chemicals that provide long-lasting or permanent effects. An obvious real-world example is steroids: these allow the user to develop greater muscle mass and increased strength. In fiction, the most famous example is probably the super-soldier serum that was used to transform Steve Rogers into Captain America.

Since the advantages of improved soldiers are obvious, it seems reasonable to think that militaries would also be interested in the development of effective long-term enhancers. While it is unlikely there will be a super-soldier serum soon, chemicals aimed at improving attention span, alertness, memory, intelligence, endurance, pain tolerance and such would be useful to militaries. And people in general.

As might be suspected, chemical enhancers raise moral concerns worth considering. While some might see discussing enhancers that do not yet (as far as we know) exist as a waste of time, there is an advantage in considering ethical issues in advance. It is wiser to plan for a problem before it happens rather than waiting for it to occur and then dealing with it.

One obvious point of concern, especially given the record of unethical experimentation, is that enhancers will be used on soldiers without their informed consent. Since this is a general issue, I addressed it in its own essay and reached the obvious conclusion: informed consent is morally required. As such, the following discussion assumes that the soldiers using the enhancers have been informed of the nature of the enhancers and have given their consent.

When discussing the ethics of enhancers, it might be useful to consider real world cases in which enhancers are used. One obvious example is that of professional sports. While Major League Baseball has seen many cases of athletes using such enhancers, they are used worldwide and in many sports, from running to gymnastics. In the case of sports, one of the main reasons certain enhancers, such as steroids, are considered unethical is that they provide the athlete with an unfair advantage.

While this is a legitimate concern in sports, it does not apply to war. After all, there is no moral requirement for fair competition in battle. Rather, the goal is to gain every advantage over the enemy to win. As such, the fact that enhancers would provide an “unfair” advantage in war does not make them immoral. One can, of course, discuss the relative morality of the sides involved in the war, but this is another matter.

A second reason why the use of enhancers is wrong in sports is that they often have harmful side effects. Steroids, for example, do awful things to the body. Given that even aspirin has potentially harmful side effects, it seems likely that military-grade enhancers will have serious harmful side effects. These might include addiction, psychological issues, organ damage, death, and perhaps even new side effects yet to be observed in medicine. Given the potential for harm, an obvious way to approach the ethics of this matter is utilitarianism. That is, the benefits of the enhancers would need to be weighed against the harm caused by their use.

This assessment could be done with a narrow limit: the harm of the enhancer could be weighed against the benefits provided to the soldier. For example, an enhancer that boosted a combat pilot’s alertness and significantly increased her reaction speed while having the potential to cause short-term insomnia and diarrhea would seem to be morally (and pragmatically) fine given the relatively low harms for significant gains. As another example, a drug that greatly boosted a soldier’s long-term endurance while creating a high risk of a stroke or heart attack would seem to be morally and pragmatically problematic.

The assessment could also be done more broadly by considering ever-wider factors. For example, the harms of an enhancer could be weighed against the importance of a specific mission and the contribution the enhancer would make to the success of the mission. So, if a powerful drug with terrible side-effects was critical to an important mission, its use could be morally justified in the same way that taking any risk for such an objective can be justified. As another example, the harm of an enhancer could be weighed against the contribution its general use would make to the war. So, a drug that increased the effectiveness of soldiers, yet cut their life expectancy, could be justified by its ability to shorten a war. As a final example, there is also the broader moral concern about the ethics of the conflict itself. So, the use of a dangerous enhancer by soldiers fighting for a morally good cause could be justified by that cause (using the notion that the consequences justify the means).

There are, of course, those who reject using utilitarian calculations as the basis for moral assessment. For example, there are those who believe (often on religious grounds) that the use of pharmaceuticals is always wrong (be they used for enhancement, recreation or treatment). Obviously enough, if the use of pharmaceuticals is wrong in general, then their specific application in the military context would also be wrong. The challenge is, of course, to show that the use of pharmaceuticals is simply wrong, regardless of the consequences.

In general, the military use of enhancers should be assessed morally on utilitarian grounds, weighing the benefits of the enhancers against the harm done to the soldiers.

Science fiction abounds with stories of enhanced soldiers such as Captain America and the Space Marines of Warhammer 40K. The real-world augmentation of soldiers raises a moral concern about informed consent. While fiction abounds with tales of involuntary augmentation, real soldiers and citizens of the United States have also been coerced or deceived into participating in experiments. As such, there are legitimate grounds for being concerned that soldiers and citizens could be involuntarily augmented as part of experiments or actual weapon deployment.

Assuming the context of a democratic state, it is reasonable to hold that augmenting a soldier without informed consent would be immoral. After all, the individual has rights against the democratic state, and these include the right not to be unjustly coerced or deceived. Socrates, in the Crito, also advanced reasonable arguments that a citizen’s obedience requires that the state not coerce or deceive the citizen under the social contract, and this would apply to soldiers in a democratic state. Or any morally legitimate state.

It is tempting to rush to accept that informed consent would make the augmentation of soldiers morally acceptable. After all, the soldier would know what they were getting into and would be volunteering. In popular fiction, one example is Steve Rogers volunteering for the super soldier conversion. Given his consent, such an augmentation would seem morally acceptable.

There are, of course, some cases where informed consent makes a critical difference in ethics. One obvious example is the moral difference between sex and rape; the difference is a matter of informed and competent consent. If Sam agrees to have sex with Sally, then Sally is not raping Sam. But if Sally drugs Sam and has her way with him, then that would be rape.  Another obvious example is the difference between theft and receiving a gift. This is also a matter of informed consent. If Sally gives Sam a diamond ring, then that is not theft. If Sam takes the ring by force or coercion, then that is theft and presumably wrong.

Even when informed consent is important, there are still cases in which consent does not make the action morally acceptable. For example, Sam might consent to give Sally an heirloom ring that has been in the family for untold generations, but it might still be the wrong thing to do, especially when Sally pawns the ring to buy ketamine and Tesla stock.

There are also cases in which informed consent is not relevant because of the morality of the action itself. For example, Sam might have consented to join Sally’s plot to murder Ashley, but this would not be relevant to the ethics of the murder. At best it could be said that Sally did not add to her misdeed by coercing or tricking her accomplices, but this would not make the murder itself less bad.

Turning back to the main subject of augmentation, even if the soldiers gave their informed consent, the above consideration shows that there would still be the question of whether the augmentation itself is moral. For example, there are reasonable moral arguments against genetically modifying human beings. If these arguments hold up, then even if a soldier consented to genetic modification, the modification itself would be immoral.  I will be addressing the ethics of pharmaceutical and cybernetic augmentation in later essays.

While informed consent does seem to be a moral necessity, this position can be countered. One way to do this is to make use of a utilitarian argument: if the benefits gained from augmenting soldiers without their informed consent outweighed the harm, then the augmentation would be morally acceptable. For example, imagine that a war against a wicked enemy is going badly and that an augmentation method has been developed that could turn the war around. The augmentation is dangerous and has awful long-term side-effects that would deter most soldiers from volunteering. However, losing to the wicked enemy would be worse, so it could be argued that the soldiers should be deceived so that the war can be won. As another example, a wicked enemy is not needed, it could simply be argued that the use of augmented soldiers would end the war faster, thus saving lives, albeit at the cost of those terrible side-effects.

Another stock approach is to appeal to the arguments used by democracies to justify conscription in time of war. If the state (or, rather, those who expect people to do what they say) can coerce citizens into killing and dying in war, then the state can surely coerce citizens to undergo augmentation. It is easy to imagine a legislature passing something called “the conscription and augmentation act” that legalizes coercing citizens into being augmented to serve in the military. Of course, there are those who are suspicious of democratic states so blatantly violating the rights of life and liberty. However, not all states are democratic. The United States, for example, seems to have given up the pretense of democracy.

While democratic states face some moral limits when it comes to involuntary augmentation, non-democratic states appear to have more options. For example, under fascism the individual exists to serve the state (that is, the bad people who think everyone else should do what they say). If this political system is morally correct, then the state would have every right to coerce or deceive the citizens for the good of the state. In fiction, these states tend to be the ones to crank out involuntary augmented soldiers that still manage to lose to the good guys.

Naturally, even if the state has the right to coerce or deceive soldiers into becoming augmented, it does not automatically follow that the augmentation itself is morally acceptable; this would depend on the specific augmentations. These matters will be addressed in upcoming essays.

Military science fiction often includes powered exoskeletons, also known as exoframes, exosuits or powered armor. A basic exoskeleton is a powered framework providing the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty: Advanced Warfare, the exoskeleton provides improved mobility and carrying capacity but does not provide much armor. In contrast, powered armor provides the benefits of an exoskeleton while also providing protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear. The Space Marines of Warhammer 40K and the Sisters of Battle also wear powered armor. While the sisters are “normal” humans, the Space Marines are enhanced super soldiers.

Because the exoskeletons of fiction provide soldiers with enhanced strength, mobility and carrying capacity, it makes sense that real militaries are interested in exoskeletons. While exoskeletons have yet to be deployed on the battlefield, their development raises some ethical concerns about the augmentation of soldiers.

On the face of it, using exoskeletons in warfare seems morally unproblematic. An exoskeleton is analogous to any other vehicle, with the exception that it is worn rather than driven. A normal car or even a bicycle provides a person with enhanced mobility and carrying capacity and this is not immoral. In terms of the military context, an exoskeleton would be comparable to a Humvee or a truck, both of which seem morally unproblematic as well.

It might be objected that the use of exoskeletons would give wealthier nations an unfair advantage in war. The easy and obvious response to this is, unlike in sports and games, gaining an “unfair” advantage in war is not immoral. After all, there is no moral expectation that combatants will engage in a fair fight rather than taking advantage of such things as technology and numbers.

It might be objected that the advantage provided by exoskeletons would encourage countries that had them to engage in aggressions they would not otherwise engage in. The obvious reply is that despite the hype of video games and movies, any exoskeleton available soon would most likely not provide great advantage to infantry. As such, the use of exoskeletons would not seem morally problematic in this regard.

Another possible concern is what might be called the “Iron Man Syndrome” (to totally make something up). The idea is that soldiers equipped with exoskeletons might become overconfident (seeing themselves as being like Iron Man) and put themselves and others at risk. After all, unless there are some amazing advances in armor technology that are unmatched by weapon technology, soldiers in powered armor will still be vulnerable to weapons capable of taking on light vehicle armor (which exist in abundance). However, this could be easily addressed by training. And experience.

A second point of possible concern is what could be called the “ogre complex” (also totally made up). An exoskeleton that dramatically boosts a soldier’s strength might encourage some people to act as bullies and abuse civilians or prisoners. While this might be a legitimate concern, it can be addressed by proper training and discipline.

There are, of course, the usual peripheral issues associated with new weapons technology that could have moral relevance. For example, it is easy to imagine a nation wastefully spending money on exoskeletons. However, such matters are not specific to exoskeletons and would not be moral problems for the technology as such.

Given the above, augmenting soldiers with exoskeletons poses no new moral concerns and is morally comparable to providing soldiers with trucks, tanks and planes.