Anyone who has played RTS games such as Blizzard’s Starcraft knows the basics of swarm warfare: you build a swarm of cheap units and hurl them against the enemy’s smaller force of more expensive units. The plan is that although the swarm will be decimated, the enemy will be exterminated. The same tactic appears in the classic tabletop game Ogre, which pits a lone intelligent supertank against a large force of human infantry and armor. And, of course, the real world offers many examples of swarm warfare, some successful for the swarming side (ants taking out a larger foe) and some disastrous (massed infantry attacks on machine guns in WWI).

A modern approach to swarm tactics is to build a swarm of drones and deploy them against the enemy. While such drones will tend to be airborne units, they could also be ground or sea machines. In terms of their attacks, there are many options. The drones could be large enough to be equipped with weapons, such as small-caliber guns, that would allow them to engage and return to reload for future battles. Some might be equipped with melee weapons, poisons, or biological weapons. The drones could also be suicide machines, small missiles intended to damage the enemy by destroying themselves.

While the development of military drone swarms in the United States will incur the usual high cost of developing new weapon technology, the drones themselves can be cheap. After all, they will tend to be much smaller and simpler than crewed weapons such as aircraft, ships and ground vehicles. The main cost will most likely be in developing the software to make the drones operate effectively in a swarm; after that, it will be just a matter of mass producing the hardware.

If effective software and cost-effective hardware can be developed, one of the main advantages of the battle swarm will be its low cost. While such low-cost warfare might be problematic for defense contractors who have grown accustomed to profitable contracts, it is appealing to those concerned about costs and reducing government spending. After all, if low-cost drones could replace expensive units, defense expenses could be significantly reduced. The savings could be used for social programs or, more likely, more tax cuts for the wealthy.

Low-cost units, if effective, can confer an attrition advantage. If, for example, it costs you $12,000 in drones to take down the enemy’s $12,000,000 fighter jet, then you stand a decent chance of winning. If hundreds of dollars of drones can take down millions of dollars of aircraft, then the situation is even better for the side with the drones. Likewise for naval vessels, land vehicles and structures.

The low cost does raise some concerns, though. Once the drone-controlling software makes its way out into the world (via the inevitable hack, theft, or sale), then everyone could use swarms. This will recreate the IED and suicide bomber situation, only on a vastly larger scale. Instead of IEDs in the road, they will be flying around cities, looking for targets. Instead of a few suicide bombers with vests, there will be swarms of drones loaded with explosives. Since Uber comparisons are now mandatory, the swarm will be the Uber of death.

This does raise moral concerns about the development of drone software and technology; but the easy and obvious reply is that there is nothing new about this situation: every weapon ever developed eventually gets around. As such, the usual ethics of weapon development applies here, with due emphasis on the possibility of providing another cheap and effective way to destroy and kill.

One short term advantage of the first swarms is that they will be facing weapons designed primarily to engage small numbers of high value targets. For example, air defense systems now consist mostly of expensive missiles designed to destroy very expensive aircraft. Firing a standard anti-aircraft missile into a swarm will destroy some of the drones (assuming the missile detonates), but enough of the swarm will probably survive the attack for it to remain effective. It is also likely that the weapons used to defend against the drones will cost more than the drones, which ties back into the cost advantage.

This advantage of the drones would be quickly lost if effective anti-swarm weapons were developed. Not surprisingly, gamers have already worked out effective responses to swarms. In D&D and Pathfinder, players generally loathe swarms for the same reason that ill-prepared militaries will loathe drone swarms: while individual swarm members are easy to kill, it is difficult to kill enough of them with standard weapons. In games, players respond to swarms with area-of-effect attacks, such as fireballs (or running away). These sorts of attacks can strike the entire swarm and either eliminate it or reduce its numbers so it is no longer a threat. While the real world has an unfortunate lack of wizards, the same idea will work against drone swarms: cheap weapons that do moderate damage over a large area. One possible weapon is a battery of large, automatic shotguns that fill the sky with pellets or flechettes. Missiles could also be designed that act like claymore mines in the sky, spraying ball bearings in almost all directions. And, obviously enough, swarms will be countered by swarms.

The drones would also be subject to electronic warfare. If they are being remotely controlled, this connection could be disrupted. Autonomous drones would be less vulnerable, but they would still need to coordinate with each other to remain a swarm, and this coordination could be targeted.

The practical challenge would be to make the defenses cheap enough to make them cost effective. Then again, countries whose ruling class is happy to burn money for expensive weapon systems would not need to worry about the costs. In fact, defense contractors will presumably be lobbying for expensive swarm and anti-swarm systems.

The swarms also inherit existing moral concerns about non-swarm drones, be they controlled by humans or deployed as autonomous killing machines. The ethical problems of swarms controlled by a human operator would be the same as the ethical problems of a single drone controlled by a human; the difference in numbers does not make a moral difference. For example, if drone assassination with a single drone is wrong (or right), then drone assassination with a swarm would also be wrong (or right).

Likewise, an autonomous swarm is not morally different from a single autonomous unit in terms of the ethics of the situation. For example, if deploying a single autonomous killbot is wrong (or right), then deploying an autonomous killbot swarm is wrong (or right). That said, perhaps there is a greater chance that an autonomous killbot swarm will develop a rogue hive mind and turn against us. Or perhaps not. In any case, Will Rogers will be proven right once again: “You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.”

In my previous essay, I discussed some possible motivations for groping in VR games, which is now a thing. The focus of what follows is on the matter of protecting gamers from such harassment on the new frontiers of gaming.

Since virtual groping is a paradigm of a first world problem, it might be objected that addressing it is a waste of time. After all, the objection can be made that resources that might be expended on combating virtual groping should be spent on addressing real groping. After all, a real grope is far worse than a virtual grope, and virtual gropes can be avoided by simply remaining outside of the virtual worlds.

This objection has some merits. After all, it is sensible to address problems in order of their seriousness. To use an analogy, if a car is skidding out of control at the same time an awful song comes on the radio, then the driver should focus on getting the car back under control and not waste time on the radio. Unless, of course, it is “The Most Unwanted Song.”

The reasonable reply to this objection is that this is not a situation where there is only one option. While time spent addressing virtual groping is time not spent on addressing real groping, addressing virtual groping does not preclude addressing real groping. Also, pushing this sort of objection can easily lead to absurdity: for anything a person is doing, there is almost certainly something else they could be doing that would have better moral consequences. For example, a person who spends time and money watching a movie could use that time and money to address a real problem, such as crime or drug addiction. But, as has so often been argued, this would impose unreasonable expectations on people and would ultimately create more harm than good. As such, while I accept that real groping is worse than virtual groping, I am not failing morally by taking time to address the virtual rather than the real in this essay.

It could also be objected that there is no legitimate reason to be worried about virtual groping on the obvious grounds that it is virtual rather than real. After all, when people play video games, they routinely engage in virtual violence against each other. Yet this is not seen as a special problem (although virtual violence does have its critics). Put roughly, if it is fine to shoot another player in a game (virtual killing), it should be equally fine to grope another player in a game. Neither the killing nor the groping is real, and hence neither should be taken seriously.

This objection does have some merit but can be countered by considering an analogy to sports. When people are competing in boxing or martial arts, they hit each other, and this is accepted because it is the purpose of the sport. However, it is not acceptable for a competitor to start pawing away at their opponent’s groin in a sexual manner. Punching is part of the sport, groping is not. The same holds for video games. If a person is playing a combat video game that pits players against each other, the expectation is that they will be subject to virtual violence. They know this and consent to it by playing, just as boxers know they will be punched and consent to it. But, unless the players know and consent to playing a groping game, using the game mechanics to virtually grope other players would not be acceptable as they did not agree to that game.

Another counter is that while virtual groping is not as bad as real groping, it can still harm the target of the groping. To use an analogy, being verbally abused over game chat is not as bad as having a person physically present engaging in such abuse, but it is still unpleasant. Virtual groping is a form of non-verbal harassment, intended to get a negative reaction from the target and to make the gaming experience unpleasant. There is also the fact that being the victim of such harassment can rob a player of the enjoyment of the game. And enjoyment is the point of playing. While it is not as bad as groping a player in a real-world game (which would be sexual assault), it has an analogous effect on the player’s experience.

It could be replied that a player should just be tough and put up with the abuse. This reply lacks merit and is analogous to saying that people should just put up with being assaulted, robbed or spit on. It is the reply of an abuser who wants to continue the abuse while shifting blame onto the target.

While players are in the wrong when they engage in virtual groping, there is the question of what gaming companies should do to protect their customers from such harassment. They do have a practical reason to address this concern as players will avoid games where they are subject to harassment and abuse, thus costing the company money. They also have a moral obligation, analogous to the obligation of those in the real world who host an event. For example, a casino that allowed players to grope others with impunity would be failing in its obligation to its customers; the same would seem to hold for a gaming company operating a VR game.

Companies already operate various forms of reporting systems, although their enforcement tends to vary. Blizzard, for example, has policies about how players should treat each other in World of Warcraft. This same approach can and certainly will be applied to VR games that allow a broader range of harassment, such as virtual groping.

Because of factors such as controller limitations, most video games do not have the mechanics that would allow much in the way of groping, although some players work very hard trying to make that happen. While non-VR video games could support things like glove-style controllers that would allow groping, VR games are far more likely to support controllers that would allow players to engage in virtual groping behavior (something that has, as noted above, already occurred).

Eliminating such controller options would help prevent VR groping, but at the cost of taking away a rather interesting and useful aspect of VR controller systems. As such, this is not a very viable option. A better approach would be to put limits in the software on how players can interact with the virtual bodies of other players. While some might suggest a punitive system for when one player’s virtual hands (or groin) contact another player’s virtual naughty bits, the obvious problem is that wily gamers would exploit this. For example, if a virtual hand contacting a virtual groin caused the character damage or filed an automatic report, then some players would try their best to get their virtual groins in contact with other players’ virtual hands. As such, this would be a bad idea.

A better, but less than ideal system, would be to have a personal space zone around each player’s VR body to keep other players at a distance. The challenge would be working this effectively into game mechanics, especially for such things as hand-to-hand combat. It might also be possible to have the software recognize and prevent harassing behavior. So, for example, a player could virtually punch another player but not make grabbing motions on the target’s groin.

It should be noted that these concerns are about contexts in which players do not want to be groped; I have no moral objection to VR applications that allow consensual groping which, I infer, will be popular.

On the positive side, online gaming allows interaction with gamers all over the world. On the negative side, some gamers are horrible. While I have been a gamer since the days of Pong, one of my early introductions to “the horrible” was on Xbox Live. In a moment of deranged optimism, I hoped that chat would allow me to plan strategy with my team members and perhaps make new gamer friends. While this did sometimes happen, the usual experience was an unrelenting spew of insults and threats between gamers. I solved this problem by clipping the wire on a damaged Xbox headset and sticking the audio plug into my controller; the spew continued but had nowhere to go.

There is an iron law of technology: any technology that can be misused will be misused. There are also specific laws that fall under this general law. One is the iron law of gaming harassment: any gaming medium that allows harassment will be used to harass. While there have been many failed attempts at virtual reality gaming, it seems that it might become the new gaming medium. Then again, VR might be analogous to fusion power: it is the gaming tech of the future and always will be. In any case, harassment in online VR games is already a thing. Just as VR is supposed to add a new level to gaming, it also adds a new level to harassment, such as virtual groping. This is an escalation over the harassment options available in most games. Non-VR games are typically limited to verbal harassment and some action harassment, such as tea bagging. For those not familiar with this practice, it is when one player causes their character to rapidly and repeatedly crouch on top of a dead character. The idea is that the player is repeatedly slapping their virtual testicles against the virtual corpse of a foe. This presumably demonstrates contempt for the opponent and dominance on the part of the bagger.

Being a gamer and a philosopher, I do wonder a bit about the motivations of those that engage in harassment and how their motivation impacts the ethics of their behavior. While I will not offer a detailed definition of harassment, the basic idea is that it requires sustained abuse. This is to distinguish it from a quick expression of anger.

In some cases, harassment seems to be motivated by the enjoyment the harasser gets from the response from their target. The harasser is not operating from a specific value system that leads them to attack certain people; they are equal opportunity in their attacks. Back when I listened to what other gamers said, it was easy to spot this sort of person. They would go after everyone and tailor their spew based on what they believed about the target’s identity. As an example, if the harasser thought their target was African American, they would spew racist comments. As another example, if the target was the then exceedingly rare female gamer, they would spew sexist remarks. As a third example, if the target was believed to be a white guy, the attack would usually involve comments about the guy’s mother or assertions that the target is homosexual.

While the above focuses on what a person says, the discussion also applies to the virtual actions in the game. As noted above, some gamers engage in tea-bagging because that is the worst gesture they can make in the game. In games that allow more elaborate interaction, the behavior will tend to be analogous to groping in the real world. This is because such behavior is the most offensive behavior possible in the game and thus will create the strongest reaction.

While a person who enjoys inflicting this sort of abuse does have some moral problems, they are probably selecting their approach based on what they think will most hurt the target rather than based on a principled commitment to sexism, racism or other such value systems. To use an obvious analogy, think of a politician who is not a devoted racist but is willing to use this language to sway a target audience.

There are also those who engage in such harassment as a matter of ideology and values. While their behavior is often indistinguishable from those who engage in attacks of opportunity, their motivation is based on a hatred of specific types of people. While they might enjoy the reaction of their target, that is not their main objective. Rather, the objectives are to express their views and attack the target of their hate because of that hate. Put another way, they are sincere racists or sexists in that it matters to them who they attack. To use the analogy to a politician, they are like a demagogue who believes in their own hate speech.

In terms of virtual behavior, such as groping, these people are not just using groping as a tool to get a reaction. It is an attack to express their views about their target based on their hatred and contempt. Groping might also not merely be a means to an end, but a goal in itself as the groping has its own value to them.

While both sorts of harassers are morally wrong, it is an interesting question as to which is worse. It could be argued that the commitment to evil of the sincere harasser (the true racist or sexist) makes them worse than the opportunist. After all, the opportunist is not committed to evil views; they just use their tools for their amusement. In contrast, the sincere harasser not only uses the tools but believes in their actions and truly hates their target. That is, they are evil for real.

While this is appealing, it is worth considering that the sincere harasser has the virtue of honesty; their expression of hatred is not a deceit.  To go back to the politician analogy, they are like the politician who truly believes in their professed ideology. Their evil does have the tiny sparkle of the virtue of honesty.

In contrast, the opportunist is dishonest in their attacks and thus compounds their other vices with that of dishonesty. To use the politician analogy, they are like the Machiavellian manipulator who has no qualms about using hate to achieve their ends. 

While the moral distinctions between the types of harassers are important, they generally do not matter to their targets. After all, what matters to (for example) a female gamer who is being virtually groped while trying to enjoy a VR game is not the true motivation of the groper, but the groping. Thus, from the perspective of the target, the harasser of opportunity and the sincere harasser are on equally bad moral footing; they are both morally wrong. In the next essay, the discussion will turn to the obligations of gaming companies in regard to protecting gamers from harassment.

Having grown up in the golden age of CB radio, I have fond memories of movies about truck driving heroes played by the likes of Kurt Russell and Clint Eastwood. While such movies were a passing phase, real truck drivers are heroes of the American economy. In addition to moving stuff across this great nation, they also earn solid wages and thus contribute as taxpayers and consumers.

While most media attention has been on self-driving cars, there are plans to replace human truckers with self-driving trucks. The steps towards automation might initially be a boon to truck drivers as these technological advances provide new safety features. This progress will most likely lead to a truck with a human riding as a backup and eventually to a fully automated truck. But perhaps driverless vehicles are the vehicles of the future and always will be.

In terms of the consequences of full automation, there will be some positive impacts. While the automated trucks will be more expensive than manned vehicles initially, not needing to pay drivers will result in savings. There is also the fact that automated trucks, unlike human drivers, would not get tired, bored or distracted. While there will still be accidents, it would be reasonable to expect a significant decrease once technology matures. Such trucks would also be able to operate around the clock, stopping only to load or unload cargo, to refuel and for maintenance. This could increase the speed of deliveries. One can even imagine an automated truck with its own drones that fly away from the truck as it cruises the highway, making deliveries for companies like Amazon. While these will be good things, there will also be negative consequences.

The most obvious negative consequence is the elimination of trucker jobs. Currently, there are about 3.5 million drivers in the United States. There are also about 8.7 million other people employed in the trucking industry. One must also remember all the people indirectly associated with trucking, ranging from people cooking meals for truckers to folks manufacturing or selling products for truckers. Finally, there are also the other economic impacts from the loss of these jobs, ranging from the loss of tax revenues to lost business. After all, truckers do not just buy truck related goods and services.

While the loss of jobs will have a negative impact, it should be noted that the transition from manned trucks to robot rigs will not occur overnight, assuming it occurs at all. There will be a slow transition as the technology is adopted, and there will be several years in which human truckers and robotruckers share the roads. This can allow for a planned transition that will mitigate the economic shock. That said, there will presumably come a day when drivers are given their pink slips and lose their jobs to rolling robots. Since economic transitions resulting from technological changes are nothing new, it could be hoped that this transition would be managed in a way that mitigated the harm to those impacted.

It is also worth considering that the switch to automated trucking will, as technological changes almost always do, create new jobs and modify old ones. The trucks will still need to be manufactured, managed and maintained. As such, new economic opportunities will be created. That said, it is easy to imagine these jobs becoming automated as well: fleets of robotic trucks cruising America, loaded, unloaded, managed and maintained by robots. To close, I will engage in a bit of sci-fi style speculation.

Oversimplifying things, the automation of jobs could lead to a utopian future in which humans are finally freed from the jobs that are fraught with danger and drudgery. The massive, automated productivity could mean plenty for all; thus bringing about the bright future of optimistic fiction. That said, this path could also lead into a dystopia: a world in which everything is done for humans, and they settle into a vacuous idleness they attempt to fill with empty calories and frivolous amusements.

There are, of course, many dystopian paths leading away from automation. Laying aside the usual machine takeover in which AI kills us all, it is easy to imagine a new “robo-plantation” style economy in which a few elite owners control their robot slaves, while the masses have little or no employment. A rather more radical thought is to imagine a world in which humans are almost completely replaced, the automated economy hums along, generating numbers that are noted by the money machines and the few remaining money masters. The ultimate end might be a single computer that contains a virtual economy; clicking away to itself in electronic joy over its amassing of digital dollars while around it the ruins of human civilization decay and the world awaits the evolution of the next intelligent species to start the game anew.

 

Back in 2016, the Dallas police used a remotely operated robot to kill a suspect with a bomb. While this marked a new use for robots in the realm of domestic policing, the decision-making process was conventional. That is, humans decided to use the machine and then a human operator controlled it for the attack. As such, the true policebot is still a thing of science fiction. That said, considering policebots provides an interesting way to discuss police profiling in a speculative setting. While it might be objected that the discussion should focus on real police profiling, there are advantages to discussing controversial matters within a speculative context. One important advantage is that such a setting can help dampen emotional responses and enable a more rational discussion. The speculative context helps make the discussion less threatening to some who might react with greater hostility to discussions focused on the actual world. Star Trek’s discussion of issues of race in the 1960s using science fiction is an excellent example of this sort of approach. Now, to the matter of policebots.

The policebots under consideration are those that would be capable of a high degree of autonomous operation. At the low end of autonomy, they could be deployed to enforce traffic laws on their own, such as tracking speeding and issuing tickets. On the higher end, they could operate autonomously to conduct arrests of suspects who might resist arrest violently. Near the highest end would be robotic police at least as capable as human beings. Beyond that would be supercops.

While there are legitimate worries that policebots could be used as unquestioning servants of the state to oppress and control elements of the population (something we will certainly see), there are also good reasons for using advanced policebots. One obvious advantage is that policebots would be more resilient and easier to repair than human officers. Policebots that are not people would also be more expendable and thus could save human lives by taking on the dangerous tasks of policing (such as engaging armed suspects). Another advantage is that robots will probably not get tired or bored, thus allowing them to patrol around the clock with maximum efficiency. Robots are also unlikely to be subject to the corrupting factors that influence humans or suffer from personal issues, such as going through a divorce. There is also the possibility that policebots could be more objective than human officers. This is, in fact, the main concern of this essay.

Like a human officer, policebots would need to identify criminal behavior. In some cases, this would be easy. For example, an autonomous police drone could easily spot and ticket most speeding violations. In other cases, this would be incredibly complicated. For example, a policebot patrolling a neighborhood would need to discern between children playing at cops & robbers and people engaged in actual violence. As another example, a policebot on patrol would need to be able to sort out the difference between a couple having a public argument and an assault in progress.

In addition to sorting out criminal behavior from non-criminal behavior, policebots would also need to decide on how to focus their attention. For example, a policebot would need to determine who gets special attention in a neighborhood because they are acting suspicious or seem to be out of place. Assuming that policebots would be programmed, the decision-making process would be explicitly laid out in the code. Such focusing decisions would seem to be, by definition, based on profiling, and this gives rise to important moral concerns.

Profiling that is based on behavior would seem to be acceptable, provided that such behavior is clearly linked to criminal activities and not to, as an example, ethnicity. For example, it would seem perfectly reasonable to focus attention on a person who tries to stick to the shadows around houses while paying undue attention to houses that seem to be unoccupied at the time. While such a person might be a shy fellow who likes staring at unlit houses as a pastime, there is a reasonable chance he is scouting the area for a robbery. As such, the policebot would be warranted in focusing on him.
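Since a programmed policebot's focusing criteria would have to be spelled out explicitly, the point can be made concrete with a minimal, purely hypothetical sketch. The function name, factors, and weights below are invented for illustration; the interesting feature is that any demographic factor would have to be written in just as explicitly as the behavioral ones, making the moral choice visible in the code.

```python
# Purely hypothetical sketch of a policebot's "focus of attention" rule.
# All names and weights are invented for illustration.

def attention_score(subject):
    """Score how much attention a subject gets; higher means more scrutiny."""
    score = 0.0
    # Behavior-based factors (arguably acceptable profiling):
    if subject.get("sticking_to_shadows"):
        score += 0.3
    if subject.get("watching_unoccupied_houses"):
        score += 0.5
    # A demographic factor would have to be added just as explicitly,
    # which is what makes the programmer's moral choice visible:
    # if subject.get("age_group") == "young_male":
    #     score += 0.2  # the morally contested line
    return score

print(attention_score({"sticking_to_shadows": True,
                       "watching_unoccupied_houses": True}))  # 0.8
```

Whether a line like the commented-out demographic factor belongs in such code is precisely the moral question at issue; the code merely shows that the decision cannot be left implicit.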

The most obviously controversial area would be using certain demographic data for profiles. Young men tend to commit more crimes than middle-aged women. On the one hand, this would seem to be relevant data for programming a policebot. On the other hand, it could be argued that this would give the policebot a gender and age bias that would be morally wrong despite being factually accurate. It becomes vastly more controversial when data about such things as ethnicity, economic class and religion are considered. If accurate and objective data links such factors to a person being more likely to engage in crime, then a rather important moral concern arises. Obviously enough, if such data were not accurate, then it should not be included.

Sorting out the accuracy of such data can be problematic and there are sometimes circular appeals. For example, the right often defends the higher arrest rate of blacks by claiming that blacks commit more crimes than whites. When it is objected that higher arrest rate could be partially due to bias in policing, the reply is often that blacks commit more crimes and the proof is that blacks are arrested more than whites. That is, the justification runs in a circle.

But suppose that objective and accurate data showed links between demographic categories and crime. In that case, leaving it out of the programming could make policebots less effective. This could have the consequence of allowing more crimes to occur. This harm would need to be weighed against the harm of having the policebots programmed to profile based on such factors. One area of concern is public perception of the policebots and their use of profiling. This could have negative consequences that could outweigh the harm of having less efficient policebots.

Another area of potential harm is that even if the policebots operated on accurate data, they would still end up arresting people disproportionally, thus potentially causing harm that would exceed the harm done by the loss of effectiveness. This also ties into higher level moral concerns about the reasons why specific groups might commit more crimes than others and these reasons often include social injustice and economic inequality. As such, even “properly” programmed policebots could be arresting the victims of social and economic crimes. This suggests an interesting idea for a science fiction story: policebots that decide to reduce crime by going after the social and economic causes of crime rather than arresting people to enforce an unjust social order.

If humanity remains a single planet species, our extinction is all but assured as there are so many ways the world could end. The mundane self-inflicted apocalypses include war and environmental devastation. There are also more exotic dooms suitable for speculative science fiction, such as the robot apocalypse or a bioengineered plague. And, of course, there is the classic big rock from space scenario. While we will certainly bring our problems with us into space, getting off world would dramatically increase our chances of survival as a species.

While species do endeavor to survive, there is the moral question of whether we should do so. While I can easily imagine humanity reaching a state where it would be best if we did not continue, I think that our existence generates more positive value than negative value, thus providing the foundation for a utilitarian argument for our continued existence and endeavors to survive. This approach can also be countered on utilitarian grounds by contending that the evil we do outweighs the good, thus showing that the universe would be morally better without us. But, for the sake of the discussion that follows, I will assume that we should (or at least will) endeavor to survive.

Since getting off world is an excellent way of improving our survival odds, it is ironic that we are not suited for survival in space and on other worlds such as Mars. Obviously enough, exposure to the void would prove fatal very quickly; but even with technological protection our species copes poorly with the challenges of space travel.

While there are many challenges, there are some of special concern. These include the danger presented by radiation, the impact of living in gravity different from earth, the resource challenge, and the travel time problem. Any and all of these can be fatal and must be addressed if humanity is to expand beyond earth.

Our current approach is to use our technology to recreate our home environment. For example, our manned space vessels are designed to provide some radiation shielding, they are filled with air and are stocked with food and water. One advantage of this approach is that it does not require any modification to humans; we simply recreate our home in space or on another planet. There are, of course, many problems with this approach.

One is that our technology is still very limited and cannot properly address many challenges. For example, while artificial gravity is standard in science fiction, we now use mostly ineffective means of addressing the gravity problem. As another example, while we know how to block radiation, there is the challenge of being able to do this effectively on the journey from earth to Mars.

A second problem is that recreating our home environment can be difficult and costly. But it can be worth the cost to allow unmodified humans to survive in space or on other worlds. This approach points towards a Star Trek style future: normal humans operating within a bubble of technology. There are, however, alternatives.

Another approach is also based in technology but aims at either modifying humans or replacing them entirely. There are two main paths here. One is that of machine technology in which humans are augmented to endure the conditions of space and other worlds. The scanners of Cordwainer Smith’s “Scanners Live in Vain” are one example of this. They are modified to survive while operating interstellar vessels. Another example is Man Plus, Frederik Pohl’s novel about a human transformed into a cyborg to survive on Mars. The ultimate end of this path is the complete replacement of humans by intelligent machines, machines designed to match their environments and free of human vulnerabilities and short life spans.

The other is the path of biological technology. On this path, humans are modified biologically to better cope with non-earth environments. These modifications would presumably start modestly, such as genetic modifications to make humans more resistant to radiation and better adapted to lower gravity. As science progressed, the modifications could become more radical, with complete re-engineering of humans to make them ideally match their new environments. This path, unnaturally enough, could lead to the replacement of humans with new species.

These approaches do have advantages. While there would be an initial cost in modifying humans to better fit their new environments, the better the adaptations, the less need there would be to recreate earth-like conditions. This could result in considerable cost savings, and the efficiency and comfort of the modified humans would be greater the better they matched their new environments. There are, however, the usual ethical concerns about such modifications.

Replacing homo sapiens with intelligent machines or customized organisms would also have a high initial startup cost, but these beings would be more effective than humans in the new environments. For example, an intelligent machine would be more resistant to radiation, could sustain itself with solar power, and could be effectively immortal as long as it is repaired. Such a being would be ideal to crew (or be) a deep space mission vessel. As another example, custom-created organisms or fully converted humans could ideally match an environment, living and working in radical conditions as easily as standard humans work on earth. Clifford D. Simak’s “Desertion” discusses such an approach, albeit one that has unexpected results on Jupiter.

In addition to the usual moral concerns about such things, there is also the concern that such creations would not preserve humans. On the one hand, it is obvious that such beings would not be homo sapiens. If the entire species were converted or gradually phased out in favor of the new beings, that would be the end of the species: the biological human race would be no more and the voice of humanity would fall silent. On the other hand, it could be argued that the transition could suffice to preserve the identity of the species. A way to argue this would be to repurpose the arguments used for the persistence of personal identity across time. It could also be argued that while the biological species homo sapiens could cease to be, the identity of humanity is not set by biology but by things such as values and culture. As such, if our replacements retained the relevant connection to human culture and values (they sing human songs and remember the old, old places where once we walked), they would still be human, although not homo sapiens.

Peaceful protest is an integral part of America. As is murder. Back in 2016 the two collided in Dallas, Texas: after a peaceful protest, five police officers were murdered. While some might see it as ironic that police rushed to protect people protesting police violence, this reminds us of how police are supposed to function in a democratic society. This stands in stark contrast with the unnecessary deaths inflicted on citizens by bad officers, deaths that once caused the nation to briefly consider that such deaths might be worth preventing.

While violence and protests are worthy of in-depth discussion, my focus will be on the ethical questions raised by the use of a robot to deliver the explosive device that was used to kill one of the attackers. While this matter was addressed by philosophers more famous than I, I thought it worthwhile to look back to 2016 to see if my thoughts have changed.

While the police robot is called a robot, it is more accurate to say it is a remotely operated vehicle. After all, the term “robot” implies some autonomy on the part of the machine. The police robot is remote controlled, like a sophisticated version of an RC toy. In fact, one could do the same thing by putting an explosive on a toy.

Since there is a human operator directly controlling the machine, the ethics of the matter are the same as if conventional machines of death (such as a gun) had been used to kill the shooter. On the face of it, the only difference is in perception: a killer robot delivering a bomb sounds more ominous and controversial than an officer using a firearm. The use of remote-controlled vehicles to kill targets was nothing new, as the basic technology has been around since at least WWII and the United States has killed many people with drones.

If this had been the first case of an autonomous police robot sent to kill (like an ED-209), then the issue would be different. However, it is a case that falls under established ethics of killing, only with a slight twist in regards to the delivery system. That said, it can be argued that the use of a remote-controlled machine is a morally relevant change.

Keith Abney raised a very reasonable point: if a robot could be sent to kill a target, it could also be sent to use non-lethal force to subdue the target. In the case of human officers, the usual moral justification of lethal force is that it is the best option for protecting themselves and others from a threat. If the threat presented by a suspect can be effectively addressed in a non-lethal manner, then that is the option that should be used. The moral foundation for this is set by the role of police in society: they are supposed to protect the public and should take every legitimate effort to deliver suspects for trial. They are not supposed to function as soldiers sent to defeat enemies. There are, of course, cases in which suspects cannot be safely captured and lethal force can be justified. A robot (or, more accurately, a remote-controlled machine) can radically change the equation.

While a police robot is an expensive piece of hardware, it is not a human being (or even an artificial being). As such, it only has the moral status of property. In contrast, even the worst human criminal is a human being and thus has a moral status above that of an object. If a robot is sent to engage a human suspect, then in many circumstances there would be no moral justification for using lethal force. After all, the officer operating the machine is in no danger. This should change the ethics of the use of force to match other cases in which a suspect needs to be subdued but presents no danger to the officer attempting arrest. In such cases, the machine should be outfitted with less-than-lethal options.

While television and movies make subduing someone safely seem easy, it is difficult to do. For example, the classic rifle butt to the head is a fictional favorite for knocking someone out, when doing that in the real world would cause serious injury or even death. Tasers, gas weapons and rubber bullets also can cause serious injury or death. However, the less-than-lethal options are less likely to kill a suspect and thus allow them to be captured for trial, which is supposed to be the point of law enforcement. Robots could be designed to both withstand gunfire and securely grab a suspect. While this is likely to result in injury (such as broken bones) and could kill, it would be less likely to kill than a bomb. An excellent example of a situation in which a robot would be ideal would be to capture an armed suspect barricaded in a structure.

It must be noted that there will be cases in which the use of lethal force via a robot is justified. These would include cases in which the suspect presents a clear and present danger to officers or civilians and the best chance of ending the threat is the use of such force. An example of this might be a hostage situation in which the hostage taker is likely to kill hostages while the robot is trying to subdue them with less-than-lethal force.

While police robots have long been the stuff of science fiction, they do present a potential technological solution to the moral and practical problem of keeping officers and suspects alive. While an officer might be legitimately reluctant to stake her life on less-than-lethal options when directly engaged with a suspect, an officer operating a robot faces no such risk. As such, if the deployment of less-than-lethal options via a robot would not put the public at unnecessary risk, then it would be morally right to use such means.

While most current body hacking technology is gimmicky and theatrical, it does have potential. It is, for example, easy enough to imagine that the currently very dangerous night-vision eye drops could be made into a safe product, allowing people to hack their eyes. There is also the cyberpunk future envisioned by writers such as William Gibson and games like Cyberpunk and Shadowrun. In such a future, people might body hack their way to being full cyborgs. In the nearer future, there might be augmentations like memory backups for the brain, implanted phones, and even subdermal weapons. Such augmenting hacks raise moral issues that go beyond the basic ethics of self-modification. Fortunately, these ethical matters can be effectively addressed by the application of existing moral theories and principles.

Since the basic ethics of self-modification were addressed in the previous essay, this essay will focus solely on the ethical issue of augmentation through body hacking. This issue does, of course, stack with the other moral concerns.

In general, there seems to be nothing inherently wrong with the augmentation of the body through technology. The easy way to argue for this is to draw the obvious analogy to external augmentation: starting with sticks and rocks, humans augmented their natural capacities. If this is acceptable, then moving the augmentation under the skin should not open a new moral world.

The easy and obvious objection is to contend that under the skin is a new moral world. That, for example, a smart phone carried in a pocket is one thing, while a smartphone embedded in the skull is another.

This objection does have merit: implanting technology is morally significant. At the very least, there are moral concerns about potential health risks. However, this moral concern is about the medical aspects, not about the augmentation itself. This is not to say the health issues are unimportant; they are very important, but they fall under another moral issue.

If it is accepted that augmentation is, in general, morally acceptable, there are still legitimate concerns about specific types of augmentation and the context in which they are employed. Fortunately, there is established moral discussion about these categories of augmentation.

Two areas in which augmentation is of concern are sports and games. Athletes have long engaged in body hacking, if the use of drugs can be considered body hacking. While those playing games like poker generally do not use enhancing drugs, they have attempted to cheat with technology. While future body hacks might be more dramatic, they would seem to fall under the same principles that govern the use of augmenting substances and equipment in current sports. For example, an implanted device that stores extra blood to be added during the competition would be analogous to existing methods of blood doping. As another example, a poker or chess player might implant a computer that she can use to cheat at the game.

While specific body hacks will need to be addressed by the appropriate governing bodies of sports and games, the basic principle that cheating is morally unacceptable still applies. As such, the ethics of body hacking in sports and games is easy enough to handle in the general and the real challenge will be sorting out which hacks are cheating and which are acceptable. In any case, some interesting scandals can be expected.

The field of academics is also an area of concern. Since students are adept at using technology such as AI to cheat, there will be efforts to cheat through body hacking. As with cheating in sports and games, the basic ethical framework is well-established: cheating is morally unacceptable. As with sports and games, the challenge will be sorting out which hacks are considered cheating, and which are not. If body hacking becomes mainstream, it can be expected that education and testing will need to change, as will what counts as cheating. Using an analogy, calculators are now usually allowed on tests, and thus the future might see implanted computers being allowed for certain tests. Testing of memory might also become pointless. If most people have implanted devices that can store data and link to the internet, memorizing things might cease to be a skill worth testing. This does, however, segue into the usual moral concerns about people losing abilities or becoming weaker due to technology. Since these are general concerns that have applied to everything from the abacus to the automobile, I will not address this issue here.

There is also the broad realm composed of all the other areas of life that do not generally have specific moral rules about cheating through augmentation. These include such areas as business and dating. While there are moral rules about certain forms of cheating, the likely forms of body hacking would not seem to be considered cheating in such areas, though they might be regarded as providing an unfair advantage, especially in cases in which the wealthy classes are able to gain even more advantages over the less well-off classes.

As an example, a company might use body hacking to upgrade its employees so they can be more effective, thus providing a competitive edge over lesser companies. While it seems likely that certain augmentations will be regarded as unfair enough to require restriction, body hacking would merely change the means and not the underlying game. That is, the well-off always have advantages over the less well-off. Body hacking would just be a new tool in the competition. Hence, existing ethical principles would apply here as well. Or not be applied, as is so often the case when money is on the line.

So, while body hacking for augmentation will require some new applications of existing moral theories and principles, it does not make a significant change in the moral landscape. Like almost all changes in technology it will merely provide new ways of doing old things. Like cheating in school or sports. Or life.

While body hacking is sometimes presented as being new and radical, humans have been engaged in the practice (under other names) for quite some time. One of the earliest forms of true body hacking was probably the use of prosthetic parts to replace lost pieces, such as a leg or hand. These hacks were aimed at restoring a degree of functionality, so they were practical hacks.

While most contemporary body hacking seems aimed at gimmicks or limited attempts at augmentation, there are serious applications that involve replacement and restoration. One example of this is the color-blind person who is using a skull-mounted camera to provide audio cues regarding colors. This hack serves as a replacement for missing components of the eye, albeit in a somewhat unusual way.

Medicine is, obviously enough, replete with body hacks ranging from contact lenses to prosthetic limbs. These technologies and devices provide people with some degree of replacement and restoration for capabilities they lost or never had. While these sorts of hacks are typically handled by medical professionals, advances in existing technology and the rise of new technologies will result in more practical hacks aimed not at gimmicks but at restoration and replacement. There will also be considerable efforts aimed at augmentation, but this matter will be addressed in the next essay.

Since humans have been body hacking for replacement and restoration for thousands of years, the ethics of this matter are well settled. In general, the use of technology for medical reasons of replacement or restoration is morally unproblematic. After all, this process is simply fulfilling the main purpose of medicine: to get a person as close to their normal healthy state as possible. To use a specific example, there really is no moral controversy over the use of prosthetic limbs that are designed to restore functionality. In the case of body hacks, the same general principle would apply and hacks that aim at restoration or replacement are morally unproblematic. That said, there are some potential areas of concern.

One area of both moral and practical concern is the risk of amateur or DIY body hacking. The concern is that such hacking could have negative consequences. This might be due to bad design, poor implementation or other causes. For example, a person might attempt a hack to replace a missing leg and have it fail catastrophically, resulting in a serious injury. This is, of course, not unique to body hacking; it is a general matter of good decision making.

As with health and medicine in general, it is usually preferable to go with a professional rather than an amateur or a DIY approach (at least in serious matters). Also, the possibility of harm makes it a matter of moral concern. That said, there are many people who cannot afford professional care and technology will afford people an ever-growing opportunity to body hack for medical reasons. This sort of self-help can be justified on the grounds that some restoration or replacement is better than none. This assumes that self-help efforts do not result in worse harm than doing nothing. As such, body hackers and society will need to consider the ethics of the risks of amateur and DIY body hacking. Guidance can be found here in existing medical ethics, such as moral guides for people attempting to practice medicine on themselves and others without proper medical training.

A second area of moral concern is that some people will engage in replacing fully functional parts with body hacks that are equal or inferior to the original (augmentation will be addressed in the next essay). For example, a person might want to remove a finger to replace it with a mechanical finger with a built-in USB drive. As another example, a person might want to replace her eye with a camera comparable or inferior to her natural eye.

One clear moral concern is the potential dangers in such hacks as removing a body part can be dangerous. One approach would be to weigh the harms and benefits of such hacking. On the face of it, such replacement hacks would seem to be at best neutral, that is, the person will end up with the same capabilities as before. It is also possible, perhaps likely, that the replacement attempt will result in diminished capabilities, thus making the hack wrong because of the harm inflicted. Some body hackers might argue that such hacks have a value beyond functionality. For example, the value of self-expression or achieving a state of existence that matches one’s conception or vision of self. In such cases, the moral question would be whether these factors are worth considering and if they are, how much weight they should be given morally.

There is also the worry that such hacks would be a form of unnecessary self-mutilation and thus at best morally dubious. A counter to this is to argue, as John Stuart Mill did, that people have a right to self-harm, provided they do not harm others. That said, arguing that people do not have a right to interfere with self-harm (provided the person is acting freely and rationally) does not entail that self-harm is morally acceptable. It is certainly possible to argue against self-harm on utilitarian grounds and based on moral obligations to oneself. Arguments from the context of virtue theory would also apply, as self-harm is contrary to developing one’s excellence as a person.

These approaches could be countered. Utilitarian arguments can be met with utilitarian arguments that offer a different evaluation of the harms and benefits. Arguments based on obligations to oneself can be countered by arguing that there are no such obligations or that the obligations one does have allow for this sort of modification. Arguments from virtue theory could be countered by attacking the theory itself or showing how such modifications are consistent with moral excellence.

My own view, which I consistently apply to other areas such as drug use, diet, and exercise, is that people have a moral right to freedom of self-abuse and harm. This requires that the person can make an informed decision and is not coerced or misled. As such, I hold that a person has every right to DIY body hacking. Since I also accept the principle of harm, I hold that society has a moral right to regulate body hacking of others as other similar practices (such as dentistry) are regulated. This is to prevent harm being inflicted on others. Being fond of virtue theory, I do hold that people should not engage in self-harm, even though they have every right to do so without having their liberty restricted. To use a concrete example, if someone wants to spoon out her eyeball and replace it with an LED light, then she has every right to do so. However, if an untrained person wants to set up a shop and scoop eyeballs for replacement with lights, then society has every right to prevent that. I do think that scooping out an eye would be both foolish and morally wrong, which is also how I look at heroin use and smoking tobacco.

It might seem like woke madness to claim that medical devices can be biased. Are there white supremacist stethoscopes? Misogynistic MRI machines? Extremely racist X-ray machines? Obviously not: medical devices do not have beliefs or ideologies (yet). But they can still be biased in their accuracy and effectiveness.

One example of a biased device is the pulse oximeter. This device measures blood oxygen by using light. You have probably had one clipped on your finger during a visit to your doctor. Or you might even own one. The bias in this device is that it is three times more likely to fail to reveal low oxygen levels in dark-skinned patients than in light-skinned patients. As would be expected, there are other devices that have problems with accuracy when used on people who have darker skin. These are essentially sensor biases (or defects). In most cases, these can be addressed by improving the sensors or developing alternative devices. The problem, to exaggerate a bit, is that most medical technology is made by white men for white men. This is not to claim such biased devices are all cases of intentional racism and misogyny. There is not, one assumes, a conspiracy against women and people of color in this area, but there is a bias problem. In addition to biased hardware, there is also biased software.
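The pulse oximeter problem can be illustrated with a toy simulation. Suppose, purely hypothetically, that a sensor overestimates blood oxygen by a couple of points on darker skin; then a fixed alarm threshold calibrated without that in mind will silently miss far more hypoxemic dark-skinned patients. All numbers here are invented for illustration, not clinical data.

```python
import random

random.seed(0)

def reading(true_spo2, dark_skin):
    # Hypothetical sensor model: slight overestimate on darker skin,
    # plus normal measurement noise. The bias magnitude is made up.
    bias = 2.5 if dark_skin else 0.0
    return true_spo2 + bias + random.gauss(0, 1)

def missed_rate(dark_skin, trials=10_000, alarm_below=90):
    # Fraction of truly hypoxemic patients (true SpO2 = 88) whose
    # reading stays at or above the alarm threshold, i.e. is missed.
    missed = sum(reading(88, dark_skin) >= alarm_below for _ in range(trials))
    return missed / trials

light = missed_rate(dark_skin=False)
dark = missed_rate(dark_skin=True)
print(f"missed hypoxemia, light skin: {light:.1%}")
print(f"missed hypoxemia, dark skin:  {dark:.1%}")
```

Even a small systematic offset in the sensor translates into a large gap in missed diagnoses, which is why fixing the sensor (or recalibrating per skin tone) matters more than any single threshold choice.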

Many medical devices use software, and it is often used in medical diagnosis. People are often inclined to think software is unbiased, perhaps because of science fiction tropes about objective and unfeeling machines. While it is true that our current software does not feel or think, bias can make its way into the code. For example, software used to analyze chest X-rays would work less well on women than on men if the software was “trained” only on X-rays of men. The movie Prometheus has an excellent fictional example of a gender-biased auto-doc that lacks the software to treat female patients.

These software issues can be addressed by using diverse training data and by testing the software for bias with a diverse testing group. Having a more diverse set of people working on such technology would probably also help.
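The training-data problem can also be sketched with a toy simulation (all numbers invented): a one-feature classifier calibrated only on simulated male scans loses most of its accuracy on simulated female scans, because the healthy baseline differs between the groups it was and was not trained on.

```python
import random

random.seed(1)

# Hypothetical setup: one image-derived feature separates "disease"
# from "healthy", but its healthy baseline differs by sex (made up).
def feature(disease, female):
    base = 5.0 if female else 0.0      # sex-specific baseline
    signal = 3.0 if disease else 0.0   # disease shifts the feature up
    return base + signal + random.gauss(0, 1)

# "Train" on men only: threshold at the midpoint of the class means.
healthy_men = [feature(False, female=False) for _ in range(500)]
sick_men = [feature(True, female=False) for _ in range(500)]
threshold = (sum(healthy_men) / 500 + sum(sick_men) / 500) / 2

def accuracy(female, n=2000):
    correct = 0
    for i in range(n):
        disease = i % 2 == 0           # balanced test set
        predicted = feature(disease, female) > threshold
        correct += predicted == disease
    return correct / n

acc_men = accuracy(female=False)
acc_women = accuracy(female=True)
print(f"accuracy on men:   {acc_men:.1%}")
print(f"accuracy on women: {acc_women:.1%}")
```

The classifier is fine on the population it was trained on and near chance on the other, which is exactly the kind of failure a diverse test group would catch before deployment.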

Another factor, analogous to user error, is user bias. People, unlike devices, do have biases, and these can and do impact how they use medical devices and their data. Bias in healthcare is well documented. While overt and conscious racism and sexism are rare, subtle racism and sexism are still problems. Addressing this widespread problem is more challenging than addressing biases in hardware and software. But if we want fair and unbiased healthcare, it is a problem that must be addressed.

As to why these biases should be addressed, this is a matter of ethics. To allow bias to harm patients goes against the fundamental purpose of medicine, which is to heal people. From a utilitarian standpoint, addressing this bias would be the right thing to do: it would create more positive value than negative value. This is because there would be more accurate medical data and better treatment of patients.

In terms of a counterargument, one could contend that addressing bias would increase costs and thus should not be done. There are several easy and obvious replies. One is that the cost increase would be, at worst, minor. For example, testing devices on a more diverse population would not seem meaningfully more expensive than not doing so. Another is that patients and society pay a far greater price in terms of illness and its effects than it would cost to address medical bias. For those focused on the bottom line, workers who are not properly treated can cost corporations some of their profit, and ongoing health issues can cost taxpayer money.

One can, of course, advance racist and sexist arguments by professing outrage at “wokeness” attempting to “ruin” medicine by “ramming diversity down throats” or however Fox News would put it. Such “arguments” would be aimed at preserving the harm done to women and people of color, which is an evil thing to do. One might hope that these folks would be hard pressed to turn, for example, pulse oximeters into a battlefront of the culture war. But these are the same folks who professed to lose their minds over Mr. Potato Head and went on a bizarre rampage against a grad-school-level theory that has been around since the 1970s. They are also the same folks who have gone anti-vax during a pandemic, encouraging people to buy tickets in the death lottery. But the right thing to do is to choose life.