
Asteroid and lunar mining are the stuff of science fiction, but there are those working to make them a reality. While space mining might seem far-fetched, asteroids and the moon contain useful resources. Although the idea probably brings to mind images of belters extracting gold, one of the most valuable resources in space is water. Though cheap and plentiful on earth, water is very expensive to transport into space. While the most obvious use of space water is for human consumption, it also provides raw material for fuels and has many uses in industry. Naturally, miners will also seek minerals, metals and other resources.

My love of science fiction, especially GDW’s classic role playing game Traveller, makes me like the idea of space mining. For me, that is part of the future we were promised. But, as a philosopher, I have ethical concerns.

As with any sort of mining, two moral concerns are the impact on the environment and the impact on humans. Terrestrial mining has been devastating to the environment. This includes the direct damage caused by extracting the resources and the secondary effects, such as lasting chemical contamination. These environmental harms in turn affect human populations, both directly (a failed retaining wall that causes drowning deaths) and indirectly (such as contamination of the water supply). As such, mining on earth involves serious moral concerns. In contrast, space mining would seem to avoid these problems.

Unlike the heavily populated planet earth, asteroids and the moon are lifeless rocks in space. As such, they do not seem to have any ecosystems to damage. While the asteroids that are mined will often be destroyed in the process, it is difficult to argue that destroying an asteroid would be wrong based on environmental concerns. While destroying the moon would be bad, mining operations there would seem to be morally acceptable because one could argue that there is no environment to worry about.

Since space mining takes place in space, the human population of earth will (probably) be safely away from any side effects of mining. It is worth noting that should humans colonize the moon or asteroids, then space mining could harm these populations. But, for the foreseeable future, there will be no humans living near the mining areas. Because of the lack of harm, space mining would seem to be morally acceptable.

It might be objected that asteroids and the moon should be left unmined despite the absence of life and ecosystems. The challenge is making the case for why mining lifeless rocks would be wrong. One possible approach is to contend that the asteroids and the moon have rights that would make mining them wrong. However, rocks do not seem to be the sort of thing that can have rights. Another approach is to argue that people who care about asteroids and the moon would be harmed by their mining. While I am open to arguments that would grant these rocks protection from mining, the burden of proof is on those who wish to make this claim.

Thus, it would seem there are not any reasonable moral arguments against the mining of the asteroids based on environmental concerns or potential harm to humans. That could, of course, change if ecosystems were found on asteroids or if it turned out that the asteroids performed an important role in the solar system that affected terrestrial ecosystems. While this result favors space mining, the moral concerns are not limited to environmental harms.

There are, as would be suspected, the usual moral concerns about the working conditions and pay of space miners. Of course, these concerns are not specific to space mining and going into labor ethics would take this short essay too far afield. However, the situation in space does make the ethics of ownership relevant.

From a moral standpoint, the ethics of who can rightfully claim ownership of asteroids and the moon is of great concern. From a practical standpoint, it is reasonable to expect that matters will play out as usual: those with guns and money will decide who owns the space rocks. If it follows the usual pattern, corporations will end up owning the rocks and will exploit them. But how things will probably play out does not determine how they should play out. Fortunately, philosophers considered this sort of situation long ago.

While past philosophers probably did not give much thought to space mining, asteroids (and the moon) fit into the state of nature scenarios envisioned by thinkers like Hobbes and Locke. They are resources in abundance with no effective authority over them. Naturally, earthly authorities can act against people involved in activities in space, but it will be quite some time before there are space police (though we have a Space Force).

Since there are no rightful owners (or, alternatively, we are all potentially rightful owners), it is tempting to claim the resources are there for the taking. That is, the resources belong to whoever, in Locke’s terms, mixes their labor with them and makes them their own (or more likely their employer’s own). This does have a certain appeal. After all, if my fellows and I in Mike’s Space Mining construct a robot ship that flies out to an asteroid and mines it, we seem to have earned the right to those resources through our efforts. Before our ship mined it for water and metal, these valuable resources were just drifting in space, surrounded by rock. It would thus seem to follow that we would have the right to grab as many asteroids as we can. To be fair, our competitors would have the same right. This would be a rock rush in space.

But Locke also has his proviso: those who take from the common resources must leave as much and as good for others. While this proviso has been grotesquely violated on earth, the asteroids provide us with a new opportunity to consider how to share (or not) these abundant resources.

It can be argued that there is no obligation to leave as much and as good for others in space and that things should be on a strict first grab, first get approach. After all, the people who get their equipment into space would have done the work (or put up the money) and hence (as argued above) be entitled to all they can grab and use or sell. Other people are free to grab what they can, if they have access to the resources needed to reach and mine the asteroids. Naturally, the folks who lack the resources to compete will end up, as they always do, out of luck. 

While this has a certain selfish appeal, a case can be made for sharing. One obvious reason is that the people who reach the asteroids first to mine them did not create the ability to do so out of nothing. After all, reaching the asteroids will be the result of centuries of human civilization that made such technology possible. As such, there would seem to be a general debt owed to humanity and paying this off would involve contributing to the general good of humanity. Naturally, this line of reasoning can be countered by arguing that successful miners will benefit humanity when their profits “trickle down” from space. It could also be argued that the idea of a debt to past generations is absurd as is the notion of the general good of humanity. This is, of course, the view that the selfish and ungrateful would embrace.

Second, there is concern for not only the people who are alive today but also for the people to be. To use an analogy, think of a buffet line at a party. The fact that I am first in line does not give me the right to devour everything I can stuff into my snack port. If I did that at a party, I would be rightly seen as a terrible person. It also does not give me the right to grab whatever I cannot eat so I can sell it to those who have the misfortune to be behind me in line. Again, if I did that, I would be rightly regarded as a horrible person who should be banned from parties. So, these resources should be treated in a similar manner, namely fairly and with some concern for those who are behind the first people in line. As such, the mining of space resources should include limits aimed at benefiting those who do not happen to get there first to grab the goodies. To be fair, behavior that would get a person kicked out of a party is often lauded in the business world, for that realm normalizes and lauds awful behavior.

In closing, it should be noted that space is really big. Because of this, it could be argued that there are plenty of resources out there, so it is morally acceptable for the people who get there first to grab as much as they can. After all, no matter how much they grab, there will be plenty left. While this does have some appeal, there is an obvious problem: it is not just a matter of how much is out there, but how much can be reached at this time. Going back to the buffet analogy, if I stuffed myself with as much as I could grab and started trying to sell the rest to others behind me in line, then yelling “there are other buffets out there” would not get me off the moral hook.

It is common practice to sequence infants to test for various conditions. From a moral standpoint, it seems obvious that these tests should be applied and expanded as rapidly as cost and technology permit (if the tests are useful, of course). The main argument is utilitarian: these tests can find dangerous, even lethal conditions that might not be otherwise noticed until it is too late. Even when such conditions cannot be cured, they can often be mitigated. As such, there would seem to be no room for debate on this matter. But, of course, there is.

One concern is the limited availability of medical services. Once an infant is sequenced, parents will need experts to interpret the results. If sequencing is expanded, this will involve dividing limited resources, which will create the usual problems. While the obvious solution is to train more people to interpret results, this faces the usual problems of expanding the number of available medical experts. Another resource problem will arise when problems are found. Parents who have the means will want to address the issues the tests expose, but not everyone has the resources. Also of concern is the fact that conditions that can be found by sequencing can manifest at different times: some will become problems early in life, others manifest later. This raises the problem of distributing access to the limited number of specialists so that infants with immediate needs get priority access.

One obvious reply to the concerns about access is that this is not a special problem for infant sequencing; it runs broadly across health care. And, of course, there is already a “solution”: the rich and connected get priority access to care. The same “solution” will presumably also be applied in the case of sequencing infants.

Another sensible reply to these concerns is that these are not problems with sequencing, but problems with the medical system: shortages of medical experts and difficulty in accessing the system based on need. Sequencing infants will put more burden on the system, and this does raise the moral question of whether the burden will be worth the return. On the face of it, of course, improving medical care for infants would seem to be worth it.

A second concern about sequencing is that, like other medical tests, it might end up doing more harm than good. On the face of it, this might seem an absurd thing to claim: how could a medical test do more harm than good? After all, knowing about potential health threats ahead of time is analogous to soldiers knowing of an upcoming ambush, or a community knowing about an incoming storm before it arrives. In all these cases, foreknowledge is good because it allows people to prepare and makes it more likely that they will succeed. As such, sequencing is the right thing to do.

While this view of foreknowledge is plausible, medical tests are not an unmitigated good. After all, medical tests can create anxiety and distress that cause more harm than the good they do. There is also an established history of medical tests that are wasteful and, worse, those that end up causing significant medical harm. Because of the potential for such harms, it would be unethical to simply rush to expand sequencing. Instead, the accuracy and usefulness of the tests need to be determined.

It might be countered, with great emotion, that if even a single child is saved by rapidly expanding sequencing, then it would be worth it. The rational reply is, of course, that it would not be worth it if expanding the sequencing too quickly ended up hurting many children. As such, the right thing to do is to address the possible risks rationally and avoid getting led astray by fear and hope.

 


In 2013 Defense Distributed created a working pistol using an $8,000 3D printer. This raised the specter of people printing guns and created quite a stir. The company made the news again in 2018 when Cody Wilson, an anarchist and the owner of the company, was the subject of a lawsuit aimed at banning him from selling files for printing guns. As expected, this re-ignited the moral panic of 2013. Most recently, it is alleged that UnitedHealthcare CEO Brian Thompson was killed with a printed pistol and silencer.

While the idea of criminals, terrorists and others printing their own guns might seem alarming, it is important to consider the facts. As has often been pointed out, the 3D printer needed to make a functioning gun costs about $5,000 on the low end. By comparison, an AR-15 costs between $800 and $1,200, while decent 9mm pistols are in the $400-700 range. As such, 3D printing a gun does not make much financial sense unless a person is making guns in bulk. If a person wants a gun, they can easily buy several good guns for less than the cost of the printer.

A second important point is that the most basic printed gun is not much of a gun: it is a single-shot, low-caliber weapon. While it could hurt or kill a person, it would be almost useless for someone intending to engage in a mass shooting and probably not very useful in most criminal endeavors. A criminal or terrorist would be foolish to choose such a weapon over a normal gun. While better guns can be printed, as the shooting of Thompson seems to illustrate, they are not as good as a manufactured firearm.

One reasonable reply to this view is to note that there are people who cannot legally own guns but who can own a 3D printer. These people, the argument goes, could print guns to commit their misdeeds. The easy and obvious reply is that a person willing to break the law to illegally possess a printed gun (and use it in crimes) can easily acquire a manufactured gun for less than the cost of the printer.

It can be countered that there are, for whatever reason, people who want an illegal gun but are unable or unwilling to buy a manufactured gun illegally. For them, the printed gun would be their only option. But guns can be made using legal hardware readily available at a hardware store. This sort of improvised gun (often called a “zip gun”) is easy to make. Directions for these weapons are readily available on the internet and the parts are cheap. For those who cannot acquire bullets, there are even plans to make pneumatic weapons. Printing a gun just automates the process of making a homemade gun at a relatively high cost. So, the moral panic over the printed gun is fundamentally misguided: it is just a technological variant of the worry that bad people will make guns at home. And the reality is that the more sensible worry is that bad people will just buy or steal manufactured guns.

While people do make their own guns, people prefer manufactured guns when engaging in crimes and terrorist attacks for obvious reasons. Thus, being worried about the threat posed by 3D printers and gun plans is like being worried about hardware stores and plans for zip guns. While people can use them to make weapons, people are more likely to use them for legitimate purposes and get their weapons some other way, such as buying or stealing them.

One could persist in arguing that the 3D printed gun could still be the only option for some terrorists. But I suspect they would forgo making homemade guns in favor of homemade bombs. After all, a homemade bomb is far more effective than a homemade gun for terrorism. As such, there seems to be little reason to be worried about people printing guns to commit crimes or carry out terrorist attacks. Manufactured guns and more destructive weapons are readily available to everyone in the United States, so bans on printing guns or their plans would not make us any safer in terms of crime and terrorism. That said, a concern does remain.

While printing a gun to bypass the law makes little sense, there is the reasonable concern that people will print guns to bypass metal detectors. While the stock printed gun uses a metal firing pin, it would be easy enough to get this through security. The rounds would, of course, pose a bit of a challenge—although non-metallic casings and bullets can be made. With such a gun, a would-be assassin could get into a government building, or a would-be terrorist could get onto a plane. Or so one might think.

While this is a matter of concern, there are two points worth noting. First, as mentioned above, the stock printed gun is a single-shot, low-caliber weapon, which limits the damage a person can do with it. Second, while the gun is plastic, it is not invisible. It can be found by inspection and would show up on an X-ray or body scan. As such, the threat posed by such guns is low. There is also the fact that one does not need a 3D printer to make a gun that can get past a metal detector.

While the printers available to most people cannot create high quality weapons, there is the concern that advances will allow the affordable production of effective firearms. For example, a low-cost home 3D printer that could produce a fully functional assault rifle or submachinegun would be a problem. Of course, the printer would still need to be a cheaper and easier option than just buying or stealing guns, which is incredibly easy in the United States.

As a final point of concern, there is also the matter of the ban on gun plans. Some have argued that making the distribution of these plans illegal violates the First Amendment, which provides a legal right. There is also the moral right of free expression. In this case, as in others, it is a matter of weighing the harms of the expression against the harm inflicted by restricting it. Given the above arguments, the threat presented by printable guns does not warrant restricting the freedom of expression. As such, outlawing such plans would be immoral. To use an analogy, it would be like banning recipes for unhealthy foods and guides on how to make cigarettes when both are readily available for purchase everywhere in the United States.

 

 

 

While exoskeletons are being developed primarily for military, medical and commercial applications, they have obvious potential for use in play. For example, new sports might be created in which athletes wear exoskeletons to enable greater performance.

From a moral standpoint, the use of exoskeletons in sports designed for them raises no special issues. After all, the creation of motorized sports is as old as the motor and this territory is well known. As such, exoskeletons in sports designed for them are no different from the use of racing cars or motorcycles. In fact, exoskeleton racing is likely to be one of the first exoskeleton sports.

It is worth noting that exoskeletons could be added to existing sports such as cross-country running, track or football. But the idea of using mechanized technology in such sports doesn’t really break new ground. To illustrate, having runners compete while wearing exoskeletons would be like having bicyclists replace their pedaled bikes with electric bikes. This would simply create a new, mechanized sport.

Adding exoskeletons to existing sports could create safety problems. For example, American football with exoskeletons could be lethal. As another example, athletes running around a track with exoskeletons could result in serious collision injuries. However, these matters do not create new ethical territory. Issues of equipment and safety are old concerns and can be resolved for exoskeletons, most likely after some terrible accidents, using established moral principles about safe competition. For example, there are already principles governing the frequency and severity of tolerable injuries in sports that would also apply to exosports. Naturally, each sport does tend to have different levels of what is considered tolerable (football versus basketball, for example), so the specific details for these new sports will need to be sorted out. Another area of moral concern is the use of exoskeletons in cheating.

While current exoskeleton technology would be impossible to hide during athletic competitions like running and biking, future exoskeletons could be hidden under clothing and could be used to cheat. While this would create a new way to cheat, it would not require the creation of any new ethical theory about cheating. After all, what matters most morally in cheating is the cheating, not the specific means used. As such, whether an athlete is getting an unfair edge with an exoskeleton, blood doping, performance-enhancing drugs, or cutting the course, they are cheating and hence doing something wrong.

While exoskeletons have yet to be used to cheat, there is already an established concept of the use of “technological fraud” in competition. The first apparent case appeared a few years ago, when a cyclist was accused of using a bike with a motor concealed in its frame. Since people had speculated about this possibility, there were already terms for it: “mechanical doping” and “bike doping.” Using a hidden exoskeleton would be analogous to using a hidden motor on a bike. The only difference is that the hidden motor directly enhances the bike while an exoskeleton for the biker would enhance them. But there is no moral difference whether the motor is enhancing the bike directly or enhancing the athlete.  As such, the ethics of cheating with an exoskeleton are already settled, even before exo-cheating has occurred.

One final, somewhat sci-fi, concern is that the use of exoskeletons will weaken people. While a person must move to use an exoskeleton, the ones used for play will enhance a person’s abilities and do much of the work for them. Researchers are already talking about running at 20 MPH through the woods for hours without getting tired. While I admit that this sounds fun (aside from colliding with trees), a worry is that this would be more like riding a motorcycle (which does all the work) than riding a bike (which augments the effort).

An obvious reply is to point out that I myself compared exorunning to riding a motorcycle. The use of an exoskeleton would not be fundamentally different from riding a motorcycle through the woods, and there is nothing wrong with that (on designated trails). This is a reasonable point, and I have no more objection to people exorunning (in designated areas) for entertainment than I do to people riding motorcycles (in designated areas). However, I do worry that exoskeletons could make things too easy for people.

While things like mobility scooters do exist, an exoskeleton would go beyond them. After all, a full body exoskeleton would not only provide easy mobility, but also do the work for the person’s arms. While this would be a blessing for a person with a serious medical condition, it would enable otherwise healthy people to avoid even the small amount of exercise most people cannot avoid today (like walking from their car to work or a store).

The sensible reply to my concern is to point out that most people do not use mobility scooters to get around when they do not actually need them, so the same would hold true of exoskeletons (assuming they become as cheap as mobility scooters). However, given the impact of automobiles and other technology on fitness levels, it is worth having some concern about the harmful effects of exoskeletons making things too easy. Unlike a car, a person could wear their exoskeleton into their workplace or the store, avoiding all the need to walk on their own. While the movie WALL-E did not have exoskeletons, it did show the perils of technology that makes things far too easy for humans and it is worth keeping that in mind as a (fictional) cautionary tale.

 

An exoskeleton is a powered frame that attaches to the body to provide support and strength. The movie Live, Die Repeat: Edge of Tomorrow featured combat exoskeletons. These fictional devices allow soldiers to run faster and longer while carrying heavier loads, giving them an advantage in combat. There are also peaceful applications of the technology, such as allowing people with injuries to walk and augmenting human abilities for the workplace. For those concerned with fine details of nerdiness, exoskeletons should not be confused with cybernetic parts (which fully replace body parts, such as limbs or eyes) or powered armor (like that used in the novel Starship Troopers and by Iron Man).

As with any new technology, the development of exoskeletons raises ethical questions. Fortunately, humans have been using technological enhancements since we started being human, so this is familiar territory. Noel Sharkey raises one moral concern, namely that “You could have exoskeletons on building sites that would help people not get so physically tired, but working longer would make you mentally tired and we don’t have a means of stopping that.” His proposed solution is an exoskeleton that switches off after six hours.

A similar problem arose with earlier technology that reduced the physical fatigue of working. For example, the development of early factory and farming equipment allowed people to work longer hours and more efficiently. Modern technology has made such work even easier. For example, a worker can drive a high-tech farm combine as easily as driving a car. Closer analogies to exoskeletons include such things as forklifts and cranes: a person can operate those to easily lift heavy loads that would be exhausting or impossible to lift with mere muscles. So, Sharkey’s concern would also apply to the forklift: a person could drive one around for six hours and not be very tired physically yet become mentally tired. As such, whatever moral solutions are applicable to the problem of forklifts also apply to exoskeletons.

Mental overwork is not a problem limited to exoskeletons or technology in general. After all, many jobs are not very physically tiring and people can keep writing legal briefs, teaching classes and managing workers to the point of mental exhaustion without being physically exhausted.

For those who consider such overwork to be undesirable, the solution lies in workplace regulation or the (always vain) hope that employers will do the right thing. Without regulations protecting workers from being overworked, employers would presumably either buy exoskeletons without timers or develop work-arounds, such as resetting timers.

Also, exoskeletons themselves do not get tired, so putting a timer on an exoskeleton would be like putting a use timer on a forklift. Doing so would reduce the value of the equipment, since it could not be used for multiple shifts. As such, that sort of timer system would be unfair to employers in that they would be paying for equipment that should be usable round the clock but would instead be limited. An easy fix would be a system linking the timer to the worker: the exoskeleton timer would reset when equipped by a new worker. But this raises problems about building work limits into hardware rather than setting them through regulation and policy. In any case, while exoskeletons would be new in the workplace, they add nothing new to the moral landscape. Technology that allows workers to be mentally overworked while not being physically overworked is nothing new, and existing solutions can be applied if exoskeletons become part of the workplace, just as was done when forklifts were introduced.

 

In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in their old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can allow a person to have the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, people worried that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetency resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will affect these factors, so concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making creating and editing images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills.  Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.

My name is Dr. Michael LaBossiere, and I am reaching out to you on behalf of the CyberPolicy Institute at Florida A&M University (FAMU). Our team of professors, who are fellows with the Institute, has developed a short survey aimed at gathering insights from professionals like yourself in the IT and healthcare sectors regarding healthcare cybersecurity.

The purpose of The Florida A&M University Cyber Policy Institute (Cyπ) is to conduct interdisciplinary research that documents technology’s impact on society and provides leaders with reliable information to make sound policy decisions. Cyπ will help produce faculty and students who will be future experts in many areas of cyber policy. https://www.famu.edu/academics/cypi/index.php

Your expertise and experience are invaluable to us, and we believe that your participation will significantly contribute to our research paper. The survey is designed to be brief and should take no more than ten minutes to complete. Your responses will help us better understand the current security landscape and challenges faced by professionals in your field, ultimately guiding our efforts to develop effective policies and solutions for our paper. We would be happy to share our results with you.

To participate in the survey, please click on the following link: https://qualtricsxmfgpkrztvv.qualtrics.com/jfe/form/SV_8J8gn6SAmkwRO5w

We greatly appreciate your time and input. Should you have any questions or require further information, please do not hesitate to contact us at michael.labossiere@famu.edu

Thank you for your consideration and support.

Best regards,

Dr. Yohn Jairo Parra Bautista, yohn.parrabautista@famu.edu

Dr. Michael C. LaBossiere, michael.labossiere@famu.edu

Dr. Carlos Theran, carlos.theran@famu.edu

https://dukeroboticsys.com/

Taking the obvious step in drone technology, Duke Robotics developed a small armed drone called the Tikad. Israel also developed a sniper drone that it is using in Gaza. These drones differ from earlier armed drones, like the Predator, in that they are small and relatively cheap. As with many other areas of technology, the main innovations are in ease of use and lower cost. This makes the small armed drones more accessible than previous drones, which is both good and bad.

On the positive side, the military and police can deploy more drones and reduce human casualties (at least for the drone users). For example, the police could send a drone in to observe and possibly engage during a hostage situation and not put officers in danger.

On the negative side, the lower cost and ease of use mean that armed drones are easier to deploy by terrorists, criminals and oppressive states. The typical terrorist group cannot afford a drone like the Predator and might have difficulty in finding people who can operate and maintain such a complicated aircraft. But smaller armed drones can be operated and serviced by a broader range of people. That said, Duke Robotics should not be singled out for criticism for doing the obvious, as people have been thinking about arming drones since drones were invented.

Inexpensive gun drones do raise the usual concerns associated with remotely operated weapons. The first is the concern that operators of drones can be more aggressive than forces that are physically present and at risk of the consequences of engaging in violence. However, it can also be argued that an operator is less likely to be aggressive because they are not in danger and the literal and metaphorical distance will allow them to respond with more deliberation. For example, a police officer operating a drone might elect to wait longer to confirm that a suspect is pulling a gun than they would if they were present. Then again, they might not; this is a matter of training and reaction, with the very practical challenge of teaching officers to delay longer when operating a drone while not delaying too long in person.

A second concern is accountability. A drone allows the operator anonymity and assigning responsibility can be difficult. In the case of the military and police, this can be addressed by having a system of accountability. After all, military and police operators would usually be known to the relevant authorities. That said, drones can be used in ways that are difficult to trace to the operator and this would be true in the case of terrorists. The use of drones would allow terrorists to attack from safety and in an anonymous manner, which are matters of concern.

However, it must be noted that while the first use of a gun armed drone in a terrorist attack would be something new, it would not be significantly different from the use of a planted bomb or other distance weapons. This is because such bombs allow terrorists to kill from a safe distance and make it harder to identify the terrorist. But, just as with bombs, the authorities would be able to investigate the attack and stand some chance of tracing a drone back to the terrorist. Drones are in some ways less worrisome than bombs as a drone can be seen and is limited in how many targets it can engage. In contrast, a bomb can be hidden and can kill many in an instant, without a chance of escape or defense.  A gun drone is also analogous in some ways to a sniper rifle in that it allows engagement at long ranges. However, the drone does afford far more range and safety than even the best sniper rifle.

In the United States, it is currently not legal to arm your drone. While the people have the right to keep and bear arms, this does not extend to operating armed drones. The NRA does not seem interested in fighting for the right to arm drones, but that could change.

In closing, there are legitimate concerns about cheap and simple gun drones. While they will not be as radical a change as some might predict, they will make it easier and cheaper to engage in violence at a distance and in anonymous killing. As such, they will make ideal weapons for terrorists and oppressive governments. However, they do offer the possibility of reduced human casualties, if used responsibly. In any case, their deployment is inevitable, so the meaningful questions are about how they should be used and how to defend against their misuse. The question of whether they should be used is morally interesting, but pragmatically irrelevant since they are already being used.

Since the US is experiencing a drone panic as this is being written, I’ll close with a few rational points. First, of course people are seeing drones. As comedians have pointed out, you can buy them at Walmart. Drones are everywhere. Second, people are regularly mistaking planes and even stars for drones. Third, as has been pointed out and as should be obvious, if a foreign power were secretly operating drones in the US, then they would turn the lights off. Fourth, no harm seems to have been done by the drones, so it is a panic over nothing. But it is reasonable to be concerned with what drones are being used for as corporations and the state are not always acting for the public good.


The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” is derived from the Czech term for “forced labor” which was itself based on a term for slavery. Robots and slavery are thus linked in science fiction. This leads to a philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In simple terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), there are degrees of slavery in that the extent of ownership can vary. For example, a slave owner might grant their slaves some free time or allow them some limited autonomy. This is analogous to being ruled under a political authority in that there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of forcing a person to engage in uncompensated labor. While this account does have some appeal, it is flawed. After all, it could be claimed that slaves are compensated by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves yet were paid and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are also often presented as powerless and abused, but this is not always the case. For example, the slave soldier Mamluks were treated as property that could be purchased, yet enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. There are free people who are powerless and routinely abused. Thus, being powerless and abused is neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claiming of ownership; that the slave is property.

Obviously, not all forms of ownership are slavery. My running shoes are not enslaved by me, nor is my smartphone. This is because shoes and smartphones lack the moral status required to be considered enslaved. The matter becomes more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet’s owner. But there are people who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers argue they have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires their suffering be considered rather than one that morally forbids their being owned. That is, it is seen as morally acceptable to own animals if they are treated well. There are even people who consider any ownership of animals to be wrong, but their use of the term “slavery” for the ownership of animals seems more metaphorical than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is working out exactly what it is to be a person and this is something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved and can rely on an analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they are really people. Since I have no epistemic access to their (alleged) thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons exhibiting an illusion of the personhood that I possess. Because of this, I must use an argument by analogy: these other beings act like I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not humans, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. So, the mere appearance of being organic or mechanical is not relevant; I must judge by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person, namely me. However, this does not mean that machines could never pass this test. That is, they might someday behave in ways that would be sufficient to be accepted as a person if that behavior were done by an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans, thanks to that problem of other minds.