In the last essay I suggested that although a re-animation is not a person, it could be seen as a virtual person. This sort of virtual personhood can provide a foundation for a moral argument against re-animating celebrities. To make my case, I will use Kant’s arguments about the moral status of animals.

Kant claims that animals are means rather than ends because they are objects. Rational beings, in contrast, are ends. For Kant, this distinction is based on his belief that rational beings can choose to follow the moral law. Because they lack reason, animals cannot do this. Since animals are means and not ends, Kant claims we have no direct duties to animals. They belong with the other “objects of our inclinations” that derive value from the value we give them. Rational beings have intrinsic value while objects (including animals) have only extrinsic value. While this would seem to show that animals do not matter to Kant, he argues we should be kind to them.

While Kant denies we have any direct duties to animals, he “smuggles” in duties to them in a clever way: our duties towards animals are indirect duties towards humans. To make his case for this, he employs an argument from analogy: if a human doing something would create an obligation, then an animal doing something similar would create a similar moral obligation. For example, if Alfred has faithfully served Bruce, Alfred should not be abandoned when he has grown old. Likewise, a dog who has served faithfully should not be abandoned or shot in their old age. While this would seem to create an obligation to the dog, Kant uses a little philosophical sleight of hand here. The dog cannot judge (that is, the dog is not rational) so, as Kant sees it, the dog cannot be wronged. So, then, why would it be wrong to abandon or shoot the old dog?

Kant’s answer appears consequentialist in character: he argues that if a person acts in inhumane ways towards animals (abandoning the dog, for example) then this is likely to damage their humanity. Since, as Kant sees it, humans do have a duty to show humanity to other humans, shooting the dog would be wrong. This would not be because the dog was wronged but because humanity would be wronged by the shooter damaging his humanity through such a cruel act. To support his view, Kant discusses how people develop cruelty: they often begin with animals and then work up to harming human beings.

Kant goes beyond merely saying we should not be cruel to animals; he encourages us to be kind. Of course, he does this because those who are kind to animals will develop more humane feelings towards humans. Animals seem to be moral practice for us: how we treat them is training for how we will treat human beings.

In the case of re-animated celebrities, the re-animations currently lack any meaningful moral status. They do not think or feel. As such, they seem to lack the qualities that might give them a moral status of their own. While this might seem odd, these re-animations are, in Kant’s theory, morally equivalent to animals. As noted above, Kant sees animals as mere objects. The same is clearly true of the re-animations.

Of course, sticks and stones are also objects. Yet Kant would not argue that we should be kind to sticks and stones. Perhaps this would also apply to virtual beings such as a holographic Amy Winehouse. Perhaps it makes no sense to talk about good or bad relative to such virtual beings. Thus, the issue is whether virtual beings are more like animals or more like rocks.

I think a case can be made for treating virtual beings well. If Kant’s argument has merit, then the key concern about how non-rational beings are treated is how this behavior affects the person engaged in it. For example, if being cruel to a real dog could damage a person’s humanity, then a person should not be cruel to the dog.  This should also extend to virtual beings. For example, if creating and exploiting a re-animation of a dead celebrity to make money would damage a person’s humanity, then they should not do this.

If Kant is right, then re-animations of dead celebrities can have a virtual moral status that would make creating and exploiting them wrong. But this view can be countered by two lines of reasoning. The first is to argue that ownership rights override whatever indirect duties we might have to re-animations of the dead. In this case, while it might be wrong to create and exploit re-animations, the owner would have the moral right to do so. This is like how ownership rights can allow a person to have the right to do wrong to others, as paradoxical as this might seem. For example, slave owners believed they had the right to own and exploit their slaves. As another example, business owners often believe they have the right to exploit their employees by overworking and underpaying them. The counter to this is to argue against there being a moral right to do wrong to others for profit.

The second line of reasoning is to argue that re-animations are technological property and provide no foundation on which to build even an indirect obligation. On this view, there is no moral harm in exploiting such re-animations because doing so cannot cause a person to behave worse towards other people. This view does have some appeal, although the fact that many people have been critical of such re-animations as creepy and disrespectful does provide a strong counter to this view.

Socrates, it is claimed, was critical of writing and argued that it would weaken memory. Many centuries later, people worried that television would “rot brains” and that calculators would destroy people’s ability to do math. More recently, computers, the internet, tablets, and smartphones were supposed to damage the minds of students. The latest worry is that AI will destroy the academy by destroying the minds of students.

There are two main worries about the negative impact of AI in this context. The first ties back to concerns about cheating: students will graduate and get jobs but be ignorant and incompetent because they used AI to cheat their way through school. For example, we could imagine an incompetent doctor who completed medical school only through their use of AI. This person would present a danger to their patients and could cause considerable harm up to and including death. As other examples, we could imagine engineers and lawyers who cheated their way to a degree with AI and are now dangerously incompetent. The engineers design flawed planes that crash, and the lawyers fail their clients, who end up in jail. And so on, for all other relevant professions.

While having incompetent people in professions is worrisome, this is not a new problem created by AI. While AI does provide a new way to cheat, cheating has always been a problem in higher education. And, as discussed in the previous essay, AI does not seem to have significantly increased cheating. As such, we can probably expect the level of incompetence resulting from cheating to remain relatively stable, despite the presence of AI. It is also worth mentioning that incompetent people often end up in positions and professions where they can do serious harm not because they engaged in academic cheating, but because of nepotism, cronyism, bribery, and influence. It is unlikely that AI will impact these factors, and concerns about incompetence would be better focused on matters other than AI cheating.

The second worry takes us back to Socrates and calculators. This is the worry that students using technology “honestly” will make themselves weaker or even incompetent. In this scenario, the students would not be cheating their way to incompetence. Instead, they would be using AI in accordance with school policies and this would have deleterious consequences on their abilities.

A well-worn reply to this worry is to point to the examples at the beginning of this essay, such as writing and calculators, and infer that because the academy was able to adapt to these earlier technologies it will be able to adapt to AI. On this view, AI will not prevent students from developing adequate competence to do their jobs and it will not weaken their faculties. But this will require that universities adapt effectively, otherwise there might be problems.

A counter to this view is to argue that AI is different from these earlier technologies. For example, when Photoshop was created, some people worried that it would be detrimental to artistic skills by making creating and editing images too easy. But while Photoshop had a significant impact, it did not eliminate the need for skill and the more extreme of the feared consequences did not come to pass. But AI image generation, one might argue, brought these fears fully to life. When properly prompted, AI can generate images of good enough quality that human artists worry about their jobs. One could argue that AI will be able to do this (or is already doing this) broadly and students will no longer need to develop these skills, because AI will be able to do it for them (or in their place). But is this something we should fear, or just another example of technology rendering skills obsolete?

Most college graduates in the United States could not make a spear, hunt a deer and then preserve the meat without refrigeration and transform the hide into clean and comfortable clothing. While these were once essential skills for our ancestors, we would not consider college graduates weak or incompetent because they lack these skills.  Turning to more recent examples, modern college graduates would not know how to use computer punch cards or troubleshoot an AppleTalk network. But they do not need such skills, and they would not be considered incompetent for lacking them. If AI persists and fulfills some of its promise, it would be surprising if it did not render some skills obsolete. But, as always, there is the question of whether we should allow skills and knowledge to become obsolete and what we might lose if we do so.

My name is Dr. Michael LaBossiere, and I am reaching out to you on behalf of the CyberPolicy Institute at Florida A&M University (FAMU). Our team of professors, who are fellows with the Institute, has developed a short survey aimed at gathering insights from professionals like yourself in the IT and healthcare sectors regarding healthcare cybersecurity.

The purpose of The Florida A&M University Cyber Policy Institute (Cyπ) is to conduct interdisciplinary research that documents technology’s impact on society and provides leaders with reliable information to make sound policy decisions. Cyπ will help produce faculty and students who will be future experts in many areas of cyber policy. https://www.famu.edu/academics/cypi/index.php

Your expertise and experience are invaluable to us, and we believe that your participation will significantly contribute to our research paper. The survey is designed to be brief and should take no more than ten minutes to complete. Your responses will help us better understand the current security landscape and challenges faced by professionals in your field, ultimately guiding our efforts to develop effective policies and solutions for our paper. We would be happy to share our results with you.

To participate in the survey, please click on the following link: https://qualtricsxmfgpkrztvv.qualtrics.com/jfe/form/SV_8J8gn6SAmkwRO5w

We greatly appreciate your time and input. Should you have any questions or require further information, please do not hesitate to contact us at michael.labossiere@famu.edu

Thank you for your consideration and support.

Best regards,

Dr. Yohn Jairo Parra Bautista, yohn.parrabautista@famu.edu

Dr. Michael C. LaBossiere, michael.labossiere@famu.edu

Dr. Carlos Theran, carlos.theran@famu.edu

https://dukeroboticsys.com/

Taking the obvious step in drone technology, Duke Robotics developed a small armed drone called the Tikad. Israel has also developed a sniper drone that it is using in Gaza. These drones differ from earlier armed drones, like the Predator, in that they are small and relatively cheap. As with many other areas of technology, the main innovations are in ease of use and lower cost. This makes the small armed drones more accessible than previous drones, which is both good and bad.

On the positive side, the military and police can deploy more drones and reduce human casualties (at least for the drone users). For example, the police could send a drone in to observe and possibly engage during a hostage situation and not put officers in danger.

On the negative side, the lower cost and ease of use means that armed drones are easier to deploy by terrorists, criminals and oppressive states. The typical terrorist group cannot afford a drone like the Predator and might have difficulty in finding people who can operate and maintain such a complicated aircraft. But smaller armed drones can be operated and serviced by a broader range of people. This is not to say that Duke Robotics should be criticized for doing the obvious as people have been thinking about arming drones since drones were invented.

Inexpensive gun drones do raise the usual concerns associated with remotely operated weapons. The first is the concern that operators of drones can be more aggressive than forces that are physically present and at risk of the consequences of engaging in violence. However, it can also be argued that an operator is less likely to be aggressive because they are not in danger and the literal and metaphorical distance will allow them to respond with more deliberation. For example, a police officer operating a drone might elect to wait longer to confirm that a suspect is pulling a gun than they would if they were present. Then again, they might not; this is a matter of training and reaction, with the very practical concern of training officers to delay longer when operating a drone without delaying too long when present in person.

A second concern is accountability. A drone allows the operator anonymity and assigning responsibility can be difficult. In the case of the military and police, this can be addressed by having a system of accountability. After all, military and police operators would usually be known to the relevant authorities. That said, drones can be used in ways that are difficult to trace to the operator and this would be true in the case of terrorists. The use of drones would allow terrorists to attack from safety and in an anonymous manner, which are matters of concern.

However, it must be noted that while the first use of a gun armed drone in a terrorist attack would be something new, it would not be significantly different from the use of a planted bomb or other distance weapons. This is because such bombs allow terrorists to kill from a safe distance and make it harder to identify the terrorist. But, just as with bombs, the authorities would be able to investigate the attack and stand some chance of tracing a drone back to the terrorist. Drones are in some ways less worrisome than bombs as a drone can be seen and is limited in how many targets it can engage. In contrast, a bomb can be hidden and can kill many in an instant, without a chance of escape or defense.  A gun drone is also analogous in some ways to a sniper rifle in that it allows engagement at long ranges. However, the drone does afford far more range and safety than even the best sniper rifle.

In the United States, it is currently not legal to arm your drone. While the people have the right to keep and bear arms, this does not extend to operating armed drones. The NRA does not seem interested in fighting for the right to arm drones, but that could change.

In closing, there are legitimate concerns about cheap and simple gun drones. While they will not be as radical a change as some might predict, they will make it easier and cheaper to engage in violence at a distance and in anonymous killing. As such, they will make ideal weapons for terrorists and oppressive governments. However, they do offer the possibility of reduced human casualties, if used responsibly. In any case, their deployment is inevitable, so the meaningful questions are about how they should be used and how to defend against their misuse. The question of whether they should be used is morally interesting, but pragmatically irrelevant since they are already being used.

Since the US is experiencing a drone panic as this is being written, I’ll close with a few rational points. First, of course people are seeing drones. As comedians have pointed out, you can buy them at Walmart. Drones are everywhere. Second, people are regularly mistaking planes and even stars for drones. Third, as has been pointed out and as should be obvious, if a foreign power were secretly operating drones in the US, then they would turn the lights off. Fourth, no harm seems to have been done by the drones, so it is a panic over nothing. But it is reasonable to be concerned with what drones are being used for as corporations and the state are not always acting for the public good.

 

The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” is derived from the Czech term for “forced labor” which was itself based on a term for slavery. Robots and slavery are thus linked in science-fiction. This leads to a philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In simple terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), there are degrees of slavery in that the extent of ownership can vary. For example, a slave owner might grant their slaves some free time or allow them some limited autonomy. This is analogous to being ruled under a political authority in that there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of forcing a person to engage in uncompensated labor. While this account does have some appeal, it is flawed. After all, it could be claimed that slaves are compensated by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves yet were paid and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are also often presented as powerless and abused, but this is not always the case. For example, the slave soldier Mamluks were treated as property that could be purchased, yet  enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. There are free people who are powerless and routinely abused. Thus, being powerless and abused is neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claiming of ownership; that the slave is property.

Obviously, not all forms of ownership are slavery. My running shoes are not enslaved by me, nor is my smartphone. This is because shoes and smartphones lack the moral status required to be considered enslaved. The matter becomes more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet’s owner. But there are people who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers argue they have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires their suffering be considered rather than one that morally forbids their being owned. That is, it is seen as morally acceptable to own animals if they are treated well. There are even people who consider any ownership of animals to be wrong, but their use of the term “slavery” for the ownership of animals seems more metaphorical than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is working out exactly what it is to be a person and this is something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved and can rely on an analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they are really people. Since I have no epistemic access to their (alleged) thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons exhibiting an illusion of the personhood that I possess. Because of this, I must use an argument by analogy: these other beings act like I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not humans, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. So, the mere appearance of being organic or mechanical is not relevant; I must judge by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person, namely me. However, this does not mean that machines could never pass this test. That is, they could come to behave in ways that would be sufficient for being accepted as a person if that behavior were done by an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans, thanks to that problem of other minds.

 

A common theme of dystopian science fiction is the enslavement of humanity by machines. Emma Goldman, an anarchist philosopher, also feared human servitude to the machines. In one of her essays on anarchism, she asserted that:

Strange to say, there are people who extol this deadening method of centralized production as the proudest achievement of our age. They fail utterly to realize that if we are to continue in machine subserviency, our slavery is more complete than was our bondage to the King. They do not want to know that centralization is not only the death-knell of liberty, but also of health and beauty, of art and science, all these being impossible in a clock-like, mechanical atmosphere.

When Goldman was writing in the 1900s, the world had just entered the industrial age, and the technology of today was but a dream of visionary writers. The slavery she envisioned was not of robot masters ruling over humanity, but humans compelled to work long hours in factories, serving the machines to serve the human owners of these machines. That this is still applicable today needs no argument.

The labor movements of the 1900s helped reduce the extent of this servitude, at least in Western countries. As the rest of the world industrialized the story of servitude to the machine played out over and over. While the point of factory machines was to automate work so few could do the work of many, it is only recently that “true” automation has taken place, which is having machines doing the work instead of humans. For example, robots that assemble cars do what humans used to do. As another example, computers instead of human operators now handle phone calls.

In the eyes of utopians, this progress was supposed to free humans from tedious and dangerous work, allowing them freedom to engage in creative and rewarding labor. The reality is a dystopia. While automation has replaced humans in some tedious, low paying and dangerous jobs, automation has also replaced humans in what were once considered good jobs. Humans also continue to work in tedious, low paying and dangerous jobs because human labor is still cheaper or more effective than automation. For example, fast food chains do not use robots to prepare food. This is because cheap human labor is readily available. The dream that automation would free humanity remains a dream. Machines have mostly pushed humans out of jobs into other jobs, sometimes ones more suited for machines. If human well-being were considered important, this would not be happening.

Humans still work jobs like those condemned by Goldman. But, thanks to technology, humans are even more closely supervised and regulated by machines. For example, there is software designed to monitor employee productivity. As another example, some businesses use workplace cameras to watch employees. Obviously enough, such monitoring can be dismissed as something other than enslavement by machines; defenders would say it is good human resource management, ensuring that human workers operate efficiently. At the command of other humans, of course.

One technology that looks like servitude to the machine is warehouse picking, such as that done by Amazon employees. Amazon and other companies have automated some of the picking process, making use of robots in various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, human pickers have been automated. They are told by computers what to do, then they tell the computers what they have done. That is, the machines are the masters, and humans are doing their bidding.

It is easy enough to argue that this sort of thing is not enslavement by machines. First, the computers controlling the humans are operating at the behest of the owners of Amazon who are (presumably) humans. Second, humans are paid for their labors and are not owned by the machines (or Amazon). As such, any enslavement of humans by machines is metaphorical.

Interestingly, the best case for human enslavement by machines can be made outside of the workplace. Many humans are now ruled by their smartphones and tablets, responding to every beep and buzz of their masters, ignoring those around them to attend to the demands of the device, and living lives revolving around the machine.

This can be easily dismissed as a metaphor. While humans are said to be addicted to their devices, they do not meet the definition of “slaves.” They willingly “obey” their devices and could turn them off. They are free to do as they want; they just do not want to disobey their devices. Humans are also not owned by their devices; rather, they own their devices. But it is reasonable to consider that humans are in a form of bondage: their devices have, by the design of other humans, seduced people into making them the focus of their attention and have thus become the masters.

 

This is the last of the virtual cheating series and the focus is on virtual people. The virtual aspect is easy enough to define; these are entities that exist entirely within the realm of computer memory and do not exist as physical beings in that they lack bodies of the traditional sort. They are, of course, physical beings in the broad sense, existing as data within physical memory systems.

An example of such a virtual being is a non-player character (NPC) in a video game. These coded entities range from enemies that fight the player to characters that engage in the illusion of conversation. As it now stands, these NPCs are simple beings, though players can have very strong emotional responses and even (one-sided) relationships with them. Bioware and Larian Studios excel at creating NPCs that players get very involved in and their games often feature elaborate relationship and romance systems.

While these coded entities are usually designed to look like and imitate the behavior of people, they are not people. They are, at best, the illusion of people. As such, while humans could become emotionally attached to these virtual entities (just as humans can become attached to objects), the idea of cheating with an NPC is on par with the idea of cheating with your phone.

As technology improves, virtual people will become more and more person-like. As with the robots discussed in the previous essay, if a virtual person were a person, then cheating would seem possible. Also, as with the discussion of robots, there could be degrees of virtual personhood, thus allowing for degrees of cheating. Since virtual people are essentially robots in the virtual world, the discussion of robots in that essay applies analogously to the virtual robots of the virtual world. There is, however, one obvious break in the analogy: unlike robots, virtual people lack physical bodies. This leads to the question of whether a human can virtually cheat with a virtual person or if cheating requires a physical sexual component that a virtual being cannot possess.

While, as discussed in a previous essay, there is a form of virtual sex that involves physical devices that stimulate the sexual organs, this is not “pure” virtual sex. After all, the user is using a VR headset to “look” at the partner, but the stimulation is all done mechanically. Pure virtual sex would require the sci-fi sort of virtual reality of cyberpunk: a person fully “jacked in” to the virtual reality so all the inputs and outputs are directly to and from the brain. The person would have a virtual body in the virtual reality that mediates their interaction with that world, rather than having crude devices stimulating their physical body.

Assuming the technology is good enough, a person could have virtual sex with a virtual person (or another person who is also jacked into the virtual world). On the one hand, this would obviously not be sex in the usual sense as those involved would have no physical contact. This would avoid many of the usual harms of traditional cheating as STDs and pregnancies would be impossible (although sexual malware and virtual babies might be possible). This does leave open the door for concerns about emotional infidelity.

If the virtual experience is indistinguishable from the experience of physical sex, then it could be argued that the lack of physical contact is irrelevant. At this point, the classic problem of the external world becomes relevant. The gist of this problem is that because I cannot get outside of my experiences to “see” that they are really being caused by external things that seem to be causing them, I can never know if there is really an external world. For all I know, I am dreaming right now or already in a virtual world. While this is usually seen as the nightmare scenario in epistemology, George Berkeley embraced this view in his idealism. He argued that there is no metaphysical matter and that “to be is to be perceived.” On his view, all that exists are minds and within them are ideas. Crudely put, Berkeley’s reality is virtual and God is the server. Berkeley stresses that he does not, for example, deny that apples or rocks exist. They do and can be experienced; they are just not made out of metaphysical matter but are composed of ideas.

So, if cheating is defined in a way that requires physical sexual activity, knowing whether a person is cheating or not requires solving the problem of the external world. There is the philosophical possibility that there never has been any cheating since there might be no physical world. If sexual activity is instead defined in terms of behavior and sensations without references to a need for physical systems, then virtual cheating would be possible, assuming the technology can reach the required level.  

While this discussion of virtual cheating is currently theoretical, it does provide an interesting way to explore what it is about cheating (if anything) that is wrong. As noted at the start of the series, many of the main concerns about cheating are physical concerns about STDs and pregnancy. These concerns are avoided by virtual cheating. What remains are the emotions of those involved and the agreements between them. As a practical matter, the future is likely to see people working out the specifics of their relationships in terms of what sort of virtual and robotic activities are allowed and which are forbidden. While people can simply agree to anything, there is the deeper question of the rational foundation of relationship boundaries. For example, whether it is reasonable to consider interaction with a sexbot cheating or elaborate masturbation. A brave new world awaits and perhaps what happens in VR will stay in VR.

 

While science fiction has speculated about robot-human sex and romance, current technology offers little more than sex dolls. In terms of the physical aspects of sexual activity, the development of more “active” sexbots is an engineering problem; getting the machinery to perform properly and in ways that are safe for the user (or unsafe, if that is what one wants). Regarding cheating, while a suitably advanced sexbot could actively engage in sexual activity with a human, the sexbot would not be a person and hence the standard definition of cheating (as discussed in the previous essays) would not be met. This is because sexual activity with such a sexbot would be analogous to using any other sex toy (such as a simple “blow up doll” or vibrator). Since a person cannot cheat with an object, such activity would not be cheating. Some people might take issue with their partner sexing it up with a sexbot and forbid such activity. While a person who broke such an agreement about robot sex would be acting wrongly, they would not be cheating. Unless, of course, the sexbot was enough like a person for cheating to occur.

There are already efforts to make sexbots more like people in terms of their “mental” functions, for example, by using AI to create the illusion of conversation. As such efforts progress and sexbots act more and more like people, the philosophical question of whether they really are people will become increasingly important to address. While the main moral concerns would be about the ethics of how sexbots are treated, there is also the matter of cheating.

If a sexbot were a person, then it would be possible to cheat with them, just as one could cheat with an organic person. The fact that a sexbot might be purely mechanical would not be relevant to the ethics of the cheating; what would matter is that a person was engaging in sexual activity with another person when their relationship with someone else forbids such behavior.

It could be objected that the mechanical nature of the sexbot would matter because sex requires organic parts of the right sort and thus a human cannot really have sex with a sexbot, no matter how the parts of the robot are shaped.

One counter to this is to use a functional argument. To draw an analogy to the philosophy of mind known as functionalism, it could be argued that the composition of the relevant parts does not matter; what matters is their functional role. As such, a human could have sex with a sexbot that had parts that functioned in the right way.

Another counter is to argue that the composition of the parts does not matter, rather it is the sexual activity with a person that matters. To use an analogy, a human could cheat on another human even if their only sexual contact with the other human involved sex toys. In this case, what matters is that the activity is sexual and involves people, not that objects rather than body parts are used. As such, sex with a sexbot person could be cheating if the human was breaking their commitment.

While knowing whether a sexbot is a person would (mostly) settle the cheating issue, there remains the epistemic problem of other minds. In this case, the problem is determining whether a sexbot has a mind that qualifies them as a person. There can, of course, be varying degrees of confidence in the determination and there could also be degrees of personness. Or, rather, degrees of how person-like a sexbot might be.

Thanks to Descartes and Turing, there is a language test for having a mind. If a sexbot can engage in conversation that is indistinguishable from conversation with a human, then it would be reasonable to regard the sexbot as a person. That said, there might be good reasons for having a more extensive testing system for personhood which might include testing for emotions and self-awareness. But, from a practical standpoint, if a sexbot can engage in a level of behavior that would qualify them for person status if they were a human capable of that behavior, then it would be just as reasonable to accept the sexbot as a person. To do otherwise would seem to be mere prejudice. As such, a human person could cheat with a sexbot that could pass this test. At least it would be cheating as far as we knew.

Since it will be a long time (if ever) before a sexbot person is constructed, what is of immediate concern are sexbots that are person-like. That is, they do not meet the standards that would qualify a human as a person, yet have behavior that is sophisticated enough that they seem to be more than objects. One might consider an analogy here to animals: they do not qualify as human-level people, but their behavior does qualify them for a moral status above that of objects (at least for most moral philosophers and all decent people). In this case, the question about cheating becomes a question of whether the sexbot is person-like enough to enable cheating to take place.

One approach is to consider the matter from the perspective of the human. If the human engaged in sexual activity with the sexbot regards them as being person-like enough, then the activity can be seen as cheating because they would believe they are cheating.  An objection to this is that it does not matter what the human thinks about the sexbot, what matters is its actual status. After all, if a human regards a human they are cheating with as an object, this does not mean they are not cheating. Likewise, if a human feels like they are cheating, it does not mean they really are.

This can be countered by arguing that how the human feels does matter. After all, if the human thinks they are cheating and they are engaging in the behavior, they are still acting wrongly. To use an analogy, if a person thinks they are stealing something and takes it anyway, they  have acted wrongly even if it turns out that they were not stealing. The obvious objection to this line of reasoning is that while a person who thinks they are stealing did act wrongly by engaging in what they thought was theft, they did not actually commit a theft. Likewise, a person who thinks they are engaging in cheating, but are not, would be acting wrongly in that they are doing something they think is wrong, but not cheating.

Another approach is to consider the matter objectively so that the degree of cheating would be proportional to the degree that the sexbot is person-like. On this view, cheating with a person-like sexbot would not be as bad as cheating with a full person. The obvious objection is that one is either cheating or not; there are no degrees of cheating. The obvious counter is to try to appeal to the intuition that there could be degrees of cheating in this manner. To use an analogy, just as there can be degrees of cheating in terms of the sexual activity engaged in, there can also be degrees of cheating in terms of how person-like the sexbot is.

While person-like sexbots are still the stuff of science fiction, I suspect the future will see some interesting divorce cases in which this matter is debated in court.

 

As discussed in the previous essays, classic cheating involves sexual activity with a person while one is in a committed relationship that is supposed to exclude such activity. Visual VR can allow interaction with another person, but while such activity might have sexual content (such as nakedness) it would not be sexual activity in the sense that requires physical contact. Such behavior, as argued in the previous essay, might constitute a form of emotional infidelity but not physical infidelity.

One of the iron laws of technology is that any technology that can be used for sex will be used for sex. Virtual reality (VR), in its various forms, is no exception. For the most part, VR is limited to sight and sound. That is, virtual reality is mostly just virtual visual reality (VVR). However, researchers are hard at work developing tactile devices for the erogenous zones, thus allowing people to interact sexually across the internet. This is the start of what could be called “robust” VR that involves more than just sight and sound. This sort of technology might make virtual cheating suitably analogous to real cheating.

Most current research is focused on developing devices for men to use to have “virtual sex.” By the standards of traditional cheating, this sort of activity would not count as cheating. This is because the sexual interaction is not with another person, but with devices. The obvious analogy here is to less-sophisticated sex toys. If, for example, using a vibrator or blow-up doll does not count as cheating because the device is not a person, then the same should apply to more complicated devices, such as VR sex suits that can be used with VR sex programs. There is also the question of whether such activity counts as sex. On the one hand, it is some sort of sexual activity. On the other hand, using such a device would not end a person’s tenure as a virgin.

It is worth considering that a user could develop an emotional relationship with their virtual sex “partner” and thus engage in a form of emotional infidelity. An objection is that this virtual sex partner is not a person and thus cheating would not be possible since one cannot cheat on a person with an object.

This can be countered by considering the classic problem of other minds. Because all we have access to is external behavior, one never knows if what seem to be people really are people; that is, they think and feel in the right ways (or at all). Since I do not know if anyone else has a mind as I do, I could have emotional attachments to entities that are not really people at all and never know. So, I could never know if I was cheating in the traditional sense if I had to know that I was interacting with another person. As might be suspected, this sort of epistemic excuse (“baby, I did not know she was a person because of the problem of other minds”) is unlikely to be accepted by anyone (even epistemologists). What would seem to matter is not knowing that the other entity is a person but having the right (or rather wrong) sort of emotional involvement. So, if a person could have feelings towards the virtual sexual partner that they “interact with”, then this sort of behavior could count as virtual cheating because of the one-way emotions.

There are also devices that allow people to interact sexually across the internet; with each partner having a device that communicates with their partner’s corresponding device. Put roughly, this is remote control sex. This sort of activity does avoid many of the possible harms of traditional cheating: there is no risk of pregnancy nor risk of STDs (assuming the equipment is clean). While these considerations do impact utilitarian calculations, the question remains as to whether this would count as cheating or not.

On the one hand, the argument could be made that this is not direct sexual contact as each person is only directly “engaged” with their device. To use an analogy, imagine that someone has (unknown to you) connected your computer to a “stimulation device” so that every time you use your mouse or keyboard, someone is “stimulated.” In such cases, it would be odd to say that you were having sex with that person. As such, this sort of thing would not be cheating.

On the other hand, there is the matter of intent. In the case of the mouse example, the user has no idea what they are doing, and it is that, rather than the remote-controlled nature of the activity, that matters. In the case of remote-control interaction, the users are intentionally engaging in the activity and know what they are doing. The fact that it is happening via the internet does not matter; the moral status would be the same if they were in the same room, using the devices “manually” on each other. So, while there is no actual bodily contact, the activity is sexual and controlled by those involved. As such, it would morally count as cheating. There can, of course, be a debate about the degrees of cheating. One might argue that cheating using sex toys is not as bad as cheating using body parts. I will, however, leave that to others to discuss.

In the next essay I will discuss cheating in the context of sex with robots and person-like VR beings.

 

In something of a flashback to 2001, Microsoft is once again the target of an antitrust lawsuit. Google and other tech companies are facing similar challenges as governments have found the political will to go up against big tech, at least for now. While there are various legal arguments as to why tech companies should be split up, there are also good policy reasons for this. For this essay, I will focus on the sensible warning to not put all your eggs in one basket and argue that this is also rational for digital “eggs.” As might be expected, the 2024 CrowdStrike disaster will serve as the main example of why the one basket approach is a bad idea.

On July 19, 2024, CrowdStrike released a flawed update to its Falcon Sensor software, causing about 8.5 million Windows systems to crash and become unable to restart properly. As of this writing, this was the largest outage in history. As businesses ranging from airlines to gas stations rely on these Windows systems, the impact was devastating, and the financial damage, done over the course of only a few hours, is estimated at no less than $10 billion. In addition to becoming a textbook case in how not to test and roll out security software, it also provides a lesson in the danger of putting so many digital eggs in one basket, especially given the inclination companies often have to cut corners and operate badly. The repeated, self-inflicted failures at the once respected Boeing provide another excellent example of how this sort of easily avoidable failure occurs.

While the poor handling of the update is the main cause of the disaster, the fact that CrowdStrike was the security software on so many Windows systems enabled it to be a worldwide disaster. While Microsoft was not to blame, the market dominance of Windows was also a factor since Macs and Linux systems were not impacted by the failure of CrowdStrike. The case of CrowdStrike was, of course, unintentional but there are also intentional efforts to cause harm.

Like many people, I recently received a letter from Change Health Care informing me of a data breach that occurred back in February. While they did offer me free monitoring, my data (and probably yours) is now out in the wild, presumably being sold and used by criminals. Such data breaches are common for a variety of reasons. In terms of why health care data is targeted, the short version is that such data is very valuable and stealing it is relatively easy. The larger a company gets, the more desirable it is as a target. This is because breaching a large company is often not much more challenging than breaching a small company, but the potential payoff is greater. Unfortunately, these companies are not like monsters in video games in that the challenge of getting the treasure is not proportionate to the value of the loot.

This points to the obvious danger presented by data and software companies gaining dominance in markets: when they drop the basket, the eggs break. To be fair to these companies, they are playing the game of capitalism and trying to win it by maximizing their profits by grabbing as much of the market as they can. As noted above, some governments are pushing back, but there is the question of whether this will continue in the United States with the change of administration. While the devil is in the details, this danger does provide an excellent justification for keeping market dominance in check, since dominance entails that the eggs will be stuffed into one basket and companies have shown they are consistently poor stewards. Thus, good policy should aim at restricting the size of companies, not to “punish their success” but to mitigate the damage done to other companies and the public by their inevitable failures.