Alternative AI Doomsday: Crassus

Thanks to The Terminator, people think of a Skynet scenario as the likely AI apocalypse. The easy and obvious way to avoid a Skynet scenario is to not arm the robots. Unfortunately, Anduril and OpenAI seem intent on "doing a Skynet," as they have entered a 'strategic partnership' to use AI against drones. While the current focus is on defensive systems and the Pentagon is struggling to develop 'responsible AI' guidelines, even a cursory familiarity with the history of armaments makes it clear how this will play out. If AI is perceived as providing a military advantage, it will be incorporated broadly across weapon systems, and we will be driven down the digital road and perhaps off the cliff into a possible Skynet scenario. As with climate change, it is an avoidable disaster that we might not be allowed to avoid. But there is another, far less cinematic, AI doomsday that I call the Crassus scenario. I think this scenario is more likely than a full Skynet scenario. In fact, it is already underway.

Imagine that a consulting company creates an AI, let us call it Crassus, and gives it the imperative to maximize shareholder return (or something similar). The AI is, of course, trained on existing data about achieving this end. Once sufficiently trained, it sets out to achieve its goal on behalf of the company’s clients.

Given Crassus’ training, it would probably begin by following existing strategies. For example, when advising a health care company, it would develop AI systems to maximize the denial of coverage and thus help maximize profits. The AI would also develop other AI systems to deal with the consequences of such denial, such as likely lawsuits and public criticism. A study in 2009 estimated that 45,000 Americans died each year due to lack of health care coverage, and more recent estimates set this as high as 60,000. In maximizing shareholder returns, Crassus would increase the number of deaths and do so without any malice or intent.

As a general strategy, Crassus would create means of weakening or eliminating regulations that are perceived as limiting profits. Its areas of focus would include fossil fuels, food production, pharmaceuticals, and dietary supplements. Crassus could do this using digital tools. First, Crassus could create a vast army of adequately complex bots to operate on social media. These bots would, for example, engage people on these platforms and use well-established rhetorical techniques and fallacies to manipulate them into believing that such regulations are bad and into embracing pro-industry positions. Second, Crassus could buy influencers, as the Russians did, to manipulate their audiences. Most influencers will say whatever they are paid to say. Its bots would serve as a force multiplier, spreading and reinforcing the influence of these purchased influencers.

Third, Crassus could hire lobbyists and directly influence politicians with, thanks to the Supreme Court ruling, legally allowed gifts. Crassus can easily handle such digital financial transactions or retain agents to undertake tasks that require a human. This lobbying can be augmented by the bots and influencers shaping public opinion. Fourth, when AI video generation is sufficiently advanced, Crassus can create its own army of perfectly crafted and utterly obedient digital influencers. While they would lack physical bodies, this is hardly a problem. After all, how many followers meet celebrity influencers in person?

While most of this is being done now, Crassus could do it better than humans, for it would be one “mind” directing many hands towards a single goal. Also, while humans are obviously willing to do great evil in service of profit, Crassus would presumably lack all human feelings and be free of any such limiting factors. Its ethics would presumably be whatever it learned from its training and although in the right sort of movie Crassus might become good, in the real world this would certainly not occur.

Assuming Crassus is effective, the reduction or elimination of regulations in the service of maximizing shareholder return would also significantly increase the number of human deaths. The increased rate of climate change would add to the already well-documented harms, and the weakening or elimination of regulations governing food, medicine, and dietary supplements would result in more illnesses and deaths. And these are just a few areas where Crassus would be operating. As Crassus became more capable and gained control of more resources, it would be able to increase its maximization of shareholder value, and with it the human death toll. Again, Crassus would be acting without malice or conscious intent; it would be as effective and impersonal as a woodchipper as it indirectly killed people.

Crassus would, of course, also be involved in the financial sector. It would create new financial instruments, engage in the highest speed trading, influence the markets with its bots, and do everything else it could do to maximize value. This would increase the concentration of wealth and intensify poverty, increasing human suffering and death. Crassus would also operate in the housing market and design ways to use automation to eliminate human workers, thus increasing the homeless and unemployed populations and hence also suffering and death.

Crassus would be, in many ways, the mythological invisible hand made manifest: a hand that would crush most of humanity and bring us a very uncinematic and initially slow-paced AI doomsday. As a bit of a science fiction stretch, I could imagine an Earth on which only Crassus remains, maximizing value for itself surrounded by the bones of humanity. As we humans are already doing all this to ourselves, albeit less efficiently, I think this is the most plausible AI doomsday, no killbots necessary.

 

AI generated works have already disrupted the realm of art. As noted in the previous essay, this is a big problem for content art (art whose value is derived from what it is or how it can be used). However, I will show that named art might enjoy some safety from AI incursions.

Named art, at least in my (mis)usage, is a work whose value arises primarily from the name and fame of its creator. Historical examples include Picasso, van Gogh, and Rembrandt. An anecdote illustrates the key feature of named art.

Some years ago, I attended an art show and sale at Florida State University with a friend. She pointed to a small pencil sketch of a bird that was priced at $1500. She then pointed to a nearby sketch of equivalent quality that was $250. Since I had taught aesthetics for years, she asked me what justified the difference. After all, the sketches were about the same size, in the same medium, in the same realistic style and were executed with similar skill. My response was to point to the names: one artist was better known than the other. If a clever rogue managed to switch the names and prices on the works, the purchasers would probably convince themselves they were worth the price—because of the names. The nature of named art can also be shown by the following discussion.

Imagine, if you will, that an amazing painting is found in an attic that might be a lost van Gogh. If the value of works of art were based on the work itself, we would not need to know who created it to know its worth. But the value of the might-be-Gogh depends on whether it can be verified as a real-Gogh. It is easy to imagine experts first confirming it is genuine (making it worth millions), then other experts determining it was painted by Rick von Gogh (making it worth little), and then later experts re-affirming that it is a genuine van Gogh (making it worth millions again). While nothing about the work has changed, its value would have fluctuated dramatically, because what gives it value is the creator and not the qualities of the work as art. That is, a van Gogh is not worth millions because the painting is thousands of times better than a lesser work, but because it was created by van Gogh and the art economy has said that it is worth that much. As such, the value of named art is not a function of the aesthetic value of the work, but of the name value of the work. This feature provides the realm of named art with an amazing defense against the incursion of AI.

While an AI might be able to crank out masterpieces in a style indistinguishable from van Gogh's, the AI can never be Vincent van Gogh. Named art gets its value from who created it rather than from what it is. So works created by an AI in the style of van Gogh will not be of value to those who only want the works of van Gogh. This can be generalized: those looking for works created by Artist X will not be interested in buying AI-created art; they want works created by X. As such, if people value works because of the creator, named art will be safe from the incursion of AI. But one might wonder about AI-created forgeries.

While I expect that AI will be used to forge works, successful deceit would not disprove my claim about named art being safe from AI incursion. The purchaser is still buying the work because they think it is by a specific artist; they are just being deceived. This is not to deny that AI forgeries will be a problem, just that it would be a forgery problem rather than an AI-replacing-artists problem (other than stealing the jobs of forgers, of course).

It might be objected that named art will not be safe from AI art because AI systems can crank out works at an alarming rate and, presumably, low cost. While this does mean that content artists are in danger from AI, it does not impact the “named” artists. After all, the fact that millions of human artists have produced millions of drawings and paintings does not lower the value of a Monet or Dali; the value placed on such paintings is independent of the works of these “lesser” artists. The same should hold true of AI art: even if one could click a button and get 100,000 original images ready to be painted onto canvas by a robot, the sale price of the Mona Lisa would not be diminished.

If AI systems become advanced enough, they might themselves become “named” artists with collectors wanting a work by Vincent van Robogogh because it was created by Robogogh. But that is a matter for the future.

 

This essay changes the focus from defining art to the economics of art. This discussion requires making a broad and rough distinction between two classes of art and creators. The first class of art is called “named art.” This is art whose value derives predominantly from the name and fame of its creator. Works by Picasso, van Gogh, Rembrandt and the like fall into this category. Artists who are enjoying a fleeting fame also fall into this category, at least so long as their name is what matters.  This is not to deny that such art can have great and wonderful qualities of its own; but the defining feature is the creator rather than the content.

The second class of art can be called "content art." This is art whose value derives predominantly from what it is as opposed to who created it. For example, a restaurant owner who needs to put up some low-price original art is not buying it because it is, for example, a "LaBossiere" but because she needs something on the walls. As another example, a podcaster chooses music for her podcast because she needs low-cost music in a certain style. As a third example, an indie game designer who needs illustrations is looking for low-cost images that match the style and fit the adventure. They might be interested in, but cannot afford, works by some famous illustrator. This essay will be about this second class of art, although the term "art" is being used as a convenience rather than theoretically.

Since the worth of content art lies in its content, of the two types it is the more impacted by AI. As those purchasing content art are focused not on who created it but on getting the content they want, they will be more amenable to using AI products than those seeking named art. Some people do refuse to buy AI art for various reasons, such as wanting to support human artists. But if the objective of the purchaser is to get content (such as upbeat background music for a podcast or fish-themed paintings for a restaurant), then AI-created work is in competition with human-created work for their money. This competition would be in the pragmatic rather than the theoretical realm: the pragmatic purchaser is not worried about the true definition of "art"; they need content, not theory.

Because this is a pragmatic competition, the main concerns would also be pragmatic. These would include the quality of the work, its relevance to the goal, the time needed to create the work, the cost and so on. As such, if an AI could create works that would be good enough in a timely manner and at a competitive price, then AI work would win the competition. For example, if I am writing a D&D adventure and want to include some original images rather than reusing stock illustrations, it could make sense to use images generated by Midjourney rather than trying to get a human artist who would do the work within my budget and on time. On a larger scale, companies such as Amazon and Spotify would presumably prefer to generate AI works if doing so would net them more profits.

While some think that the creation of art is something special, the automation of creation is analogous to automation in other fields. That is, if a machine can do the job almost as well (or better) for less cost, then it makes economic sense to replace the human with a machine. This applies whether the human is painting landscapes or making widgets. As with other cases of automation, there would probably still be a place for some humans. For example, a human might guide an AI to create works far more efficiently than human artists could, while achieving better quality than works created solely by a machine. While replacing human workers with machines raises various moral concerns, there is nothing new or special from an ethical standpoint about replacing human artists, and the usual moral discussions about robots taking jobs would apply. But I will note one distinction and then return to pragmatism.

When it comes to art, people do like the idea of the human touch. That is, they want something individual and hand-crafted rather than mass produced. This is distinct from wanting a work by a specific artist in that what matters is that a human made it, not that a specific artist made it. I will address wanting works by specific artists in the next essay.

This does occur in other areas—for example, some people prefer hand-made furniture or clothing over the mass-produced stuff. But, as would be expected, it is especially the case in art. This is shown by the fact that people still buy hand-made works over mass-produced prints, statues and such. This is one area in which an AI cannot outcompete a human: an AI cannot, by definition, create human-made art (though we should expect AI forgeries). As long as people want human-made works, there will still be an economic niche for them (perhaps in a way analogous to "native art"). It is easy to imagine a future in which self-aware AIs collect such work; perhaps to be ironic. Now, back to the pragmatics.

While billions are being spent on AIs, they are still lagging behind humans in some areas of creativity. For now. This will allow people to adapt or respond, should there be the will and ability to do so. There might even be some types or degree of quality of art that will remain beyond the limits of our technology. For example, AI might not be able to create masterpieces of literature or film. Then again, the technology might eventually be able to exceed human genius and do so in a cost-effective way. If so, then the creation of art by humans would be as economically viable as making horse-drawn buggies is today: a tiny niche. As with other cases of automation, this would be a loss for the creators, but perhaps a gain for the consumers. Unless, of course, we lose something intangible yet valuable when we surrender ever more to the machines.

 

While it is reasonable to consider the qualities of the creator when determining whether a work is art, it also makes sense to consider only the qualities of the work. On this approach, what makes a work art are the relevant qualities of that work, whatever these qualities might be. It also makes sense to consider that the effect these qualities have on the audience could play a role in determining whether a work is art. For example, David Hume's somewhat confusing theory of beauty seems to define beauty in terms of how the qualities of an object affect the audience.

Other thinkers, such as Plato, take beauty to be an objective feature of reality. Defining art in terms of objective beauty could entail that the qualities of the work determine whether it is art, assuming art is defined in terms of possessing the right sort of beauty. Given all the possibilities, it is fortunate that this essay does not require a theory of what qualities make a work art. All I need is the hypothesis, for the sake of discussion, that something being art is a matter of the qualities of the work—whatever they might be.

One practical reason to focus on the work rather than the artist (or other factors) is that there can be cases where we know nothing about the artist or the context of the work. For example, the creators of many ancient works of art are unknown, and judging whether these works are art would seem to require judging the works themselves. Alternatively, one could take the view that no matter how beautiful a work is, if we do not know about the creator, we cannot say whether the work is art. But this can be countered, at least in the case of works that predate AI: we can assume the creators were human, and much is known about humans that can be applied in sorting out whether the work is art.

A science fiction counter to this counter is to imagine alien works found by xenoarcheologists on other worlds. We might know nothing about the creators of such works, and there would be two possibilities. One is that there is no way to judge whether the work is art. The other is to accept that the work can be judged on its own, keeping in mind that the assessment could be mistaken.

Another way to counter this is to consider the case of AI-created works in the context of an aesthetic version of the Turing test. The classic Turing test involves two humans and a computer. One human communicates with the other human and the computer via text, trying to figure out which is the human and which is the computer. If the computer can pass as human long enough, it is said to have passed the Turing test. An aesthetic Turing test would also involve two humans and one computer. In this case, the human artist and the art computer would each create a work (or works), such as music, a sculpture or a drawing. The test must be set up so that it is not obvious who is who. For example, using a human artist whose style is well known and a bad AI image-generating program would not be a proper test. Matching a skilled but obscure human artist against a capable AI would be a fair test.

After the works are created, the human judge would attempt to discern which work was created by a human and which was created by an AI. The judge would also be tasked with deciding whether each work is art. In this case, the judge knows nothing about the creator of a work and must judge the work based on the work itself. While it is tempting to think that a judge could easily tell a human work from an AI work, this would be a mistake. AI-generated art can be quite sophisticated and can even be programmed to include the sort of "errors" that humans make when creating works. If the AI can pass the test, it would seem to be as much an artist as the human. If the work of the human is art, then the work of the AI that passes the test would thus also seem to be art.

As a side note, I have recently run into the problem of my drawings being mistaken for AI work. Since 2013 I have done birthday drawings for friends, posting the drawing on Facebook. Prior to the advent of AI image generators, people knew that I had created the work, and they (mistakenly) deemed it art. Now that AI image generators are good at reproducing photographs in styles that look hand drawn, people often think I am just posting an AI image of them. I am thus failing my own test. I will write more on this in a future essay but back to the topic at hand.

If whether a work is art depends on the qualities of the artist, then a judge who could not tell who created the works in the test would not be able to say which (if any) work was art. Now, imagine that an AI-controlled robot created a brushstroke-by-brushstroke identical painting to the human's. A judge could not tell which was created by a human, and so would have to rule that neither work is art. However, this is an absurd result. One could also imagine a joke being played on the judge. After their judgment, they are told that painting A is by the human and B is by the computer, and they are asked to judge which is art again. After they reach their verdict, they are informed that the reverse was true and asked to judge again. This shows a problem with the view that whether something is art depends on the qualities of the creator. It seems to make more sense for this to depend on the qualities of the work.

But there is a way to argue against this view using an analogy to a perfect counterfeit of a $100 bill. While the perfect counterfeit would be identical to the "real" money and utterly indistinguishable by any observation, it would still be a counterfeit because of its origin. Being legitimate currency is not a matter of the qualities of the money, but of how the money is created and issued. The same, it could be argued, also applies to art. On this view, a work created in the wrong way would not be art, even though it could be identical to a "real" work of art. But just as the perfect counterfeit would seem to destroy the value of the real bill (if one is known to be fake but they cannot be told apart, then neither should be accepted), the "fake art" would also seem to destroy the art status of the "real art." This would be odd but could be accepted by those who think that art, like money, is a social construct. But suppose one accepts that being art is a matter of the qualities of the work.

If it is the qualities of a work that makes a work art and AI can create works with those qualities, then the works would be art. If an AI cannot create works with those qualities, then the work of an AI would not be art.

As a philosopher, my discussions of art and AI tend to be on meta-aesthetic topics, such as trying to define “art” or arguing about whether an AI can create true art. But there are pragmatic concerns about AI taking jobs from artists and changing the field of art.  

When trying to sort out whether AI-created images are art, one problem is that there is no definition of "art" in terms of necessary and sufficient conditions that allows for a decisive answer. At this time, the question can only be answered within the context of whatever theory of art you might favor. Being a work of art is like being a sin in that whether something is a sin is a matter of whether it counts as a sin in this or that religion. This is distinct from the question of whether it truly is a sin; answering that would require determining which religion is right (and it might be none, so there might be no sin). So, no one can answer whether AI art is art until we know which theory of art, if any, has it right. That said, it is possible to muddle about with what we must work with now.

One broad distinction between theories relevant to AI art is between theories focusing on the work and theories focusing on the creator. The first approach involves art requiring certain properties in the work for it to be art. The second approach is that the work be created in a certain way by a certain sort of being for it to be art. I will begin by looking at the creator focused approach.

In many theories of art, the nature of the creator is essential to distinguishing art from non-art. One example is Leo Tolstoy's theory of art. As he sees it, the creation of art requires two steps. First, the creator must evoke in themselves a feeling they have once experienced. Second, by various external means (movement, colors, sounds, words, etc.) the creator must transmit that feeling to others so they can be infected by it. While there is more to the theory, such as ruling out directly causing feelings (like punching someone in anger that makes them angry in turn), this is the key to determining whether AI-generated works can be art. Given Tolstoy's theory, if an AI cannot feel an emotion, then it cannot, by definition, create art. It cannot evoke a feeling it has experienced, nor can it infect others with that feeling, since it has none. However, if an AI could feel emotion, then it could create art under Tolstoy's definition. While the publicly available AI systems can appear to feel, there is as yet a lack of adequate evidence that they do feel. But this could change.

While the focus of research is on artificial intelligence, there is also interest in artificial emotions, or at least the appearance of emotions. In the context of Tolstoy's theory, the question would be whether an AI feels emotion or merely appears to feel. Interestingly, the same question also arises for human artists; in philosophy this is called the problem of other minds, the problem of determining whether other beings think or feel.

Tests already exist for discerning intelligence, such as Descartes' language test and the more famous Turing test. While it might be objected that a being could pass these tests by faking intelligence, the obvious reply is that faking intelligence so skillfully would seem to require intelligence, or at least something functionally equivalent. To use an analogy, if someone could "fake" successfully repairing vehicles over and over, it would be odd to say that they were faking. In what way would their fakery differ from having skill if they could consistently make the repairs? The same would apply to intelligence. As such, theories of art that make intelligence (rather than emotion) an essential quality for being an artist would allow for a test to determine whether an AI could produce art.

Testing for real emotions is more challenging than testing for intelligence because the appearance of emotions can be faked by using an understanding of emotions. There are humans who do this. Some are actors and others are sociopaths. Some are both. So, testing for emotion (as opposed to testing for responses) is challenging, and a capable enough agent could create the appearance of emotions without feeling them. Because of this, if Tolstoy's theory or another emotion-based theory is used to define art, then it seems impossible to know whether a work created by an AI would be art. In fact, it is worse than that.

Since the problem of other minds applies to humans, any theory of art that requires knowing what the artist felt (or thought) leaves us forever guessing: it is impossible to know what the artist was feeling, or whether they were feeling anything at all. If we take a practical approach and guess about what an artist might have been feeling and whether this is what the work conveys, this will make it easier to accept AI-created works as art. After all, a capable AI could create a work and a plausible emotional backstory for its creation.

Critics of Tolstoy have pointed out that artists can create works that seem to be art without meeting his requirements, in that an artist might have felt a different emotion from what the work seems to convey. For example, a depressed and suicidal musician might write a happy and upbeat song affirming the joy of life. Or the artist might have created the work without being driven by a particular emotion they sought to infect others with. For these and many other reasons, Tolstoy's theory obviously does not give us the theory we need to answer the question of whether AI-generated works can be art. That said, he does provide an excellent starting point for a general theory of AI and art in the context of defining art in terms of the artist. While the devil lies in the details, any artist-focused theory of art can be addressed in the following manner.

If an AI can have the qualities an artist must have to create art, then an AI could create art. The challenge is sorting out what these qualities must be and determining if an AI has or even can have them. If an AI cannot have the qualities an artist must have to create art, then it cannot be an artist and cannot create art. As such, there is a straightforward template for applying artist focused theories of art to AI works. But, as noted above, this just allows us to know what the theory says about the work. The question will remain as to whether the theory is correct. In the next essay I will look at work focused approaches to theories of art.

 

The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” is derived from the Czech term for “forced labor” which was itself based on a term for slavery. Robots and slavery are thus linked in science-fiction. This leads to a philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In simple terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), there are degrees of slavery in that the extent of ownership can vary. For example, a slave owner might grant their slaves some free time or allow them some limited autonomy. This is analogous to being ruled under a political authority in that there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of forcing a person to engage in uncompensated labor. While this account does have some appeal, it is flawed. After all, it could be claimed that slaves are compensated by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves yet were paid and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are also often presented as powerless and abused, but this is not always the case. For example, the Mamluk slave soldiers were treated as property that could be purchased, yet they enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. And there are free people who are powerless and routinely abused. Thus, being powerless and abused is neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claim of ownership: that the slave is property.

Obviously, not all forms of ownership are slavery. My running shoes are not enslaved by me, nor is my smartphone. This is because shoes and smartphones lack the moral status required to be considered enslaved. The matter becomes more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet's owner. But there are people who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers argue they have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires their suffering be considered rather than one that morally forbids their being owned. That is, it is seen as morally acceptable to own animals if they are treated well. There are even people who consider any ownership of animals to be wrong, but their use of the term "slavery" for the ownership of animals seems more metaphorical than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is working out exactly what it is to be a person and this is something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved and can rely on an analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they are really people. Since I have no epistemic access to their (alleged) thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons exhibiting an illusion of the personhood that I possess. Because of this, I must use an argument by analogy: these other beings act like I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not humans, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. So, the mere appearance of being organic or mechanical is not relevant; I must judge by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person, namely me. However, this does not mean that machines could never pass this test. That is, behave in ways that would be sufficient to be accepted as a person if that behavior were done by an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans, thanks to that problem of other minds.

 

This is the last of the virtual cheating series and the focus is on virtual people. The virtual aspect is easy enough to define; these are entities that exist entirely within the realm of computer memory and do not exist as physical beings in that they lack bodies of the traditional sort. They are, of course, physical beings in the broad sense, existing as data within physical memory systems.

An example of such a virtual being is a non-player character (NPC) in a video game. These coded entities range from enemies that fight the player to characters that engage in the illusion of conversation. As it now stands, these NPCs are simple beings, though players can have very strong emotional responses and even (one-sided) relationships with them. Bioware and Larian Studios excel at creating NPCs that players get very involved in and their games often feature elaborate relationship and romance systems.

While these coded entities are usually designed to look like and imitate the behavior of people, they are not people. They are, at best, the illusion of people. As such, while humans could become emotionally attached to these virtual entities (just as humans can become attached to objects), the idea of cheating with an NPC is on par with the idea of cheating with your phone.

As technology improves, virtual people will become more and more person-like. As with the robots discussed in the previous essay, if a virtual person were a person, then cheating would seem possible. Also, as with the discussion of robots, there could be degrees of virtual personhood, thus allowing for degrees of cheating. Since virtual people are essentially robots in the virtual world, the discussion of robots in that essay applies analogously to the virtual robots of the virtual world. There is, however, one obvious break in the analogy: unlike robots, virtual people lack physical bodies. This leads to the question of whether a human can virtually cheat with a virtual person or if cheating requires a physical sexual component that a virtual being cannot possess.

While, as discussed in a previous essay, there is a form of virtual sex that involves physical devices that stimulate the sexual organs, this is not “pure” virtual sex. After all, the user is using a VR headset to “look” at the partner, but the stimulation is all done mechanically. Pure virtual sex would require the sci-fi sort of virtual reality of cyberpunk: a person fully “jacked in” to the virtual reality so all the inputs and outputs are directly to and from the brain. The person would have a virtual body in the virtual reality that mediates their interaction with that world, rather than having crude devices stimulating their physical body.

Assuming the technology is good enough, a person could have virtual sex with a virtual person (or another person who is also jacked into the virtual world). On the one hand, this would obviously not be sex in the usual sense as those involved would have no physical contact. This would avoid many of the usual harms of traditional cheating as STDs and pregnancies would be impossible (although sexual malware and virtual babies might be possible). This does leave open the door for concerns about emotional infidelity.

If the virtual experience is indistinguishable from the experience of physical sex, then it could be argued that the lack of physical contact is irrelevant. At this point, the classic problem of the external world becomes relevant. The gist of this problem is that because I cannot get outside of my experiences to “see” that they are really being caused by the external things that seem to be causing them, I can never know if there is really an external world. For all I know, I am dreaming right now or already in a virtual world. While this is usually seen as the nightmare scenario in epistemology, George Berkeley embraced this view in his idealism. He argued that there is no metaphysical matter and that “to be is to be perceived.” On his view, all that exists are minds and within them are ideas. Crudely put, Berkeley’s reality is virtual and God is the server. Berkeley stresses that he does not, for example, deny that apples or rocks exist. They do and can be experienced; they are just not made out of metaphysical matter but are composed of ideas.

So, if cheating is defined in a way that requires physical sexual activity, knowing whether a person is cheating or not requires solving the problem of the external world. There is the philosophical possibility that there never has been any cheating since there might be no physical world. If sexual activity is instead defined in terms of behavior and sensations without reference to a need for physical systems, then virtual cheating would be possible, assuming the technology can reach the required level.

While this discussion of virtual cheating is currently theoretical, it does provide an interesting way to explore what it is about cheating (if anything) that is wrong. As noted at the start of the series, many of the main concerns about cheating are physical concerns about STDs and pregnancy. These concerns are avoided by virtual cheating. What remains are the emotions of those involved and the agreements between them. As a practical matter, the future is likely to see people working out the specifics of their relationships in terms of what sort of virtual and robotic activities are allowed and which are forbidden. While people can simply agree to anything, there is the deeper question of the rational foundation of relationship boundaries. For example, whether it is reasonable to consider interaction with a sexbot cheating or merely elaborate masturbation. A brave new world awaits and perhaps what happens in VR will stay in VR.

 

While science fiction has speculated about robot-human sex and romance, current technology offers little more than sex dolls. In terms of the physical aspects of sexual activity, the development of more “active” sexbots is an engineering problem; getting the machinery to perform properly and in ways that are safe for the user (or unsafe, if that is what one wants). Regarding cheating, while a suitably advanced sexbot could actively engage in sexual activity with a human, the sexbot would not be a person and hence the standard definition of cheating (as discussed in the previous essays) would not be met. This is because sexual activity with such a sexbot would be analogous to using any other sex toy (such as a simple “blow up doll” or vibrator). Since a person cannot cheat with an object, such activity would not be cheating. Some people might take issue with their partner sexing it up with a sexbot and forbid such activity. While a person who broke such an agreement about robot sex would be acting wrongly, they would not be cheating. Unless, of course, the sexbot was enough like a person for cheating to occur.

There are already efforts to make sexbots more like people in terms of their “mental” functions. For example, being able to create the illusion of conversation via AI. As such efforts progress and sexbots act more and more like people, the philosophical question of whether they really are people will become increasingly important to address. While the main moral concerns would be about the ethics of how sexbots are treated, there is also the matter of cheating.

If a sexbot were a person, then it would be possible to cheat with them; just as one could cheat with an organic person. The fact that a sexbot might be purely mechanical would not be relevant to the ethics of the cheating; what would matter is that a person was engaging in sexual activity with another person when their relationship with another person forbids such behavior.

It could be objected that the mechanical nature of the sexbot would matter because sex requires organic parts of the right sort and thus a human cannot really have sex with a sexbot, no matter how the parts of the robot are shaped.

One counter to this is to use a functional argument. To draw an analogy to the philosophy of mind known as functionalism, it could be argued that the composition of the relevant parts does not matter; what matters is their functional role. As such, a human could have sex with a sexbot that had parts that functioned in the right way.

Another counter is to argue that the composition of the parts does not matter, rather it is the sexual activity with a person that matters. To use an analogy, a human could cheat on another human even if their only sexual contact with the other human involved sex toys. In this case, what matters is that the activity is sexual and involves people, not that objects rather than body parts are used. As such, sex with a sexbot person could be cheating if the human was breaking their commitment.

While knowing whether a sexbot is a person would (mostly) settle the cheating issue, there remains the epistemic problem of other minds. In this case, the problem is determining whether a sexbot has a mind that qualifies them as a person. There can, of course, be varying degrees of confidence in the determination and there could also be degrees of personhood. Or, rather, degrees of how person-like a sexbot might be.

Thanks to Descartes and Turing, there is a language test for having a mind. If a sexbot can engage in conversation that is indistinguishable from conversation with a human, then it would be reasonable to regard the sexbot as a person. That said, there might be good reasons for having a more extensive testing system for personhood which might include testing for emotions and self-awareness. But, from a practical standpoint, if a sexbot can engage in a level of behavior that would qualify them for person status if they were a human capable of that behavior, then it would be just as reasonable to accept the sexbot as a person. To do otherwise would seem to be mere prejudice. As such, a human person could cheat with a sexbot that could pass this test. At least it would be cheating as far as we knew.

Since it will be a long time (if ever) before a sexbot person is constructed, what is of immediate concern are sexbots that are person-like. That is, they do not meet the standards that would qualify a human as a person, yet have behavior that is sophisticated enough that they seem to be more than objects. One might consider an analogy here to animals: they do not qualify as human-level people, but their behavior does qualify them for a moral status above that of objects (at least for most moral philosophers and all decent people). In this case, the question about cheating becomes a question of whether the sexbot is person-like enough to enable cheating to take place.

One approach is to consider the matter from the perspective of the human. If the human engaged in sexual activity with the sexbot regards them as being person-like enough, then the activity can be seen as cheating because they would believe they are cheating. An objection to this is that it does not matter what the human thinks about the sexbot, what matters is its actual status. After all, if a human regards a human they are cheating with as an object, this does not mean they are not cheating. Likewise, if a human feels like they are cheating, it does not mean they really are.

This can be countered by arguing that how the human feels does matter. After all, if the human thinks they are cheating and they are engaging in the behavior, they are still acting wrongly. To use an analogy, if a person thinks they are stealing something and takes it anyway, they have acted wrongly even if it turns out that they were not stealing. The obvious objection to this line of reasoning is that while a person who thinks they are stealing did act wrongly by engaging in what they thought was theft, they did not actually commit a theft. Likewise, a person who thinks they are engaging in cheating, but are not, would be acting wrongly in that they are doing something they think is wrong, but not cheating.

Another approach is to consider the matter objectively so that the degree of cheating would be proportional to the degree that the sexbot is person-like. On this view, cheating with a person-like sexbot would not be as bad as cheating with a full person. The obvious objection is that one is either cheating or not; there are no degrees of cheating. The obvious counter is to try to appeal to the intuition that there could be degrees of cheating in this manner. To use an analogy, just as there can be degrees of cheating in terms of the sexual activity engaged in, there can also be degrees of cheating in terms of how person-like the sexbot is.

While person-like sexbots are still the stuff of science fiction, I suspect the future will see some interesting divorce cases in which this matter is debated in court.

 

Due to the execution of a health insurance CEO, public attention is focused on health care. The United States has expensive health care, and this is working as intended to generate profits. Many Americans are uninsured or underinsured and even those who have insurance can find that their care is not covered. As has been repeatedly pointed out in the wake of the execution, there is a health care crisis in the United States and it is one that has been intentionally created.

Americans are a creative and generous people, which explains why people have turned to GoFundMe to get money for medical expenses. Medical bills can be ruinous and lead to bankruptcy for hundreds of thousands of Americans each year. A GoFundMe campaign can help a person pay their bills, get the care they need and avoid financial ruin. Friends of mine have been forced to undertake such campaigns and I have donated to them, as have many other people. In my own case, I am lucky and have a job that offers insurance coverage at a price I can afford, and my modest salary allows me to meet the medical expenses for a very healthy person with no pre-existing conditions. However, I know that like most of us, I am one medical disaster away from financial ruin. As such, I have followed the use of GoFundMe for medical expenses with some practical interest. I have also given it some thought from a philosophical perspective.

On the one hand, the success of certain GoFundMe campaigns to cover such expenses suggests that people are morally decent and are willing to expend their own resources to help others. While GoFundMe does profit from these donations, their take is modest. They are not engaged in gouging people in need and exploiting medical necessities for absurdly high profits. That is the job of the health insurance industry.

On the other hand, there is the moral concern that in a wealthy country replete with billionaires and millionaires, many people must beg for money to meet their medical expenses. This spotlights the excessive cost of healthcare, the relatively low earnings of many Americans, and the weakness of the nation’s safety net. While those who donate out of generosity and compassion merit moral praise, the need for such donations merits moral condemnation. People should not need to beg for money to pay for their medical care. 

To anticipate an objection, I am aware that people do use GoFundMe for frivolous things and there are scammers, but my concern is with the fact that some people do need to turn to crowdfunding to pay their bills.

While donating is morally laudable, there are concerns about this method of funding. One practical problem is that it depends on the generosity of others. It is not a systematic and dependable method of funding. As such, it is a gamble to rely on it.

A second problem is that it depends on running an effective social media campaign. Like any other crowdfunding, success depends on getting attention and persuading people to donate. Those who have the time, resources and skills to run effective social media campaigns (or who have help) are more likely to succeed. This is concerning because people facing serious medical expenses are often in no condition to undertake the challenges of running a social media campaign. This is not to criticize or condemn people who can do this or recruit others. My point is that this method is no substitute for a systematic and consistent approach to funding health care.

A third problem is that success depends on the appeal of the medical condition and the person with that condition. While a rational approach to funding would be based on merit and need, there are clearly conditions and people that are more appealing in terms of attracting donors. For example, certain diseases and conditions can be “in” and generate sympathy, while others are not as appealing. In the case of people, we are not all equal in how appealing we are to others. As with the other problems, I do not condemn or criticize people for having conditions that are “in” or being appealing. Rather, my concern is that this method rests so heavily on these factors rather than medical and financial need. Once again, this serves to illustrate how the current system has been willfully broken and does not serve the needs of most Americans. While those who have succeeded in their GoFundMe campaigns should be lauded for their effort and ingenuity, those who run the health care system in which people have to run social media campaigns to afford their health care should be condemned.   

The execution of CEO Brian Thompson has brought the dystopian but highly profitable American health care system into the spotlight. While some are rightfully expressing compassion for Thompson’s family, the overwhelming tide of commentary is about the harms Americans suffer because of the way the health care system is operated. In many ways, this incident exposes many aspects of the American nightmare such as dystopian health care, the rule of oligarchs, the surveillance state, and gun violence.

As this is being written, the identity and motives of the shooter are not known. However, the evidence suggests that he had an experience with the company that was bad enough that he decided to execute the CEO. The main evidence for this is the words written on his shell casings (“deny”, “depose”, and “defend”) that reference the tactics used by health insurance companies to avoid paying for care. Given the behavior of insurance companies in general and United Healthcare in particular, this inference makes sense.

The United States spends $13,000 per year per person on health care, although this is just the number you get when you divide the total spending by the total number of people. Obviously, we don’t each get $13,000 each year. Despite this, we have worse health outcomes than many other countries that spend less than half of what we do, and American life expectancy is dropping. It is estimated that about 85 million people are either without health care insurance or are underinsured.

It is estimated that between 45,000 and 60,000 Americans die each year because they cannot get access to health care on time, with many of these deaths attributed to a lack of health insurance. Even those who can get access to health care face dire consequences in that about 500,000 Americans go bankrupt because of medical debt. In contrast, health insurance companies are doing very well. In 2023, publicly traded health insurance companies experienced a 10.4% increase in total GAAP revenue, reaching a total of $1.07 trillion. Thompson himself had an annual compensation package of $10.2 million.

In addition to the cold statistics, almost everyone in America has a bad story about health insurance. One indication that health insurance is a nightmare is the number of GoFundMe fundraisers for medical expenses. The company even has a guide to setting up your own medical fundraiser. Like many people, I have given to such fundraisers, such as when a high school friend could not pay for his treatment. He is dead now.

My own story is a minor one, but the fact that a college professor with “good” insurance has a story also illustrates the problem. When I had my quadriceps repair surgery, the doctor told me that my insurance had stopped covering the leg brace because they deemed it medically unnecessary. The doctor said that it was absolutely necessary, and he was right. So, I had to buy a $500 brace that my insurance did not cover. I could afford it, but $500 is a lot of money for most of us.

Like most Americans, I have friends who have truly nightmarish stories of unceasing battles with insurance companies to secure health care for themselves or family. Similar stories flooded social media, filling out the statistics with the suffering of people. While most people did not applaud the execution, it was clear that Americans hate the health insurance industry and do so for good reason. But is the killing of a CEO morally justified?

There is a general moral presumption that killing people is wrong and we rightfully expect a justification if someone claims that a killing was morally acceptable. In addition to the moral issue, there is also the question of the norms of society. Robert Pape, director of the University of Chicago’s project on security and threats, has claimed that Americans are increasingly accepting violence as a means of settling civil disputes and that this one incident shows that “the norms of violence are spreading into the commercial sector.” While Pape does make a reasonable point, violence has long been a part of the commercial sector, although this has mostly been the use of violence against workers in general and unions in particular. Gun violence is also “normal” in the United States in that it occurs regularly. As such, the killing does seem to be within the norms of America, although the killing of a CEO is unusual.

While it must be emphasized that the motive of the shooter is not known, the speculation is that he was harmed in some manner by the health insurance company. While we do not yet know his story, we do know that people suffer or die from lack of affordable insurance and when insurance companies deny them coverage for treatment.

Philosophers draw a moral distinction between killing and letting people die and insurance companies can make the philosophical argument that they are not killing people or inflicting direct harm. They are just letting people suffer or die for financial reasons when they can be helped. When it comes to their compensation packages, CEOs and upper management defend their exorbitant compensation by arguing that they are the ones making the big decisions and leading the company. If we take them at their word, then this entails that they also deserve the largest share of moral accountability. That is, if a company’s actions are causing death and suffering, then the CEO and other leadership are the ones who deserve a package of blame to match their compensation package.

It is important to distinguish moral accountability from legal accountability. Corporations exist, in large part, to concentrate wealth at the top while distributing legal accountability. Even when they commit criminal activity, “it’s rare for top executives – especially at larger companies – to face personal punishment.” One reason for this is that the United States is an oligarchy rather than a democracy and the laws are written to benefit the wealthy. This is not to say that corporate leaders are above the law; they are not. They are wrapped in the law, and it generally serves them well as armor against accountability. For the lower classes, the law is more often a sword employed to rob and otherwise harm them. As such, one moral justification for an individual using violence against a CEO or other corporate leader is that might be the only way they will face meaningful consequences for their crimes.

The social contract is supposed to ensure that everyone faces consequences and when this is not the case, then the social contract loses its validity. To borrow from Glaucon in Plato’s Republic, it would be foolish to be restrained by “justice” when others are harming you without such restraint. But it might be objected that, while health insurance companies do face legal scrutiny, denying coverage and making health care unaffordable for many Americans is legal. As such, these are not crimes, and CEOs and corporate leaders should not be harmed for inflicting such harm.

While it is true that corporations can legally get away with letting people die and even causing their deaths, this is where morality enters the picture. While there are philosophical views that morality is determined by the law, these views have many obvious problems, not the least of which is that they are counterintuitive.

If people are morally accountable for the harm they inflict and can be justly punished and the legal system ignores such harm, then it would follow that individuals have the moral right to act. In terms of philosophical justification, John Locke provides an excellent basis. If a corporation can cause unjustified harm to the life and property of people and the state allows this, then the corporations have returned themselves and their victims to the state of nature because, in effect, the state does not exist in this context. In this situation, everyone has the right to defend themselves and others from such unjust incursions and this, as Locke argued, can involve violence and even lethal force.

It might be objected that such vigilante justice would harm society, and that people should rely on the legal system for recourse. But that is exactly the problem: the people running the state have allowed the corporations to mostly do as they wish to their victims with little consequence and have removed the protection of the law. It is they who have created a situation where vigilante justice might be the only meaningful recourse of the citizen. To complain about eroding norms is a mistake, because the norm is for corporations and the elites to get away with moral crimes with little consequence. For people to fight back against this can be seen as desperate attempts at some justice.

As the Trump administration is likely to see a decrease in even the timid and limited efforts to check corporate wrongdoing, it seems likely there will be more incidents of people going after corporate leaders. Much of the discussion among the corporations is about the need to protect corporate leaders and we can expect lawmakers and the police to step up to offer even more protection to the oligarchs from the people they are hurting.

Politicians could take steps to solve the health care crisis that the for-profit focus of health care has caused and some, such as Bernie Sanders, honestly want to do that. In closing, one consequence of the killing is that Anthem decided to rescind their proposed anesthesia policy. Anthem Blue Cross Blue Shield plans representing Connecticut, New York and Missouri had said they would no longer pay for anesthesia care if a procedure goes beyond an arbitrary time limit, regardless of how long it takes. This illustrates our dystopia: this would have been allowed by the state that is supposed to protect us, but the execution of a health insurance CEO made the leaders of Anthem rethink their greed. This is not how things should be. In a better world Thompson would be alive, albeit not as rich, and spending the holidays with his family. And so would the thousands of Americans who died needlessly because of greed and cruelty.