As a philosopher, my discussions of art and AI tend to be on meta-aesthetic topics, such as trying to define “art” or arguing about whether an AI can create true art. But there are pragmatic concerns about AI taking jobs from artists and changing the field of art.  

When trying to sort out whether AI-created images are art, one problem is that there is no definition of “art” in terms of necessary and sufficient conditions that allows for a decisive answer. At this time, the question can only be answered within the context of whatever theory of art you might favor. Being a work of art is like being a sin in that whether something is a sin is a matter of whether it is a sin in this or that religion. This is distinct from the question of whether it truly is a sin; answering that would require determining which religion is right (and it might be none, so there might be no sin). So, no one can answer whether AI art is art until we know which theory of art, if any, has it right. That said, it is possible to muddle about with what we must work with now.

One broad distinction between theories relevant to AI art is between theories focusing on the work and theories focusing on the creator. The first approach holds that a work must have certain properties to be art. The second holds that a work must be created in a certain way by a certain sort of being to be art. I will begin by looking at the creator-focused approach.

In many theories of art, the nature of the creator is essential to distinguishing art from non-art. One example is Leo Tolstoy’s theory of art. As he sees it, the creation of art requires two steps. First, the creator must evoke in themselves a feeling they have once experienced. Second, by various external means (movement, colors, sounds, words, etc.) the creator must transmit that feeling to others so they can be infected by it. While there is more to the theory, such as ruling out directly causing feelings (like punching someone in anger, which makes them angry in turn), this is the key to determining whether AI-generated works can be art. Given Tolstoy’s theory, if an AI cannot feel an emotion, then it cannot, by definition, create art. It cannot evoke a feeling it has experienced, nor can it infect others with that feeling, since it has none. However, if an AI could feel emotion, then it could create art under Tolstoy’s definition. While the publicly available AI systems can appear to feel, there is as yet no adequate evidence that they do feel. But this could change.

While the focus of research is on artificial intelligence, there is also interest in artificial emotions, or at least the appearance of emotions. In the context of Tolstoy’s theory, the question would be whether an AI feels emotion or merely appears to feel. Interestingly, the same question also arises for human artists, and in philosophy it is called the problem of other minds: the problem of determining whether other beings think or feel.

Tests already exist for discerning intelligence, such as Descartes’ language test and the more famous Turing Test. While it might be objected that a being could pass these tests by faking intelligence, the obvious reply is that faking intelligence so skillfully would seem to require intelligence, or at least something functionally equivalent. To use an analogy, if someone could “fake” successfully repairing vehicles over and over, it would be odd to say that they were faking. In what way would their fakery differ from having skill if they could consistently make the repairs? The same would apply to intelligence. As such, theories of art that make intelligence (rather than emotion) the essential quality of an artist would allow for a test to determine whether an AI could produce art.

Testing for real emotions is more challenging than testing for intelligence because the appearance of emotions can be faked by using an understanding of emotions. There are humans who do this. Some are actors and others are sociopaths. Some are both. So, testing for emotion (as opposed to testing for responses) is challenging, and a capable enough agent could create the appearance of emotions without feeling them. Because of this, if Tolstoy’s theory or another emotion-based theory is used to define art, then it seems impossible to know whether a work created by an AI would be art. In fact, it is worse than that.

Since the problem of other minds applies to humans, any theory of art that requires knowing what the artist felt (or thought) leaves us forever guessing: we cannot know what the artist was feeling, or whether they were feeling anything at all. If we take a practical approach and guess about what an artist might have been feeling and whether this is what the work conveys, this will make it easier to accept AI-created works as art. After all, a capable AI could create a work and a plausible emotional backstory for its creation.

Critics of Tolstoy have pointed out that artists can create works that seem to be art without meeting his requirements, in that an artist might have felt a different emotion from what the work seems to convey. For example, a depressed and suicidal musician might write a happy and upbeat song affirming the joy of life. Or the artist might have created the work without being driven by a particular emotion they sought to infect others with. For these and many other reasons, Tolstoy’s theory does not give us what we need to answer the question of whether AI-generated works can be art. That said, he does provide an excellent starting point for thinking about AI and art within theories that define art in terms of the artist. While the devil lies in the details, any artist-focused theory of art can be addressed in the following manner.

If an AI can have the qualities an artist must have to create art, then an AI could create art. The challenge is sorting out what these qualities must be and determining whether an AI has, or even can have, them. If an AI cannot have the qualities an artist must have to create art, then it cannot be an artist and cannot create art. As such, there is a straightforward template for applying artist-focused theories of art to AI works. But, as noted above, this just allows us to know what the theory says about the work. The question will remain as to whether the theory is correct. In the next essay I will look at work-focused approaches to theories of art.

 

The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” is derived from the Czech term for “forced labor” which was itself based on a term for slavery. Robots and slavery are thus linked in science-fiction. This leads to a philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In simple terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), there are degrees of slavery in that the extent of ownership can vary. For example, a slave owner might grant their slaves some free time or allow them some limited autonomy. This is analogous to being ruled under a political authority in that there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of forcing a person to engage in uncompensated labor. While this account does have some appeal, it is flawed. After all, it could be claimed that slaves are compensated by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves yet were paid and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are also often presented as powerless and abused, but this is not always the case. For example, the slave-soldier Mamluks were treated as property that could be purchased, yet enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. There are free people who are powerless and routinely abused. Thus, being powerless and abused is neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claiming of ownership; that the slave is property.

Obviously, not all forms of ownership are slavery. My running shoes are not enslaved by me, nor is my smartphone. This is because shoes and smartphones lack the moral status required to be considered enslaved. The matter becomes more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet’s owner. But there are people who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers argue they have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires their suffering be considered rather than one that morally forbids their being owned. That is, it is seen as morally acceptable to own animals if they are treated well. There are even people who consider any ownership of animals to be wrong but their use of the term “slavery” for the ownership of animals seems more metaphorical than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is working out exactly what it is to be a person and this is something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved and can rely on an analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they are really people. Since I have no epistemic access to their (alleged) thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons exhibiting an illusion of the personhood that I possess. Because of this, I must use an argument by analogy: these other beings act like I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not human, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. So, the mere appearance of being organic or mechanical is not relevant; I must judge by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person, namely me. However, this does not mean that machines could never pass this test; that is, they might come to behave in ways that would be sufficient for being accepted as a person if that behavior were exhibited by an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans, thanks to that problem of other minds.

 

This is the last of the virtual cheating series and the focus is on virtual people. The virtual aspect is easy enough to define; these are entities that exist entirely within the realm of computer memory and do not exist as physical beings in that they lack bodies of the traditional sort. They are, of course, physical beings in the broad sense, existing as data within physical memory systems.

An example of such a virtual being is a non-player character (NPC) in a video game. These coded entities range from enemies that fight the player to characters that engage in the illusion of conversation. As it now stands, these NPCs are simple beings, though players can have very strong emotional responses and even (one-sided) relationships with them. Bioware and Larian Studios excel at creating NPCs that players get very involved in and their games often feature elaborate relationship and romance systems.

While these coded entities are usually designed to look like and imitate the behavior of people, they are not people. They are, at best, the illusion of people. As such, while humans could become emotionally attached to these virtual entities (just as humans can become attached to objects), the idea of cheating with an NPC is on par with the idea of cheating with your phone.

As technology improves, virtual people will become more and more person-like. As with the robots discussed in the previous essay, if a virtual person were a person, then cheating would seem possible. Also, as with the discussion of robots, there could be degrees of virtual personhood, thus allowing for degrees of cheating. Since virtual people are essentially robots in the virtual world, the discussion of robots in that essay applies analogously to the virtual robots of the virtual world. There is, however, one obvious break in the analogy: unlike robots, virtual people lack physical bodies. This leads to the question of whether a human can virtually cheat with a virtual person or if cheating requires a physical sexual component that a virtual being cannot possess.

While, as discussed in a previous essay, there is a form of virtual sex that involves physical devices that stimulate the sexual organs, this is not “pure” virtual sex. After all, the user is using a VR headset to “look” at the partner, but the stimulation is all done mechanically. Pure virtual sex would require the sci-fi sort of virtual reality of cyberpunk: a person fully “jacked in” to the virtual reality so all the inputs and outputs are directly to and from the brain. The person would have a virtual body in the virtual reality that mediates their interaction with that world, rather than having crude devices stimulating their physical body.

Assuming the technology is good enough, a person could have virtual sex with a virtual person (or another person who is also jacked into the virtual world). This would obviously not be sex in the usual sense, as those involved would have no physical contact. It would also avoid many of the usual harms of traditional cheating, since STDs and pregnancies would be impossible (although sexual malware and virtual babies might be possible). This does leave open the door for concerns about emotional infidelity.

If the virtual experience is indistinguishable from the experience of physical sex, then it could be argued that the lack of physical contact is irrelevant. At this point, the classic problem of the external world becomes relevant. The gist of this problem is that because I cannot get outside of my experiences to “see” that they are really being caused by the external things that seem to be causing them, I can never know if there is really an external world. For all I know, I am dreaming right now or already in a virtual world. While this is usually seen as the nightmare scenario in epistemology, George Berkeley embraced this view in his idealism. He argued that there is no metaphysical matter and that “to be is to be perceived.” On his view, all that exists are minds and within them are ideas. Crudely put, Berkeley’s reality is virtual and God is the server. Berkeley stresses that he does not, for example, deny that apples or rocks exist. They do and can be experienced; they are just not made out of metaphysical matter but are composed of ideas.

So, if cheating is defined in a way that requires physical sexual activity, knowing whether a person is cheating or not requires solving the problem of the external world. There is the philosophical possibility that there never has been any cheating since there might be no physical world. If sexual activity is instead defined in terms of behavior and sensations without references to a need for physical systems, then virtual cheating would be possible, assuming the technology can reach the required level.  

While this discussion of virtual cheating is currently theoretical, it does provide an interesting way to explore what it is about cheating (if anything) that is wrong. As noted at the start of the series, many of the main concerns about cheating are physical concerns about STDs and pregnancy. These concerns are avoided by virtual cheating. What remains are the emotions of those involved and the agreements between them. As a practical matter, the future is likely to see people working out the specifics of their relationships in terms of what sort of virtual and robotic activities are allowed and which are forbidden. While people can simply agree to anything, there is the deeper question of the rational foundation of relationship boundaries. For example, whether it is reasonable to consider interaction with a sexbot cheating or merely elaborate masturbation. A brave new world awaits, and perhaps what happens in VR will stay in VR.

 

While science fiction has speculated about robot-human sex and romance, current technology offers little more than sex dolls. In terms of the physical aspects of sexual activity, the development of more “active” sexbots is an engineering problem; getting the machinery to perform properly and in ways that are safe for the user (or unsafe, if that is what one wants). Regarding cheating, while a suitably advanced sexbot could actively engage in sexual activity with a human, the sexbot would not be a person and hence the standard definition of cheating (as discussed in the previous essays) would not be met. This is because sexual activity with such a sexbot would be analogous to using any other sex toy (such as a simple “blow up doll” or vibrator). Since a person cannot cheat with an object, such activity would not be cheating. Some people might take issue with their partner sexing it up with a sexbot and forbid such activity. While a person who broke such an agreement about robot sex would be acting wrongly, they would not be cheating. Unless, of course, the sexbot was enough like a person for cheating to occur.

There are already efforts to make sexbots more like people in terms of their “mental” functions. For example, being able to create the illusion of conversation via AI. As such efforts progress and sexbots act more and more like people, the philosophical question of whether they really are people will become increasingly important to address. While the main moral concerns would be about the ethics of how sexbots are treated, there is also the matter of cheating.

If a sexbot were a person, then it would be possible to cheat with them, just as one could cheat with an organic person. The fact that a sexbot might be purely mechanical would not be relevant to the ethics of the cheating; what would matter is that a person was engaging in sexual activity with another person when an existing relationship forbids such behavior.

It could be objected that the mechanical nature of the sexbot would matter because sex requires organic parts of the right sort and thus a human cannot really have sex with a sexbot, no matter how the parts of the robot are shaped.

One counter to this is to use a functional argument. To draw an analogy to the philosophy of mind known as functionalism, it could be argued that the composition of the relevant parts does not matter; what matters is their functional role. As such, a human could have sex with a sexbot that had parts that functioned in the right way.

Another counter is to argue that the composition of the parts does not matter, rather it is the sexual activity with a person that matters. To use an analogy, a human could cheat on another human even if their only sexual contact with the other human involved sex toys. In this case, what matters is that the activity is sexual and involves people, not that objects rather than body parts are used. As such, sex with a sexbot person could be cheating if the human was breaking their commitment.

While knowing whether a sexbot is a person would (mostly) settle the cheating issue, there remains the epistemic problem of other minds. In this case, the problem is determining whether a sexbot has a mind that qualifies them as a person. There can, of course, be varying degrees of confidence in the determination and there could also be degrees of personness. Or, rather, degrees of how person-like a sexbot might be.

Thanks to Descartes and Turing, there is a language test for having a mind. If a sexbot can engage in conversation that is indistinguishable from conversation with a human, then it would be reasonable to regard the sexbot as a person. That said, there might be good reasons for having a more extensive testing system for personhood which might include testing for emotions and self-awareness. But, from a practical standpoint, if a sexbot can engage in a level of behavior that would qualify them for person status if they were a human capable of that behavior, then it would be just as reasonable to accept the sexbot as a person. To do otherwise would seem to be mere prejudice. As such, a human person could cheat with a sexbot that could pass this test. At least it would be cheating as far as we knew.

Since it will be a long time (if ever) before a sexbot person is constructed, what is of immediate concern are sexbots that are person-like. That is, they do not meet the standards that would qualify a human as a person, yet have behavior that is sophisticated enough that they seem to be more than objects. One might consider an analogy here to animals: they do not qualify as human-level people, but their behavior does qualify them for a moral status above that of objects (at least for most moral philosophers and all decent people). In this case, the question about cheating becomes a question of whether the sexbot is person-like enough to enable cheating to take place.

One approach is to consider the matter from the perspective of the human. If the human engaged in sexual activity with the sexbot regards them as being person-like enough, then the activity can be seen as cheating because they would believe they are cheating. An objection to this is that it does not matter what the human thinks about the sexbot; what matters is its actual status. After all, if a human regards a human they are cheating with as an object, this does not mean they are not cheating. Likewise, if a human feels like they are cheating, it does not mean they really are.

This can be countered by arguing that how the human feels does matter. After all, if the human thinks they are cheating and they are engaging in the behavior, they are still acting wrongly. To use an analogy, if a person thinks they are stealing something and takes it anyway, they  have acted wrongly even if it turns out that they were not stealing. The obvious objection to this line of reasoning is that while a person who thinks they are stealing did act wrongly by engaging in what they thought was theft, they did not actually commit a theft. Likewise, a person who thinks they are engaging in cheating, but are not, would be acting wrongly in that they are doing something they think is wrong, but not cheating.

Another approach is to consider the matter objectively so that the degree of cheating would be proportional to the degree that the sexbot is person-like. On this view, cheating with a person-like sexbot would not be as bad as cheating with a full person. The obvious objection is that one is either cheating or not; there are no degrees of cheating. The obvious counter is to try to appeal to the intuition that there could be degrees of cheating in this manner. To use an analogy, just as there can be degrees of cheating in terms of the sexual activity engaged in, there can also be degrees of cheating in terms of how person-like the sexbot is.

While person-like sexbots are still the stuff of science fiction, I suspect the future will see some interesting divorce cases in which this matter is debated in court.

 

Due to the execution of a health insurance CEO, public attention is focused on health care. The United States has expensive health care, and this is working as intended to generate profits. Many Americans are uninsured or underinsured and even those who have insurance can find that their care is not covered. As has been repeatedly pointed out in the wake of the execution, there is a health care crisis in the United States and it is one that has been intentionally created.

Americans are a creative and generous people, which explains why people have turned to GoFundMe to get money for medical expenses. Medical bills can be ruinous and lead to bankruptcy for hundreds of thousands of Americans each year. A GoFundMe campaign can help a person pay their bills, get the care they need and avoid financial ruin. Friends of mine have been forced to undertake such campaigns and I have donated to them, as have many other people. In my own case, I am lucky and have a job that offers insurance coverage at a price I can afford, and my modest salary allows me to meet the medical expenses of a very healthy person with no pre-existing conditions. However, I know that, like most of us, I am one medical disaster away from financial ruin. As such, I have followed the use of GoFundMe for medical expenses with some practical interest. I have also given it some thought from a philosophical perspective.

On the one hand, the success of certain GoFundMe campaigns to cover such expenses suggests that people are morally decent and are willing to expend their own resources to help others. While GoFundMe does profit from these donations, their take is modest. They are not engaged in gouging people in need and exploiting medical necessities for absurdly high profits. That is the job of the health insurance industry.

On the other hand, there is the moral concern that in a wealthy country replete with billionaires and millionaires, many people must beg for money to meet their medical expenses. This spotlights the excessive cost of healthcare, the relatively low earnings of many Americans, and the weakness of the nation’s safety net. While those who donate out of generosity and compassion merit moral praise, the need for such donations merits moral condemnation. People should not need to beg for money to pay for their medical care. 

To anticipate an objection, I am aware that people do use GoFundMe for frivolous things and there are scammers, but my concern is with the fact that some people do need to turn to crowdfunding to pay their bills.

While donating is morally laudable, there are concerns about this method of funding. One practical problem is that it depends on the generosity of others. It is not a systematic and dependable method of funding. As such, it is a gamble to rely on it.

A second problem is that it depends on running an effective social media campaign. Like any other crowdfunding, success depends on getting attention and persuading people to donate. Those who have the time, resources and skills to run effective social media campaigns (or who have help) are more likely to succeed. This is concerning because people facing serious medical expenses are often in no condition to undertake the challenges of running a social media campaign. This is not to criticize or condemn people who can do this or recruit others. My point is that this method is no substitute for a systematic and consistent approach to funding health care.

A third problem is that success depends on the appeal of the medical condition and the person with that condition. While a rational approach to funding would be based on merit and need, there are clearly conditions and people that are more appealing in terms of attracting donors. For example, certain diseases and conditions can be “in” and generate sympathy, while others are not as appealing. In the case of people, we are not all equal in how appealing we are to others. As with the other problems, I do not condemn or criticize people for having conditions that are “in” or being appealing. Rather, my concern is that this method rests so heavily on these factors rather than medical and financial need. Once again, this serves to illustrate how the current system has been willfully broken and does not serve the needs of most Americans. While those who have succeeded in their GoFundMe campaigns should be lauded for their effort and ingenuity, those who run the health care system in which people have to run social media campaigns to afford their health care should be condemned.   

The execution of CEO Brian Thompson has brought the dystopian but highly profitable American health care system into the spotlight. While some are rightfully expressing compassion for Thompson’s family, the overwhelming tide of commentary is about the harms Americans suffer because of the way the health care system is operated. In many ways, this incident exposes many aspects of the American nightmare such as dystopian health care, the rule of oligarchs, the surveillance state, and gun violence.

As this is being written, the identity and motives of the shooter are not known. However, the evidence suggests that he had an experience with the company that was bad enough that he decided to execute the CEO. The main evidence for this is the words written on his shell casings (“deny”, “depose”, and “defend”) that reference the tactics used by health insurance companies to avoid paying for care. Given the behavior of insurance companies in general and United Healthcare in particular, this inference makes sense.

The United States spends $13,000 per year per person on health care, although this is just the number you get when you divide the total spending by the total number of people. Obviously, we don’t each get $13,000 each year. Despite this, we have worse health outcomes than many other countries that spend less than half of what we do, and American life expectancy is dropping. It is estimated that about 85 million people are either without health care insurance or are underinsured.

It is estimated that between 45,000 and 60,000 Americans die each year because they cannot get access to health care in time, with many of these deaths attributed to a lack of health insurance. Even those who can get access to health care face dire consequences in that about 500,000 Americans go bankrupt because of medical debt. In contrast, health insurance companies are doing very well. In 2023, publicly traded health insurance companies experienced a 10.4% increase in total GAAP revenue, reaching a total of $1.07 trillion. Thompson himself had an annual compensation package of $10.2 million.

In addition to the cold statistics, almost everyone in America has a bad story about health insurance. One indication that health insurance is a nightmare is the number of GoFundMe fundraisers for medical expenses. The company even has a guide to setting up your own medical fundraiser. Like many people, I have given to such fundraisers, such as when a high school friend could not pay for his treatment. He is dead now.

My own story is a minor one, but the fact that a college professor with “good” insurance has a story also illustrates the problem. When I had my quadriceps repair surgery, the doctor told me that my insurance had stopped covering the leg brace because they deemed it medically unnecessary. The doctor said that it was absolutely necessary, and he was right. So, I had to buy a $500 brace that my insurance did not cover. I could afford it, but $500 is a lot of money for most of us.

Like most Americans, I have friends who have truly nightmarish stories of unceasing battles with insurance companies to secure health care for themselves or family. Similar stories flooded social media, filling out the statistics with the suffering of people. While most people did not applaud the execution, it was clear that Americans hate the health insurance industry and do so for good reason. But is the killing of a CEO morally justified?

There is a general moral presumption that killing people is wrong, and we rightfully expect a justification if someone claims that a killing was morally acceptable. In addition to the moral issue, there is also the question of the norms of society. Robert Pape, director of the University of Chicago’s project on security and threats, has claimed that Americans are increasingly accepting violence as a means of settling civil disputes and that this one incident shows that “the norms of violence are spreading into the commercial sector.” While Pape does make a reasonable point, violence has long been a part of the commercial sector, although this has mostly been the use of violence against workers in general and unions in particular. Gun violence is also “normal” in the United States in that it occurs regularly. As such, the killing does seem to be within the norms of America, although the killing of a CEO is unusual.

While it must be emphasized that the motive of the shooter is not known, the speculation is that he was harmed in some manner by the health insurance company. While we do not yet know his story, we do know that people suffer or die from lack of affordable insurance and when insurance companies deny them coverage for treatment.

Philosophers draw a moral distinction between killing and letting people die and insurance companies can make the philosophical argument that they are not killing people or inflicting direct harm. They are just letting people suffer or die for financial reasons when they can be helped. When it comes to their compensation packages, CEOs and upper management defend their exorbitant compensation by arguing that they are the ones making the big decisions and leading the company. If we take them at their word, then this entails that they also deserve the largest share of moral accountability. That is, if a company’s actions are causing death and suffering, then the CEO and other leadership are the ones who deserve a package of blame to match their compensation package.

It is important to distinguish moral accountability from legal accountability. Corporations exist, in large part, to concentrate wealth at the top while distributing legal accountability. Even when they commit criminal activity, “it’s rare for top executives – especially at larger companies – to face personal punishment.” One reason for this is that the United States is an oligarchy rather than a democracy and the laws are written to benefit the wealthy. This is not to say that corporate leaders are above the law; they are not. They are wrapped in the law, and it generally serves them well as armor against accountability. For the lower classes, the law is more often a sword employed to rob and otherwise harm them. As such, one moral justification for an individual using violence against a CEO or other corporate leader is that it might be the only way they will face meaningful consequences for their crimes.

The social contract is supposed to ensure that everyone faces consequences; when this is not the case, the social contract loses its validity. To borrow from Glaucon in Plato’s Republic, it would be foolish to be restrained by “justice” when others are harming you without such restraint. But, it might be objected, while health insurance companies do face legal scrutiny, denying coverage and making health care unaffordable for many Americans is legal. As such, these are not crimes, and CEOs and corporate leaders should not be harmed for inflicting such harm.

While it is true that corporations can legally get away with letting people die and even causing their deaths, this is where morality enters the picture. While there are philosophical views that morality is determined by the law, these views have many obvious problems, not the least of which is that they are counterintuitive.

If people are morally accountable for the harm they inflict and can be justly punished and the legal system ignores such harm, then it would follow that individuals have the moral right to act. In terms of philosophical justification, John Locke provides an excellent basis. If a corporation can cause unjustified harm to the life and property of people and the state allows this, then the corporations have returned themselves and their victims to the state of nature because, in effect, the state does not exist in this context. In this situation, everyone has the right to defend themselves and others from such unjust incursions and this, as Locke argued, can involve violence and even lethal force.

It might be objected that such vigilante justice would harm society, and that people should rely on the legal system for recourse. But that is exactly the problem: the people running the state have allowed the corporations to mostly do as they wish to their victims with little consequence and have removed the protection of the law. It is they who have created a situation where vigilante justice might be the only meaningful recourse of the citizen. To complain about eroding norms is a mistake, because the norm is for corporations and the elites to get away with moral crimes with little consequence. For people to fight back against this can be seen as desperate attempts at some justice.

As the Trump administration is likely to see a decrease in even the timid and limited efforts to check corporate wrongdoing, it seems likely there will be more incidents of people going after corporate leaders. Much of the discussion among the corporations is about the need to protect corporate leaders and we can expect lawmakers and the police to step up to offer even more protection to the oligarchs from the people they are hurting.

Politicians could take steps to solve the health care crisis that the for-profit focus of health care has caused, and some, such as Bernie Sanders, honestly want to do that. In closing, one consequence of the killing is that Anthem decided to rescind their proposed anesthesia policy. Anthem Blue Cross Blue Shield plans representing Connecticut, New York and Missouri had said they would no longer pay for anesthesia care if a procedure goes beyond an arbitrary time limit, regardless of how long it takes. This illustrates our dystopia: this would have been allowed by the state that is supposed to protect us, but the execution of a health insurance CEO made the leaders of Anthem rethink their greed. This is not how things should be. In a better world Thompson would be alive, albeit not as rich, and spending the holidays with his family. And so would the thousands of Americans who died needlessly because of greed and cruelty.

 

While pharmaceutical companies profited from flooding America with opioids, this inflicted terrible costs on others. Among the costs has been the terrible impact on health. One example of this is endocarditis.

Endocarditis is an infection of the inner lining of the heart, typically involving the valves, and it can lead to abscesses on a heart valve. While not limited to drug users, it can be caused by injecting opioids. As opioids were pushed onto the American people, it is no surprise that the number of drug users suffering from endocarditis increased significantly. The treatment of endocarditis involves a very expensive surgery and many drug users getting this surgery are on Medicaid. To make matters worse, people often return to opioid use after the surgery and this can lead to another expensive surgery, paid for by Medicaid. This raises moral concerns.

There is the moral issue of whether Medicaid should even exist. On the one hand, a compelling moral argument can be made that just as a nation provides military and police protection to citizens who cannot afford their own security forces or bodyguards, a nation should fund medical care for those who cannot afford it on their own. On the other hand, a moral argument can be made that a nation has no obligation to provide such support and that citizens should be left to fend for themselves regarding health care. Naturally enough, if the nation is under no obligation to provide Medicaid in general, then it is under no obligation to cover the cost of the surgery in question. On this view, there is no need to consider the matter further.

 However, if the state should provide Medicaid, then the issue of whether the state should pay for endocarditis surgery for opioid addicts arises. It is to this discussion that I now turn.

While it is harsh to argue against paying for an addict’s heart surgery, a moral case can be made in favor of this position. The most obvious way to do this is on utilitarian grounds. As noted above, surgery for endocarditis is very expensive and uses financial and medical resources that could be used elsewhere. If more good could be done by using these resources elsewhere, the utilitarian conclusion is that this is what should be done. This argument can be strengthened by including the fact that addicts often return to the behavior that resulted in endocarditis, thus creating the need to repeat the costly surgery. From a utilitarian perspective, it would be morally better to use those resources to treat patients who are less likely to willfully engage in behavior that will require them to be treated yet again. This is because the resources that would be consumed treating and retreating a person who keeps inflicting harm on themselves could be used to treat many people, thus doing greater good for the greater number. Though harsh and seemingly merciless, this approach seems justifiable on grounds similar to the moral justification for triage.

Another approach, which is even harsher, is to focus on the fact that the addicts are giving themselves endocarditis and sometimes doing so repeatedly. This provides the basis for two arguments against public funding of their treatment.

One argument can be built around the idea that there is no moral obligation to help people when their harm is self-inflicted. To use an analogy, if a person insists on setting fire to their house and it burns down, no one has a moral responsibility to pay to have their house rebuilt. Since the addict’s woes are self-inflicted, there is no moral obligation on the part of others to pay for their surgery and forcing people to do so (by using public money) would be like forcing others to pay to rebuild the burned house.

One way to counter this is to point out that many health issues are self-inflicted through a lack of positive behavior (such as exercise and a good diet) and an abundance of negative behavior (such as smoking, drinking, or having unprotected sex). If this principle is applied to addicts, consistency requires applying it to all cases of self-inflicted harm. While some might take this as a refutation of the view, others might accept it as reasonable and as warranting a state-of-nature approach to medicine in which everyone is on their own.

Another argument can be built around the idea that while there could be an obligation to help people, this obligation is limited. In this case, if a person is treated and knowingly returns to the same harmful behavior, then there is no obligation to keep treating the person. In the case of the drug addict, it could be accepted that the first surgery should be covered and that they should be educated on what will happen if they persist in their harmful behavior. If they then persist in that behavior and need the surgery again, then public money should not be used. To use an analogy, if a child swings their ice cream cone around and is surprised when the scoops hit the ground, then it would be reasonable for the parents to buy the child another cone. If the child swings the new cone around and the scoops hit the ground, then the child can be justly denied another cone.

An obvious counter is to contend that addicts are addicted and hence cannot be blamed for returning to the behavior that caused the harm. They are not morally responsible because they cannot do otherwise. This does have some appeal but would seem to justify requiring addicts to also undergo treatment for their addiction and to agree to monitoring of their behavior. They should be free to refuse this (which, ironically, assumes they are capable of free choice), but this should result in their being denied a second surgery if their behavior results in the same harm. Holding people accountable does seem cruel, but it could be argued that the alternative is unfair to other citizens. It would be like requiring them to keep rebuilding houses for a person who persists in setting fires in their house and refuses to take steps to stop doing this.

These arguments can be countered by arguing that there is an obligation to provide such care regardless of how many times an addict returns to the behavior that caused the need for the surgery. One approach would be to build an analogy based on how the state repeatedly bails out big businesses every time they burn down the economy. Another approach would be to appeal to the value of human life and contend that it must be preserved regardless of the cost and regardless of the reason why there is a need for medical care. This approach could be noble or, perhaps, foolish.

 

While industrial robots have been in service for a while, household robots have largely been limited to floor cleaning machines like the Roomba. But Physical Intelligence has built a robot that seems capable of doing some household tasks such as folding clothes. While viable commercial products lie in the future, the dangers of household robots should be considered now. I will skip over the usual fear of the robot rebellion in which the machines turn against humans and focus on more likely dangers.

Like a PC or phone, a household robot runs the risk of software errors, glitches and other problems. While having an app crash on your phone or PC can be annoying, this usually does not put you at risk of physical harm. However, a malfunctioning household robot can be a danger. A viable household robot needs to be strong enough to engage in tasks such as cleaning, folding laundry, and moving objects. This entails that the robot will be strong enough to harm humans and pets. If a robot has a software or hardware issue that interferes with its ability to recognize objects and living creatures, it might try to fold a baby’s clothing while the baby is wearing them or mistake a sleeping cat for clothing or trash and put them in the washing machine or garbage can. Even more concerning is a robot designed to prepare food that misidentifies, for example, a human or pet as the meat to be sliced up and cooked for dinner.

Even laying aside such errors, a home can be a complicated place for a robot to operate in, as there will usually be multiple rooms, different types of furniture, different appliances, as well as various people and pets. This means that a household robot could easily become a hazard (or just useless) simply because of an inability to handle such a complicated and changing environment.

To be fair, these challenges can be addressed in various ways. One option is to limit robots to specific tasks and narrow areas of operation. This might require multiple robots in a home, each assigned to a specific area and set of tasks. For example, a knife-wielding kitchen robot might have a fixed location in the kitchen and only be able to slice foods placed within a special box. As another example, a laundry robot might be confined to a laundry room. Another way to reduce risk is through programming and hardware safeguards. For example, pets and humans might wear devices that provide household robots with their exact location so the robots can avoid them. This way the robot would not need to depend on visually distinguishing, for example, a cat from a sweater. While things could still go wrong (the ID tag might fail or fall off your cat’s collar), people are generally willing to accept some risk of injury and death for convenience. After all, any electrical appliance in your home can probably kill you and driving anywhere comes with the risk of injury or death. In addition to concerns about accidental injuries, there is also the threat of intentionally caused harm.
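To make the safeguard idea a bit more concrete, here is a minimal sketch in Python of the sort of proximity check a household robot might run before acting near a tagged pet or person. The names, distances, and structure are my own illustrative assumptions, not any actual product’s software.

```python
from dataclasses import dataclass
from math import dist

# Illustrative safety margin in meters; a real system would tune this per task.
SAFE_DISTANCE_M = 1.5

@dataclass
class TaggedBeing:
    name: str
    position: tuple  # (x, y) in meters, as reported by a hypothetical wearable tag

def action_is_safe(robot_position, tagged_beings):
    """Return True only if no tagged pet or person is within the safety margin."""
    return all(dist(robot_position, being.position) > SAFE_DISTANCE_M
               for being in tagged_beings)

# Example: the robot pauses a task because the cat's tag reports it is too close.
cat = TaggedBeing("cat", (0.5, 0.2))
if not action_is_safe((0.0, 0.0), [cat]):
    print("Pausing task: tagged being within safety margin.")
```

Even this toy version shows why the failure mode mentioned above matters: if the tag falls off the cat’s collar, the robot’s list of tagged beings is wrong and the check passes when it should not.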

Household robots will almost certainly have online connections. On the one hand, this has many potential benefits such as being able to check in on your robots and taking manual control if, for example, one gets stuck in a corner. On the other hand, if you can access your robots online, that means that bad actors can do so as well, just as can happen today with any connected device. The critical difference is that a connected robot in your house means that a bad actor can gain a virtual physical presence in your home and use your robot in various ways.

It is certain that some people will take control of other people’s robots just for fun, to do various pranks such as having a robot move things around or make a small mess. But compromised robots could be used for a range of misdeeds, such as unlocking doors (although connected smart locks are obviously vulnerable), grabbing valuables and tossing them out windows, breaking things, and even attacking people and pets. This threat can be mitigated by good security practices, but the only two ways to avoid a compromised robot are to not have it connected or to not have one at all.

As with autonomous vehicles, household robots also raise legal concerns about liability. If, for example, your robot injures a guest, there is the question of who has legal responsibility. On the plus side, household robots will be good for some lawyers as this will create a new, profitable subfield of law.

In closing, while the idea of having household robots seems appealing, their presence would create a new set of dangers, especially if they are connected and can be compromised.

 

In utopian science fiction, machines free humans so they can enjoy a life of leisure and enlightenment. In dystopian stories, machines enslave or exterminate humans. Reality has been, on average, a middletopia: a mean between the worst possible world and the best possible world. But a good case can be made that reality is more of a dystopia-lite; a bad world, but better than a full dystopia. While people still dream of utopia, there are those who are working hard to push us further into dystopia.

On a positive note, robots have replaced humans in some jobs that are dirty, dull, or dangerous. In some cases, the displaced humans have moved on to better jobs. In other cases, they have moved into other dirty, dull or dangerous jobs to wait for the machines to replace those jobs. Machines have also replaced humans in jobs humans see as desirable, and AI companies are determined to continue that trend, having selected writing and art as prime targets. This leads to questions about what jobs will be left to humans and which will be taken over by the machines.

There was once the intuitively appealing view that “creative” jobs would be safe from machines, but physical labor would be easily taken over by machines. On this view, machines will replace jobs such as those held by warehouse pickers, construction workers and janitors. Artists, philosophers, and teachers were supposed to be safe from the machine revolution. In some cases, the intuitive view was correct. Machines are routinely used for physical labor such as constructing cars and robot Socrates has yet to show up. However, the intuitive view about creative tasks is under attack as AI is used in journalism, law, academics and image creation. There are also tasks that would seem easy to automate, such as cleaning toilets or doing construction, that are very hard for robots, but easy for humans.

An example of a task that would seem ideal for automation is warehouse picking, especially of the sort done by Amazon. Amazon and other companies have automated some of the process, making use of robots for various tasks. But humans are still a critical part of the picking process. Since humans tend to have poor memories and get bored with picking, human pickers are “remote controlled” by computers that tell them what to do, and they then tell the computers what they have done. For example, a human might be directed to pick five boxes of acne medicine, then five more boxes of acne medicine, then a copy of Fifty Shades of Grey and finally an Android phone. Humans are very good at picking and at dealing with things like a broken bottle of shampoo in a box, which robots still handle poorly.
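To illustrate the division of labor being described, here is a small, purely hypothetical sketch in Python of a dispatch-and-confirm loop in which the computer holds the pick list and the human only executes and reports each pick. The item names and functions are invented for illustration and do not depict any actual warehouse system.

```python
# Hypothetical pick list; in a real system this would come from an order database.
pick_list = [
    ("acne medicine", 5),
    ("acne medicine", 5),
    ("Fifty Shades of Grey", 1),
    ("Android phone", 1),
]

def direct_picker(picks):
    """The computer issues each instruction; the human confirms or flags a problem."""
    for item, quantity in picks:
        print(f"Pick {quantity} x {item}")   # the machine directs the human
        done = input("Done? (y/n) ")          # the human reports back to the machine
        if done.strip().lower() != "y":
            print(f"Flagging {item} for manual handling (e.g., a broken bottle).")

direct_picker(pick_list)
```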

In this sort of warehouse, the humans are being controlled by the machines. The machines take care of the higher-level activities of organizing orders and managing, while the human brain handles the task of selecting the right items and dealing with some tasks the machines cannot handle. While selecting seems simple, this is because it is simple for humans but not for existing robots. We are good at recognizing, grouping and distinguishing things and have the manual dexterity to perform the picking tasks, thanks to our opposable thumbs. Unfortunately for the human worker, these picking tasks are probably not very rewarding, creative or interesting and this is exactly the sort of drudge job that robots are supposed to free us from.

While computer-controlled warehouse work is one example of humans being directed by machines, it is easy to imagine this approach applied to tasks that require manual dexterity and what might be called “animal skills” such as object recognition. It is also easy to imagine this approach extended far beyond these jobs as a cost-cutting measure.

One way this approach could cut costs would be by allowing employers to buy “skilled” AI systems and use them to direct unskilled human labor. For simple jobs, a human might be directed via a headset linked to the AI that tells the human what to do, providing the “intelligence” guiding the body. For more complex jobs, a human might wear a VR style helmet with a machine directing the human via augmented reality. For example, an unskilled human could be walked through electrical or plumbing work by an AI. It should be noted that this technology could also be useful for people doing DIY projects and someday a person might be able to rent skills (via AI) as they now rent tools. But this could also impact the labor market, especially if almost anyone could use the technology effectively.

In this system, humans would provide the manual dexterity and all those highly evolved physical capacities. The AI would provide the direction, skill and “intelligence.” Since any adequately functional human body would suffice to serve the controlling AI, the value of such human labor would be low, and wages would match this value. Workers would be easy to replace because if a worker is fired or quits, then a new worker can simply don the interface device and get about the task with little training. This would also save on education costs, as AI-directed laborers would not need much education in job skills since these would be provided by the AI. Humans would just need the basic skills allowing them to be directed properly by the AI. This does point towards a dystopia in which human bodies are driven around through the workday by AI, then released and sent home in driverless cars. One could even imagine this technology being used in education: a human body providing an in-person presence while an AI directs the teaching process.

The employment of humans in these roles would only continue if humans were the cheapest form of available labor. If advances allow robot bodies to do these tasks cheaper, then it would make business sense to replace humans completely.  Alternatively, biological engineering might lead to the production of cost-effective engineered life forms that can replace humans; perhaps a pliable primate that is just smart enough to be directed by the AI. But not human enough to be considered a slave. Or, to go deeper into dystopia, perhaps a cyborg will be built that has hardware in place of the higher parts of the brain and thus serves as a meat robot driven around the job by the AI that is using the evolved biological features that cannot be replicated cost-effectively by machinery. While such things remain science fiction, now is the time to start considering the laws and policies that should govern remote controlled humans in the workplace.

 

One of the many fears about AI is that it will be weaponized by political candidates. In a proactive move, some states have already created laws regulating its use. Michigan has a law aimed at the deceptive use of AI that requires a disclaimer when a political ad is “manipulated by technical means and depicts speech or conduct that did not occur.” My adopted state of Florida has a similar law requiring a disclaimer for political ads that use generative AI. While the effect of disclaimers on elections remains to be seen, a study by New York University’s Center on Technology Policy found that research subjects saw candidates who used such disclaimers as “less trustworthy and less appealing.”

The subjects watched fictional political ads, some of which had AI disclaimers, and then rated the fictional candidates on trustworthiness, truthfulness and how likely they were to vote for them. The study showed that the disclaimers had a small but statistically significant negative impact on the perception of these fictional candidates. This occurred whether the AI use was deceptive or more harmless. The study subjects also expressed a preference for using disclaimers anytime AI was used in an ad, even when the use was harmless, and this held across party lines. As attack ads are a common strategy, it is interesting that the study found that such ads with an AI disclaimer backfired, and the study subjects evaluated the target as more trustworthy and appealing than the attacker.

If the study results hold for real ads, these findings might serve to deter the use of AI in political ads, especially attack ads. But it is worth noting that the study did not involve ads featuring actual candidates. Out in the wild, voters tend to be tolerant of lies or even like them when the lies support their political beliefs. If the disclaimer is seen as stating or implying that the ad contains untruths, it is likely that the negative impact of the disclaimer would be less or even nonexistent for certain candidates or messages. This is something that will need to be assessed in the wild.

The findings also suggest a diabolical strategy in which an attack ad with the AI disclaimer is created to target the candidate the creators support. These supporters would need to take care to conceal their connection to the candidate, but this is easy in the current dark money reality of American politics. They would, of course, need to calculate the risk that the ad might work better as an attack ad than a backfire ad. Speaking of diabolical, it might be wondered why there are disclaimer laws rather than bans.

The Florida law requires a disclaimer when AI is used to “depict a real person performing an action that did not actually occur, and was created with the intent to injure a candidate or to deceive regarding a ballot issue.” A possible example of such use is a 2023 ad by DeSantis’s campaign falsely depicting Trump embracing Fauci. It is noteworthy that the wording of the law entails that the intentional use of AI to harm and deceive in political advertising is allowed but merely requires a disclaimer. That is, an ad is allowed to lie, but with a disclaimer. This might strike many as odd, but it follows established law.

As Tom Wheeler, the former head of the FCC under Obama, notes, lies are allowed in political ads on federally regulated broadcast channels. As would be suspected, the arguments used to defend allowing lies in political ads are based on the First Amendment. This “right to lie” provides some explanation as to why these laws do not ban the use of AI. It might be wondered why there is not a more general law requiring a disclaimer for all intentional deceptions in political ads. A practical reason is that it is currently much easier to prove the use of AI than it is to prove intentional deception in general. That said, the Florida law specifies intent and the use of AI to depict something that did not occur, and proving both presents a challenge, especially since people can legally lie in their ads and insist the depiction is of something real.

Cable TV channels, such as CNN, can reject ads. In some cases, stations can reject ads from non-candidate outside groups, such as super PACs. Social media companies, such as X and Facebook, have considerable freedom in what they can reject. Those defending this right of rejection point out the oft-forgotten fact that the First Amendment legal right applies to the actions of the government and not private businesses, such as CNN and Facebook. Broadcast TV, as noted above, is an exception to this. The companies that run political ads will need to develop their own AI policies while also following the relevant laws.

While some might think that a complete ban on AI would be best, the AI hype has made this a bad idea. This is because companies have rushed to include AI in as many products as possible and to rebrand existing technologies as AI. For example, the text of an ad might be written in Microsoft Word with Grammarly installed and Grammarly is pitching itself as providing AI writing assistance. Programs like Adobe Illustrator and Photoshop also have AI features that have innocuous uses, such as automating the process of improving the quality of a real image or creating a background pattern that might be used in a print ad.  It would obviously be absurd to require a disclaimer for such uses of AI.

For more on cyber policy issues: Hewlett Foundation Cyber Policy Institute (famu.edu)